From nginx-forum at forum.nginx.org Thu Sep 1 01:17:32 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Wed, 31 Aug 2016 21:17:32 -0400
Subject: Nginx multiple upstream map conditions
In-Reply-To: <280fd0b1018a1e7f7c2fa77767e84c52.NginxMailingListEnglish@forum.nginx.org>
References: <20160831215722.GZ12280@daoine.org> <280fd0b1018a1e7f7c2fa77767e84c52.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <902e81330ae127972f7ea733eb282bdf.NginxMailingListEnglish@forum.nginx.org>

c0nw0nk Wrote:
-------------------------------------------------------
> Francis Daly Wrote:
> -------------------------------------------------------
> > On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote:
> >
> > Hi there,
> >
> > > Thanks, works a treat. Is it possible or allowed to do the following
> > > in an nginx upstream map? And if so, how? I can't figure it out.
> >
> > I think it is logically impossible.
> >
> > > I cache with the following key.
> > > fastcgi_cache_key
> > > "$session_id_value$scheme$host$request_uri$request_method";
> >
> > fastcgi_cache_key is the thing that nginx calculates from the request,
> > before it decides whether to send the response from cache, or whether
> > to pass the request to upstream.
> >
> > > if the upstream_cookie_logged_in value is not equal to 1 how can I set
> > > $session_id_value ''; make empty
> >
> > $upstream_cookie_something is part of the response from upstream,
> > so is not available to nginx at the time that it is calculating
> > fastcgi_cache_key for the "read from cache or not" decision.
> >
> > Am I missing something?
> >
> > f
> > --
> > Francis Daly francis at daoine.org
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> Thanks :) so changes to that value will have no effect.
>
> What about the following scenario:
>
> I remove all Set-Cookie headers.
> fastcgi_hide_header Set-Cookie;
>
> Then add them back in with:
> add_header Set-Cookie "$upstream_http_set_cookie";
>
> Will requests that get a cache hit ever contain a Set-Cookie header, or
> is it only the ones that reach the origin PHP server?
>
> From my tests it appears to be working: no Set-Cookie headers are
> present on "X-Cache-Status: HIT" responses.

With:

fastcgi_hide_header Set-Cookie;

I think I should allow myself to add Set-Cookie headers to fresh origin requests like this:

map $upstream_cache_status $upstream_value_status {
    ~MISS $upstream_http_set_cookie;
    ~BYPASS $upstream_http_set_cookie;
    ~EXPIRED $upstream_http_set_cookie;
}

add_header Set-Cookie $upstream_value_status;

I have not tested this yet though.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269330#msg-269330

From gwenoleg at alinket.com Thu Sep 1 06:58:50 2016
From: gwenoleg at alinket.com (Gwenole Gendrot)
Date: Thu, 1 Sep 2016 14:58:50 +0800
Subject: UDP load balancing - 1 new socket for each incoming packet?
Message-ID: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>

Hi,

I've been using nginx 1.11.3 to test the UDP load balancing feature, using a "basic" configuration. The functionality is working out of the box, but a new socket will be created by the proxy for each packet sent from a client (to the same connection). This leads to resource exhaustion under heavy load (even with only 1 client / 1 server).

My question: is it the intended behaviour to open a new socket for each incoming packet?
- if no => is this a bug? some misconfiguration on my part (either in nginx or Linux)? has anyone observed this behaviour?
- if yes => is reusing the socket for the same connection a missing feature / future improvement?

Tx!
Gwn

P.S.: my current workaround is to set the proxy timeout to a very low value and increase the maximum number of concurrent connections & opened files/sockets.

P.P.S: Logs were empty of warnings & errors.
My configuration (nothing fancy; pretty much all the system & SW are from a fresh install) is attached.

BR,
Gwenole Gendrot
156 1835 3270

-------------- next part --------------
$ uname -a
Linux AiDMS 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$
$ nginx -v
nginx version: nginx/1.11.3
$
$ cat /etc/nginx/streams-available/udp_balancing_test.conf
stream {
    upstream udp_cluster {
        # hash $remote_addr consistent;
        hash $remote_addr;
        server 127.0.0.1:17000;
        server 127.0.0.1:17001;
        server 127.0.0.1:17002;
        server 127.0.0.1:17003;
    }

    server {
        # listen 0.0.0.0:16583 udp reuseport;
        listen 0.0.0.0:16583 udp;

        # UDP traffic will be proxied to the "udp_cluster" upstream group
        proxy_pass udp_cluster;
        # proxy_buffer_size 1024k;
        proxy_timeout 5s;
    }
}
$
$ cat /etc/nginx/nginx.conf
user nginx;
# worker_processes 1;
worker_processes auto;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 8192;
    # use epoll;
    # multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # remove default http configuration
    #include /etc/nginx/conf.d/*.conf;
}

include streams-enabled/*.conf;

From arut at nginx.com Thu Sep 1 08:02:09 2016
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 1 Sep 2016 11:02:09 +0300
Subject: UDP load balancing - 1 new socket for each incoming packet?
In-Reply-To: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>
References: <53938119-d29a-4dec-9b94-816be78a59d8@alinket.com>
Message-ID: <20160901080209.GZ55147@Romans-MacBook-Air.local>

Hello,

On Thu, Sep 01, 2016 at 02:58:50PM +0800, Gwenole Gendrot wrote:
> Hi,
>
> I've been using nginx 1.11.3 to test the UDP load balancing feature,
> using a "basic" configuration.
> The functionality is working out of the box, but a new socket will be
> created by the proxy for each packet sent from a client (to the same
> connection). This leads to resource exhaustion under heavy load (even
> with only 1 client / 1 server).
>
> My question: is it the intended behaviour to open a new socket for each
> incoming packet?

Yes, a new socket is created for an incoming UDP datagram to proxy it to
the upstream server and to proxy the response datagram(s) back to the
client.

> - if no => is this a bug? some misconfiguration from my part (either in
> nginx or Linux)? has anyone observed this behaviour?
> - if yes => is reusing the socket for the same connection a missing
> feature / future improvement?

Datagrams sent from the same client are not considered part of a single
connection. In fact, they can even be received by different nginx
workers. And yes, this is a subject for future improvement.

> Tx!
> Gwn
>
> P.S.: my current workaround is to set the proxy timeout to a very low
> value and increase the maximum number of concurrent connections &
> opened files/sockets.

If you know in advance how many datagrams you are expecting in response
to a single client datagram, you can use the proxy_responses directive
to set it. In this case nginx will close the session (and release the
socket) once the required number of datagrams is sent back to the
client.

> P.P.S: Logs were empty of warnings & errors. My configuration (nothing
> fancy, pretty much all the system & SW are from a fresh install) as
> attachment.
>
> BR,
>
> Gwenole Gendrot
> 156 1835 3270

[..]
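For reference, a minimal stream configuration using proxy_responses might look like this (ports and upstream addresses are illustrative, and it assumes the protocol sends exactly one response datagram per client datagram):

```nginx
stream {
    upstream udp_cluster {
        server 127.0.0.1:17000;
        server 127.0.0.1:17001;
    }

    server {
        listen 16583 udp;
        proxy_pass udp_cluster;
        # Close the UDP session (and release its socket) as soon as
        # one response datagram has been proxied back to the client.
        proxy_responses 1;
        proxy_timeout 5s;
    }
}
```

A value of 0 should suit fire-and-forget protocols where no reply is expected at all.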
--
Roman Arutyunyan

From brentgclarklist at gmail.com Thu Sep 1 08:43:55 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Thu, 1 Sep 2016 10:43:55 +0200
Subject: Help understanding rate limiting log entry
Message-ID: 

Good day Guys

I just implemented rate limiting.

Could someone please explain what

*42450 is / means

109154#109154 is / means

and what also

10.195 (I take it 10 is the size of my bucket, but it's the 195 I don't understand)

Here is the log entry:

2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests, excess: 10.195 by zone "req_limit_per_ip", client: 54.237.120.210, server: default, request: "GET

Many thanks

Brent

From lukasz at tasz.eu Thu Sep 1 11:34:39 2016
From: lukasz at tasz.eu (=?UTF-8?B?xYF1a2FzeiBUYXN6?=)
Date: Thu, 1 Sep 2016 13:34:39 +0200
Subject: nginx with caching
Message-ID: 

Hi all,

For some time I have been using nginx as a reverse proxy with caching for serving image files. It looks pretty good, since a proxy is located per each location.

But I noticed problematic behaviour: when the cache is empty and a lot of requests pop up at the same time, nginx does not understand that all the requests are the same. Instead of fetching from upstream only once and serving the result to the rest, it hands all the requests over to upstream.

Side effects?
- the upstream server limits the rate, since there are too many connections from one client,
- in some cases there are issues with temp - not enough space to finish all requests.

Any ideas? Is it a known problem?

I know that the problem can be solved by warming up caches, but since there are a lot of locations, I would like to keep it transparent.

regards
Łukasz Tasz
RTKW

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Thu Sep 1 13:25:19 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Sep 2016 16:25:19 +0300
Subject: Help understanding rate limiting log entry
In-Reply-To: 
References: 
Message-ID: <20160901132519.GU1855@mdounin.ru>

Hello!
On Thu, Sep 01, 2016 at 10:43:55AM +0200, Brent Clark wrote:

> I just implemented rate limiting.
>
> Could someone please explain what
>
> *42450 is / means

This is the connection number, also available as $connection.

> 109154#109154 is / means

This is the nginx worker PID (also available as $pid) and the thread identifier.

> and what also
>
> 10.195 (I take it 10 is the size of my bucket, but it's the 195 I don't
> understand)

This is the number of requests accumulated in the bucket. If this number
is more than the burst defined (10 in your case), further requests will
be rejected.

The number of requests in the bucket is reduced according to the rate
defined and the current time, and may not be an integer. The ".195"
means that an additional request will be allowed in about 195
milliseconds, assuming a rate of 1r/s.

> Here is the log entry:
>
> 2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests, excess:
> 10.195 by zone "req_limit_per_ip", client: 54.237.120.210, server: default,
> request: "GET

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Thu Sep 1 13:31:41 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Sep 2016 16:31:41 +0300
Subject: nginx with caching
In-Reply-To: 
References: 
Message-ID: <20160901133141.GV1855@mdounin.ru>

Hello!

On Thu, Sep 01, 2016 at 01:34:39PM +0200, Łukasz Tasz wrote:

> Hi all,
> since some time I'm using nginx as reverse proxy with caching for
> serving image files.
> looks pretty good since proxy is located per each location.
>
> but I noticed problematic behaviour, when cache is empty, and there
> will pop up a lot of requests at the same time, nginx doesn't
> understand that all requests are the same, and will fetch from
> upstream only once and serve it to the rest, but all requests are
> handed over to upstream.
> side effects?
> - upstream server limits rate since there are too many connections to
> one client,
> - in some cases there are issues with temp - not enough space to
> finish all requests
>
> any ideas?
> is it known problem?
>
> I know that problem can be solved with warming up caches, but since
> there is a lot of locations, I would like to keep it transparent.

There is the proxy_cache_lock directive to address such use cases, see
http://nginx.org/r/proxy_cache_lock.

Additionally, for updating cache items there is
"proxy_cache_use_stale updating", see
http://nginx.org/r/proxy_cache_use_stale.

--
Maxim Dounin
http://nginx.org/

From brentgclarklist at gmail.com Thu Sep 1 14:01:53 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Thu, 1 Sep 2016 16:01:53 +0200
Subject: Help understanding rate limiting log entry
In-Reply-To: <20160901132519.GU1855@mdounin.ru>
References: <20160901132519.GU1855@mdounin.ru>
Message-ID: <755557b8-4573-4eb1-9ec8-e81d881ae598@gmail.com>

Good day Maxim

Thank you for taking the time to explain.

Regards
Brent Clark

On 01/09/2016 15:25, Maxim Dounin wrote:
> Hello!
>
> On Thu, Sep 01, 2016 at 10:43:55AM +0200, Brent Clark wrote:
>
>> I just implemented rate limiting.
>>
>> Could someone please explain what
>>
>> *42450 is / means
> This is a connection number, also available as $connection.
>
>> 109154#109154 is / means
> This is nginx worker PID (also available as $pid) and thread identifier.
>
>> and what also
>>
>> 10.195 (I take it 10 is the size of my bucket, but it's the 195 I don't
>> understand)
> This is the number of requests accumulated in the bucket. If this
> number is more than the burst defined (10 in your case), further
> requests will be rejected.
>
> The number of requests in the bucket is reduced according to the rate
> defined and current time, and may not be an integer. The ".195"
> means that an additional request will be allowed in about 195
> milliseconds assuming rate 1r/s.
>
>> Here is the log entry:
>>
>> 2016/09/01 10:06:29 [error] 109154#109154: *42450 limiting requests, excess:
>> 10.195 by zone "req_limit_per_ip", client: 54.237.120.210, server: default,
>> request: "GET

From lukasz at tasz.eu Thu Sep 1 14:06:07 2016
From: lukasz at tasz.eu (=?UTF-8?B?xYF1a2FzeiBUYXN6?=)
Date: Thu, 1 Sep 2016 16:06:07 +0200
Subject: nginx with caching
In-Reply-To: <20160901133141.GV1855@mdounin.ru>
References: <20160901133141.GV1855@mdounin.ru>
Message-ID: 

Looks like just what I'm looking for! Thanks a lot, starting my tests.

br
L.

Łukasz Tasz
RTKW

2016-09-01 15:31 GMT+02:00 Maxim Dounin :

> Hello!
>
> On Thu, Sep 01, 2016 at 01:34:39PM +0200, Łukasz Tasz wrote:
>
> > Hi all,
> > since some time I'm using nginx as reverse proxy with caching for
> > serving image files.
> > looks pretty good since proxy is located per each location.
> >
> > but I noticed problematic behaviour, when cache is empty, and there
> > will pop up a lot of requests at the same time, nginx doesn't
> > understand that all requests are the same, and will fetch from
> > upstream only once and serve it to the rest, but all requests are
> > handed over to upstream.
> > side effects?
> > - upstream server limits rate since there are too many connections to
> > one client,
> > - in some cases there are issues with temp - not enough space to
> > finish all requests
> >
> > any ideas?
> > is it known problem?
> >
> > I know that problem can be solved with warming up caches, but since
> > there is a lot of locations, I would like to keep it transparent.
>
> There is the proxy_cache_lock directive to address such use cases,
> see http://nginx.org/r/proxy_cache_lock.
>
> Additionally, for updating cache items there is
> "proxy_cache_use_stale updating", see
> http://nginx.org/r/proxy_cache_use_stale.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Fri Sep 2 02:47:54 2016
From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad)
Date: Thu, 01 Sep 2016 22:47:54 -0400
Subject: how to completely disable request body buffering
In-Reply-To: 
References: 
Message-ID: 

Hi B.R

Please find the nginx configuration we are using below; any help would be greatly appreciated.

nginx -V
=================
nginx version: nginx/1.8.0
built with OpenSSL 1.0.2h-fips 3 May 2016
TLS SNI support enabled
configure arguments: --crossbuild=Linux::arm --with-cc=arm-linux-gnueabihf-gcc --with-cpp=arm-linux-gnueabihf-gcc --with-cc-opt='-pipe -Os -gdwarf-4 -mfpu=neon --sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot' --with-ld-opt=--sysroot=/work/autobuild/project_hub_release/nginx/service/001.1635A/sol_aux_build/sbq_sysroot --prefix=/usr --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/tmp/nginx/client-body --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-scgi-temp-path=/var/tmp/nginx/scgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --user=www-data --group=www-data --with-ipv6 --with-http_ssl_module --with-http_gzip_static_module --with-debug

nginx.conf
=================
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        listen [::]:80;
        listen 8080;
        listen [::]:8080;
        listen 127.0.0.1:14200; # usb port
        listen 443 ssl;
        listen [::]:443 ssl;
        listen 127.0.0.1:14199; # for internal LEDM requests to bypass authentication check
        listen 127.0.0.1:6015; # websocket internal port to talk to nginx.

        server_name localhost;

        include /project/ram/secutils/*.conf;
        include /project/rom/httpmgr_nginx/*.conf;

        fastcgi_param PATH_INFO $fastcgi_path_info;
        include fastcgi_params;

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

proj_server.conf:
=================
server {
    listen [::]:5678 ssl ipv6only=off;

    ssl_certificate /project/rw/cert_svc/dev_cert.pem;
    ssl_certificate_key /mnt/encfs/cert_svc/dev_key.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    gzip on;
    gzip_types *;
    gzip_min_length 0;

    # If the incoming request body is greater than client_max_body_size,
    # NGINX will return 413 request entity too large.
    # Setting to 0 will disable this size check.
    client_max_body_size 0;

    # By default, NGINX will try to buffer up the entire request body before
    # sending it to the backend server.
    # Turning it off should stop this behavior and pass the request on immediately.
    fastcgi_request_buffering off;

    # By default, NGINX will try to buffer up the entire response before
    # sending it to the client.
    # Turning it off should stop this behavior and pass the response on immediately.
    fastcgi_buffering off;

    # Default timeout is 60s and there is no way to disable the read timeout.
    # If a read has not been performed in the specified interval
    # a 504 response is sent from NGINX to the client.
    # This could happen if there is a flow stoppage in the upstream.
    fastcgi_read_timeout 7d;

    # Default timeout is 60s and there is no way to disable the send timeout.
    # If NGINX has not sent data to the FastCGI server in the specified interval
    # a 504 response is sent from NGINX to the client.
    # This could happen if there is a flow stoppage in the upstream.
    fastcgi_send_timeout 7d;

    # This server's listen directive says to use SSL on port 5678.
    # When HTTP requests come to an SSL port, NGINX throws a 497 HTTP Request Sent to HTTPS Port.
    # Since our requests will be HTTP on port 5678, NGINX will throw error code 497.
    # To fix this, when NGINX throws 497, we tell it to use the status code
    # from the upstream server.
    error_page 497 = $request_uri;

    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param HOST $host;
    include fastcgi_params;

    location = /path/to/resource1 {
        fastcgi_pass 127.0.0.1:14052;
    }

    location = /path/to/resource2 {
        fastcgi_pass 127.0.0.1:14052;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269196,269353#msg-269353

From brentgclarklist at gmail.com Fri Sep 2 11:43:00 2016
From: brentgclarklist at gmail.com (Brent Clark)
Date: Fri, 2 Sep 2016 13:43:00 +0200
Subject: Nginx to real time minifying
Message-ID: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>

Good day Guys

I heard companies like Cloudflare have an option for minifying on their proxies.

I would like to ask: is there such a feature for nginx? Is there a third party module?

Many thanks

Brent

From pablo.platt at gmail.com Fri Sep 2 11:51:01 2016
From: pablo.platt at gmail.com (pablo platt)
Date: Fri, 2 Sep 2016 14:51:01 +0300
Subject: Nginx to real time minifying
In-Reply-To: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com>
Message-ID: 

The gzip module compresses in realtime (uses the CPU):
http://nginx.org/en/docs/http/ngx_http_gzip_module.html

The gzip_static module uses existing compressed files:
http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html

On Fri, Sep 2, 2016 at 2:43 PM, Brent Clark wrote:

> Good day Guys
>
> I heard companies like cloudflare have an option for minifying on their
> proxies.
>
> I would like to ask, is there such a feature for nginx.
>
> Is there a third party module?
> > Many thanks > > Brent > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Fri Sep 2 12:14:55 2016 From: miguelmclara at gmail.com (Miguel C) Date: Fri, 2 Sep 2016 13:14:55 +0100 Subject: Nginx to real time minifying In-Reply-To: References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com> Message-ID: Maybe this: https://github.com/mrclay/minify/blob/2.x/README.md Note that I never used in in production, since I run mostly WP sites, plug-ins worked best so far. One awesome alternative is ngx-pagespeed it's a pity it's not supported on FreeBSD though but on Linux server pagespeed will handle that and much more and with the corrected configuration for your site (u might need to play with it for a while) it delivers the best results, and includes nice things like auto resizing for different view ports making things much faster on mobile :) -- Miguel Clara, Sent from Gmail Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablo.platt at gmail.com Fri Sep 2 12:19:49 2016 From: pablo.platt at gmail.com (pablo platt) Date: Fri, 2 Sep 2016 15:19:49 +0300 Subject: Nginx to real time minifying In-Reply-To: References: <7d2794dc-961a-4e92-6c2b-a29ed68489fa@gmail.com> Message-ID: There is also google pagespeed (didn't use it) https://developers.google.com/speed/pagespeed/module/ https://github.com/pagespeed/ngx_pagespeed On Fri, Sep 2, 2016 at 3:14 PM, Miguel C wrote: > Maybe this: https://github.com/mrclay/minify/blob/2.x/README.md > > Note that I never used in in production, since I run mostly WP sites, > plug-ins worked best so far. 
> > One awesome alternative is ngx-pagespeed it's a pity it's not supported on > FreeBSD though but on Linux server pagespeed will handle that and much more > and with the corrected configuration for your site (u might need to play > with it for a while) it delivers the best results, and includes nice things > like auto resizing for different view ports making things much faster on > mobile :) > > > > > -- > Miguel Clara, > Sent from Gmail Mobile > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Sep 2 12:49:13 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 02 Sep 2016 08:49:13 -0400 Subject: pcre.org down? Message-ID: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> Anyone any idea what happened to www.pcre.org ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269359#msg-269359 From anoopalias01 at gmail.com Fri Sep 2 15:04:16 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 2 Sep 2016 20:34:16 +0530 Subject: pcre.org down? In-Reply-To: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> Message-ID: ; Received 682 bytes from 192.228.79.201#53(b.root-servers.net) in 405 ms pcre.org. 86400 IN NS ns.figure1.net. pcre.org. 86400 IN NS monid01.nebcorp.com. pcre.org. 86400 IN NS meow.raye.com. pcre.org. 86400 IN NS koffing.ivysaur.com. h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN NSEC3 1 1 1 D399EAAB H9PARR669T6U8O1GSG9E1LMITK4DEM0T NS SOA RRSIG DNSKEY NSEC3PARAM h9p7u7tr2u91d0v0ljs9l1gidnp90u3h.org. 86400 IN RRSIG NSEC3 7 2 86400 20160923150233 20160902140233 48497 org. 
EBTmSR2rCyGj0HzJr5zL5uMIWD6K7inbPUctZ4iWRKfpQjOy02jW+ETu
psvQCa3dtWGGWUfTM820sMbsG7Uue3BX+/2Utrq0lB0XAcL/Z/p9Fwra
h2W8fKHOMyy+6TimoR45A7PnLwqLdLLhY03ISp9pcd7WTGJQ/V/0M5nO Ss8=
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN NSEC3 1 1 1 D399EAAB JNRF2EBH2M0FOJG163S5KVHSBO31O5RF NS DS RRSIG
jnqfik42o561r7a65jpdqln7gouvgjbs.org. 86400 IN RRSIG NSEC3 7 2 86400 20160923095353 20160902085353 48497 org.
Zt8KcXmYsykQQV1hnF3X012jXqorxh8Hj4X12HzQftD/U/CmH03x925I
rvRSY4wYXzlNaHyJ5vDTeYzAG9TIdxG66RDHeOwn3HRGqht2u14oc+sE
pNbYm/cE2ozbf4ohQ0VBT3ma5UInu6ATU9pkJ1nOldYW+LtmPY4/MYFJ DVs=
couldn't get address for 'monid01.nebcorp.com': failure
;; Received 645 bytes from 199.19.57.1#53(d0.org.afilias-nst.org) in 435 ms
;; Received 37 bytes from 66.93.34.236#53(ns.figure1.net) in 329 ms

On Fri, Sep 2, 2016 at 6:19 PM, itpp2012 wrote:
> Anyone any idea what happened to www.pcre.org ?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269359#msg-269359
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Anoop P Alias

From nginx-forum at forum.nginx.org Fri Sep 2 18:30:27 2016
From: nginx-forum at forum.nginx.org (erankor)
Date: Fri, 02 Sep 2016 14:30:27 -0400
Subject: Cancelling aio operations on Linux
Message-ID: <6a5409e671edb87db45a5e19da0e183f.NginxMailingListEnglish@forum.nginx.org>

Hi,

Recently while reloading/restarting nginx I've been getting errors such as:

2016/09/02 11:13:44 [alert] 16480#16480: *1234 open socket #123 left in connection 123

After setting `debug_points abort` and checking the core dump, I found that all requests were blocked on file aio (they had r->blocked and r->aio both set to 1).

I then looked at the nginx source and saw this comment:

/*
 * FreeBSD file AIO features and quirks:
 ....
 * aio_cancel() cannot cancel file AIO: it returns AIO_NOTCANCELED always.
 */

My question is - from your knowledge, does aio_cancel work correctly on Linux?
If so, can you provide some high-level guidance for implementing it?

Btw, it is clear that there is some problem with the storage that makes aio read operations hang forever, and cancelling them isn't the ideal solution, but it will at least prevent them from having a cumulative negative effect on the server.

Thank you!

Eran

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269366,269366#msg-269366

From nginx-forum at forum.nginx.org Sat Sep 3 13:09:19 2016
From: nginx-forum at forum.nginx.org (mastercan)
Date: Sat, 03 Sep 2016 09:09:19 -0400
Subject: Multi Certificate Support with OCSP not working right
Message-ID: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org>

Hello,

When using 2 certificates, 1 RSA (using AlphaSSL) and 1 ECDSA (using Let's Encrypt), and connecting via the RSA SSL connection, nginx throws this error:

"OCSP response not successful (6: unauthorized) while requesting certificate status, responder: ocsp.int-x3.letsencrypt.org"

So it is using the wrong responder.

Following build (custom compiled):
Nginx 1.11.3
OpenSSL 1.1.0

AFAIK OpenSSL 1.1.0 should support multiple certificate chains. I don't quite understand why OCSP is not working right here.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269371,269371#msg-269371

From wplatnick at gmail.com Sun Sep 4 00:06:09 2016
From: wplatnick at gmail.com (Will Platnick)
Date: Sun, 04 Sep 2016 00:06:09 +0000
Subject: Server very delayed in sending SYN/ACK
Message-ID: 

Hello,

I have run into a very interesting issue. I am replacing a set of nginx reverse proxy servers with a new set running an updated OS/nginx. These nginx servers front a popular API that's mostly used by mobile apps, but also a website that's hosted on a nearby subnet.
I put the new servers into service last night, and this morning as traffic picked up (only a couple thousand requests per second), I got alerts from my DNS provider that requests to the new server were starting to time out in the Connect phase. I hopped into New Relic, and I could see tons of requests from my website to the nginx reverse proxy timing out after they hit our limit of 10s. I did some curl requests with timing information, and I could see long times only in the time_connect level, confirming the issue was only in the connection phase. I hopped on the new nginx server and started a packet capture filtered to a machine on a nearby subnet, did the curl from there, got it taking 9+ seconds in the connect phase, stopped the packet capture, and moved the traffic over to my old setup. No issues over there.

Here's everything I know/think is relevant:

* In the packet capture from the server, I see the SYN packet come in, then 3 more retransmits of that same SYN come in before the server sent back the SYN/ACK. To me this indicates the issue is on the kernel or nginx side.

* There's absolutely no slowdown in the backends as measured from the same nginx server.

* There's nothing in the nginx error log.

* There's nothing from the kernel in dmesg when this is happening.

* NIC duplex is fine, no dropped queues from ethtool -S (but, again, it doesn't seem like a networking issue; we got the SYNs just fine, we just didn't send the SYN/ACK).

* I tried to artificially load test afterwards using ab and loader.io, doing 3x as many requests, but couldn't replicate the issue. I'm not sure if it's some weird issue due to misbehaving mobile clients and SSL filling up some sort of queue, but whatever it is, I can't replicate the issue on demand.

* Load on the box was fine (<4) and no crazy I/O.
* Keepalives were turned on.

* Some relevant sysctl values:

cat /proc/sys/net/core/somaxconn (backlog is set to the same in the nginx config)
16384

cat /proc/sys/net/core/netdev_max_backlog
15000

cat /proc/sys/net/ipv4/tcp_max_syn_backlog
262144

NGINX: 1.11.3
OS: Ubuntu 16.04.1 x64
Kernel: 4.4.0-36-generic

It seems to me the issue is at the kernel/app level, but I can't think of where to go from here. If anybody has any ideas for me to try, or if I've forgotten to mention something relevant, please let me know.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From krishna.ku at flipkart.com Sun Sep 4 02:49:51 2016
From: krishna.ku at flipkart.com (Krishna Kumar (Engineering))
Date: Sun, 4 Sep 2016 08:19:51 +0530
Subject: Server very delayed in sending SYN/ACK
In-Reply-To: 
References: 
Message-ID: 

Hi Will,

> * In the packet capture from the server, I see the SYN packet come in,
> then 3 more retransmits of that same SYN come in before the server sent
> back the SYN/ACK. To me this indicates the issue is on the kernel or
> nginx side.

Many times clients send multiple SYNs. You can run wireshark to check the timestamps. If the packets are very close (in milliseconds), that is normal; otherwise you have a problem on the server. nginx does not come into the picture during the TCP handshake: its job is done when nginx indicates that this socket is ready to accept connections using the listen() system call. Once the final ACK is done, the connection is ready, and if accept() is called, it will succeed (as in, it does not block). However, the client gets success on connect() at the time the TCP handshake finished, not when the application finished the accept() call.

Maybe attaching a tcpdump will be useful for someone to take a look at what is wrong. Are the initial packets being dropped at the kernel due to bad checksums? Do you have any iptables rules that might drop SYNs or rate limit them? Do you see retransmissions (netstat -s)?
Maybe you can run netstat -s before and after to see which counters increase and derive some clues from that? On Sun, Sep 4, 2016 at 5:36 AM, Will Platnick wrote: > Hello, > I have run into a very interesting issue. I am replacing a set of nginx > reverse proxy servers with a new set running an updated OS/nginx. These > nginx servers front a popular API that's mostly used by mobile apps, but > also a website that's hosted on a nearby subnet. I put the new servers into > service last night, and this morning as traffic picked up (only a couple > thousand requests per second), I got alerts from my DNS provider that > requests to the new server were starting to timeout in the Connect phase. > I hopped into New Relic, and I could see tons of requests from my website > to the nginx reverse proxy timing out after it hit our limit of 10s. I did > some curl requests with timing information, and I could see long times only > in the time_connect level, confirming the issue was only in the connection > phase. I hopped on the new nginx server and started a packet capture > filtered to a machine on a nearby subnet, did the curl from there, got it > taking a 9+ seconds in the connect phase, stopped the packet capture, and > moved the traffic over to my old setup. No issues over there. > > Here's everything I know/think is relevant: > > * In the packet capture from the server, I see the SYN packet come in, > then 3 more retransmits of that same syn come in before the server sent > back the SYN/ACK. To me this indicates the issue in kernel or nginx side. > > * There's absolutely no slowdown in the backends as measured from the same > nginx server. 
> > * There's nothing in the nginx error log > > * There's nothing from the kernel in dmesg when this is happening > > * NIC duplex is fine, no dropped queues from ethtool -S (but, again, it > doesn't seem like a networking issue, we got the SYNs just fine, we just > didn't send the syn/ack) > > * I tried to artificially load test afterwords using ab and loader.io, > doing 3x as many requests, but couldn't replicate the issue. I'm not sure > if it's some weird issue due to misbehaving mobile clients and SSL filling > up some sort of queue, but whatever it is, I can't replicate the issue on > demand. > > * Load on the box was fine (<4) and no crazy I/O. > > * Keepalives were turned on > > * Some relevant sysctl values: > > cat /proc/sys/net/core/somaxconn (backlog is set to the same in the nginx > config) > 16384 > > cat /proc/sys/net/core/netdev_max_backlog > 15000 > > cat /proc/sys/net/ipv4/tcp_max_syn_backlog > 262144 > > NGINX: 1.11.3 > OS: Ubuntu 16.04.1 x64 > Kernel: 4.4.0-36-generic > > It seems to me the issue is at the kernel/app level, but I can't think of > where to go from here. > > If anybody has any ideas for me try, or if I've forgotten to mention > something relevant, please let me know. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Sep 4 09:10:11 2016 From: nginx-forum at forum.nginx.org (squonk) Date: Sun, 04 Sep 2016 05:10:11 -0400 Subject: reverse proxy with TLS termination and DNS lookup Message-ID: <01759b35e5be8efcf63b74b9c4ef3f8d.NginxMailingListEnglish@forum.nginx.org> hi all.. I am trying to configure a reverse proxy which redirects a URL of the form: https://mydomain.com/myapp/abcd/... to: http://myapp:5100/abcd/... with DNS resolution of "myapp" to an IP address at runtime. 
My current configuration file is: server{ listen 80 default_server; server_name mydomain.com; return 301 https://www.mydomain.com$request_uri; } server{ listen 443 ssl default_server; server_name mydomain.com; resolver 123.4.5.6 valid=60s; # DNS name server.. 'nslookup myapp' does work set $app_upstream http://myapp:5100; location /myapp/ { rewrite ^/myapp/(.*) /$1 break; proxy_pass $app_upstream; } } When I try: https://mydomain.com/myapp/ it resolves to: http://myapp/ but the log shows that the port isn't appended. I would prefer it if the caller didn't have to know the port. I could iterate, but I don't have enough experience to say whether the overall approach is consistent with nginx best practice, and I need to proxy servers other than myapp, so any feedback would be appreciated. thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269375,269375#msg-269375 From nginx-forum at forum.nginx.org Sun Sep 4 10:49:35 2016 From: nginx-forum at forum.nginx.org (George) Date: Sun, 04 Sep 2016 06:49:35 -0400 Subject: pcre.org down? In-Reply-To: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> Message-ID: yeah, ran into the same problem, and it still seems to be down right now Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269379#msg-269379 From nginx-forum at forum.nginx.org Sun Sep 4 10:50:30 2016 From: nginx-forum at forum.nginx.org (NuLL3rr0r) Date: Sun, 04 Sep 2016 06:50:30 -0400 Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate? In-Reply-To: <20160829104910.GD1855@mdounin.ru> References: <20160829104910.GD1855@mdounin.ru> Message-ID: <69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org> Thank you Maxim for the answer, and sorry for my tardy response. I'm sure that's not the case, since I have a server block with a redirect to www. 
Here is the actual config: server { server_tokens off; listen 80; listen [::]:80; server_name learnmyway.net; location / { return 301 https://www.$server_name$request_uri; # enforce https / www } # Error Pages include /path/to/snippets/error; # Anti-DDoS include /path/to/snippets/anti-ddos; # letsencrypt acme challenges include /path/to/snippets/letsencrypt-acme-challenge; } server { server_tokens off; listen 80; listen [::]:80; server_name *.learnmyway.net; location / { return 301 https://$host$request_uri; # enforce https } # Error Pages include /path/to/snippets/error; # Anti-DDoS include /path/to/snippets/anti-ddos; # letsencrypt acme challenges include /path/to/snippets/letsencrypt-acme-challenge; } server { server_tokens off; listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.learnmyway.net; # Hardened SSL include /path/to/snippets/hardened-ssl; ssl_certificate /path/to/certs/learnmyway.net.pem; ssl_certificate_key /path/to/keys/learnmyway.net.pem; ssl_trusted_certificate /path/to/certs/learnmyway.net.pem; #error_log /path/to/learnmyway.net/log/www_error_log; #access_log /path/to/learnmyway.net/log/www_access_log; root /path/to/learnmyway.net/www/; index index.html; # Error Pages include /path/to/snippets/error; # Anti-DDoS include /path/to/snippets/anti-ddos; # letsencrypt acme challenges include /path/to/snippets/letsencrypt-acme-challenge; # Compression include /path/to/snippets/compression; # Static Resource Caching include /path/to/snippets/static-resource-caching; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269263,269380#msg-269380 From nginx-forum at forum.nginx.org Sun Sep 4 11:07:28 2016 From: nginx-forum at forum.nginx.org (NuLL3rr0r) Date: Sun, 04 Sep 2016 07:07:28 -0400 Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate? 
In-Reply-To: <69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org> References: <20160829104910.GD1855@mdounin.ru> <69501fc7dcd85ebd69554d5234a78b89.NginxMailingListEnglish@forum.nginx.org> Message-ID: <172e70cce4fe587e53ca39fcf9a86547.NginxMailingListEnglish@forum.nginx.org> Oops! Thank you so much, Maxim. You are right! Reading your response again, I just figured it out. Adding the following block solved the issue: server { server_tokens off; listen 443 ssl http2; listen [::]:443 ssl http2; server_name learnmyway.net; # Hardened SSL include /path/to/snippets/hardened-ssl; ssl_certificate /path/to/certs/learnmyway.net.pem; ssl_certificate_key /path/to/keys/learnmyway.net.pem; ssl_trusted_certificate /path/to/certs/learnmyway.net.pem; return 301 https://www.$server_name$request_uri; # enforce www } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269263,269381#msg-269381 From brentgclarklist at gmail.com Tue Sep 6 06:46:25 2016 From: brentgclarklist at gmail.com (Brent Clark) Date: Tue, 6 Sep 2016 08:46:25 +0200 Subject: curl -I says X-Cache-Status: MISS Message-ID: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com> Good day guys, I'm trying to get to grips with and understand caching. So I can see nginx caching wonderfully (all in all everything is working), but for my own understanding I am peeking under the hood. 
I just decided to see what is inside one of the cache files, and one of the things that got my attention was cat /storage/imgcache/2/05/f8484e99c2d4e7659020a0fa96a22052 KEY: httpGETdomain.com/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3 So here now is my question: if I do bclark at bclark:~$ curl -I http://domain/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3 HTTP/1.1 200 OK Server: nginx Date: Tue, 06 Sep 2016 06:34:23 GMT Content-Type: image/jpeg Content-Length: 9052 Connection: keep-alive Last-Modified: Mon, 05 Sep 2016 15:22:03 GMT ETag: "235c-53bc43e026b87" Cache-Control: max-age=31536000, public Expires: Wed, 06 Sep 2017 06:34:23 GMT Vary: User-Agent Pragma: public X-Powered-By: W3 Total Cache/0.9.4.1 X-Cache-Status: MISS Accept-Ranges: bytes See the X-Cache-Status. Does anyone know why it says MISS? If I run the same curl command again, it says HIT. Many thanks Brent From medvedev.yp at gmail.com Tue Sep 6 06:58:39 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Tue, 06 Sep 2016 09:58:39 +0300 Subject: curl -I says X-Cache-Status: MISS References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com> Message-ID: A MISS status means the file should be cached on the next request; it's normal for a file that is not yet in the cache. Sent from my ASUS -------- Original message -------- From: Brent Clark Sent: Tue, 06 Sep 2016 09:46:25 +0300 To: nginx at nginx.org Subject: curl -I says X-Cache-Status: MISS >Good day Guys > >Im trying to get to grips and understand caching. > >So I can see nginx caching wonderfully (all in all everything is >working), but for my own understanding and peeking under the hood. 
> >I just decided to see what inside one of the cache files and one of the >things that got my attention was > >cat /storage/imgcache/2/05/f8484e99c2d4e7659020a0fa96a22052 > >KEY: >httpGETdomain.com/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3 > > >So here now is my question, if I do : > >bclark at bclark:~$ curl -I >http://domain/wp-content/uploads/sites/18/2016/09/B15J5GB3364-168x112.jpg?5152e3 >HTTP/1.1 200 OK >Server: nginx >Date: Tue, 06 Sep 2016 06:34:23 GMT >Content-Type: image/jpeg >Content-Length: 9052 >Connection: keep-alive >Last-Modified: Mon, 05 Sep 2016 15:22:03 GMT >ETag: "235c-53bc43e026b87" >Cache-Control: max-age=31536000, public >Expires: Wed, 06 Sep 2017 06:34:23 GMT >Vary: User-Agent >Pragma: public >X-Powered-By: W3 Total Cache/0.9.4.1 >X-Cache-Status: MISS >Accept-Ranges: bytes > >See the X-Cache-Status. > >Does anyone know why it says MISS? > >If I run the same curl command again, it says HIT. > >Many thanks > >Brent > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Sep 6 07:46:41 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 06 Sep 2016 03:46:41 -0400 Subject: curl -I says X-Cache-Status: MISS In-Reply-To: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com> References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com> Message-ID: <357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org> Brent Clark Wrote: ------------------------------------------------------- > Vary: User-Agent See https://forum.nginx.org/read.php?2,262943,262943#msg-262943 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269389,269391#msg-269391 From brentgclarklist at gmail.com Tue Sep 6 08:07:42 2016 From: brentgclarklist at gmail.com (Brent Clark) Date: Tue, 6 Sep 2016 10:07:42 +0200 Subject: curl -I says X-Cache-Status: MISS In-Reply-To: <357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org> References: <1786ac82-283e-c235-48b7-e2b616511bc6@gmail.com> <357da3d32d70eb4286fc9d7fd14038e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6cb80dbf-0889-eddd-ac53-3899bce92e76@gmail.com> Thank you so much. Regards Brent On 06/09/2016 09:46, itpp2012 wrote: > Brent Clark Wrote: > ------------------------------------------------------- >> Vary: User-Agent > See https://forum.nginx.org/read.php?2,262943,262943#msg-262943 > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269389,269391#msg-269391 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From atomyuk at gmail.com Tue Sep 6 08:25:58 2016 From: atomyuk at gmail.com (=?UTF-8?B?0JDRgNGC0ZHQvCDQotC+0LzRjtC6?=) Date: Tue, 6 Sep 2016 11:25:58 +0300 Subject: libluajit-5.1.so.2()(64bit) is needed by nginx-1.11.3-1.el6.ngx.x86_64 Message-ID: Hi all. 
I am trying to install nginx built (via rpmbuild) with Lua and LuaJIT support, and I am getting the error: libluajit-5.1.so.2()(64bit) is needed by nginx-1.11.3-1.el6.ngx.x86_64. libluajit-5.1.so.2 is present in /usr/lib and /usr/lib64, but during the install it somehow isn't visible. Maybe during the build of the nginx package I can point the installer to where to find this lib? -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiel.mols at gmail.com Tue Sep 6 14:08:22 2016 From: emiel.mols at gmail.com (Emiel Mols) Date: Tue, 06 Sep 2016 14:08:22 +0000 Subject: keep-alive to backend + non-idempotent requests = race condition? In-Reply-To: References: Message-ID: Anyone? On Thu, Aug 25, 2016 at 3:44 PM Emiel Mols wrote: > Hey, > > I've been haunted by this for quite some time, seen it in different > deployments, and think it might make for some good ol' mailing list discussion. > > When > > - using keep-alive connections to a backend service (eg php, rails, python) > - this backend needs to be updatable (it is not okay to have lingering > workers for hours or days) > - requests are often not idempotent (can't repeat them) > > current deployments need to close the kept-alive connection from the > backend side, always opening up a race condition where nginx has just sent > a request and the connection gets closed. This leaves nginx in limbo, not > knowing if the request has been executed and can be repeated. > > When using keep-alive connections, the only reliable way of closing them is > from the client side (in this case: nginx). I would therefore expect either > > - a feature to signal nginx to close all connections to the backend after > having deployed new backend code. > > - an upstream keepAliveIdleTimeout config value that guarantees that > kept-alive connections are not left lingering indefinitely long. 
If nginx > guarantees it closes idle connections after 5 seconds, we can be sure that > 5s+max_request_time after a new backend is deployed all old workers are > gone. > > - (variant on the previous) support for an HTTP header from the backend to > indicate such a timeout value. It's funny that this header kind-of already > exists in the spec < > https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#keep-alive > >, but in practice is implemented by no-one. > > The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind > implementing it myself if someone versed in the architecture would give some > pointers. > > Best regards, > > - Emiel > BTW: a similar issue should exist between browsers and web servers. Since > latency is a lot higher on these links, I can only assume it to happen a > lot. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 7 09:27:27 2016 From: nginx-forum at forum.nginx.org (Kurogane) Date: Wed, 07 Sep 2016 05:27:27 -0400 Subject: Problem with SSL Message-ID: Hi, I have a problem with non-SSL domains. I have this setup: domain1.com domain2.com (SSL) The certificate itself has no issue; all is fine there. The problem is that when someone goes to https://domain1.com, it shows domain2.com's content. How can I solve this issue? I have multiple domains using the same IP, and any https request for a domain without its own SSL setup always shows domain2.com's content. What I want to achieve: if domain1.com has no SSL set up, it should not load (or should show an SSL certificate error for that domain), rather than serving the content of the domain that does have SSL configured. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269401#msg-269401 From medvedev.yp at gmail.com Wed Sep 7 09:29:42 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Wed, 7 Sep 2016 12:29:42 +0300 Subject: Problem with SSL In-Reply-To: References: Message-ID: Hi, you must use vhost configuration for domains. 
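For what it's worth, one way to spell that out (a sketch only — domain names and certificate paths are placeholders): give each SSL domain its own server block on 443, and add a catch-all default_server on 443 so that requests for names without a certificate don't fall through to domain2.com's block:

```nginx
# Hypothetical sketch; names and paths are placeholders.
server {
    listen 443 ssl http2;
    server_name www.domain2.com;
    ssl_certificate     /home/nginx/ssl/domain2.com/domain2.com.crt;
    ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key;
    root /home/domain2/public_html;
}

# Catch-all: any https request for a name without its own block lands here.
# A certificate (even self-signed) is still needed so the handshake can
# complete; closing the connection (444) then prevents serving another
# site's content to the wrong hostname.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/self-signed.crt;  # placeholder
    ssl_certificate_key /etc/nginx/ssl/self-signed.key;  # placeholder
    return 444;
}
```

The client will still see a certificate warning for https://domain1.com (the catch-all cert can't match that name), which is unavoidable without a certificate for that domain.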
2016-09-07 12:27 GMT+03:00 Kurogane : > Hi, > > I've a problem with non ssl. > > I got this setup. > > domain1.com > domain2.com SSL > > The certificate i not have issue all is fine here. The problem is when > someone go to this https://domain1.com is show domain2.com content. > > How i can solve this issue? i have multi domain using same IP and all > domains go to "SSL" if not the right SSL domain always show domain2.com > content. > > What i want to archive if domain1.com not have setup SSL not load or load > SSL cert error but in the same domain not the current SSL domain > configurate. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269401,269401#msg-269401 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 7 14:30:18 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 07 Sep 2016 10:30:18 -0400 Subject: emergency msg after changing cache path Message-ID: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org> Got this message after changing the cache path? Could not find a solution after googling it. Any help? 
[emerg] 15154#15154: cache "my_zone" uses the "/dev/shm/nginx" cache path while previously it used the "/tmp/nginx" cache path nginx -V nginx version: nginx/1.11.3 built with OpenSSL 1.0.2h 3 May 2016 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-auth-pam --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-cache-purge --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-dav-ext-module --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-echo --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/nginx-upstream-fair --add-module=/usr/local/src/nginx/nginx-1.11.3/debian/modules/ngx_http_substitutions_filter_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269405,269405#msg-269405 From mdounin at mdounin.ru Wed Sep 7 14:59:15 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 7 Sep 
2016 17:59:15 +0300 Subject: emergency msg after changing cache path In-Reply-To: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org> References: <98acbb5b73564c76c64bf03065d48fe4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160907145915.GF86582@mdounin.ru> Hello! On Wed, Sep 07, 2016 at 10:30:18AM -0400, shiz wrote: > Got this message after changing the cache path? Could not find a solution > after googling it. Any help? > > [emerg] 15154#15154: cache "my_zone" uses the "/dev/shm/nginx" cache path > while previously it used the "/tmp/nginx" cache path You are trying to reload a configuration to an incompatible one, with a shared memory zone used for a different cache. It's not something nginx is prepared to handle, so it refuses to reload the configuration. Available options are: - change the configuration to a compatible one (e.g., rename the cache zone so nginx will create a new one); - do a binary upgrade to start a new instance of nginx with only listening sockets inherited (see http://nginx.org/en/docs/control.html#upgrade, usually can be simplified to "service nginx upgrade"); - just restart nginx. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Sep 7 16:33:45 2016 From: nginx-forum at forum.nginx.org (Kurogane) Date: Wed, 07 Sep 2016 12:33:45 -0400 Subject: Problem with SSL In-Reply-To: References: Message-ID: Isn't nginx supposed to work with server blocks/vhosts? That is not the issue here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269413#msg-269413 From medvedev.yp at gmail.com Wed Sep 7 19:34:59 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Wed, 7 Sep 2016 22:34:59 +0300 Subject: Problem with SSL In-Reply-To: References: Message-ID: Can you show your configuration? On 7 Sep 2016 at 19:33, "Kurogane" wrote: > Isn't nginx supposed to work with server blocks/vhosts? That is not the issue here. > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,269401,269413#msg-269413 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 7 20:52:59 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 07 Sep 2016 16:52:59 -0400 Subject: emergency msg after changing cache path In-Reply-To: <20160907145915.GF86582@mdounin.ru> References: <20160907145915.GF86582@mdounin.ru> Message-ID: <9d1b6c88564dabf58cd7f302ff3ff2a0.NginxMailingListEnglish@forum.nginx.org> Interesting! Thank you so much! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269405,269415#msg-269415 From nginx-forum at forum.nginx.org Thu Sep 8 03:58:57 2016 From: nginx-forum at forum.nginx.org (Kurogane) Date: Wed, 07 Sep 2016 23:58:57 -0400 Subject: Problem with SSL In-Reply-To: References: Message-ID: <384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org> Domain 1 server { listen 80; server_name domain1.com; return 301 $scheme://www.$host$request_uri; } server { listen 80; server_name www.domain1.com; root /home/domain1/public_html; ... } Domain 2 (SSL) server { listen 80; server_name domain2.com; return 301 $scheme://www.$host$request_uri; } server { listen 80; server_name www.domain2.com; return 301 https://$host$request_uri; } server { listen 443 ssl http2; server_name www.domain2.com; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_certificate /home/nginx/ssl/domain2.com/domain2.com.crt; ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key; root /home/domain2/public_html; ... 
} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269417#msg-269417 From nhadie at gmail.com Thu Sep 8 04:01:27 2016 From: nhadie at gmail.com (ron ramos) Date: Thu, 8 Sep 2016 12:01:27 +0800 Subject: Problem with SSL In-Reply-To: <384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org> References: <384fc950add55eb859214d683f3ce5cb.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just add another server block for domain1 that listens on 443, and redirect it to http if you want, or just return an error. On 8 Sep 2016 11:59 a.m., "Kurogane" wrote: > Domain 1 > > server { > listen 80; > server_name domain1.com; > return 301 $scheme://www.$host$request_uri; > } > > server { > listen 80; > server_name www.domain1.com; > root /home/domain1/public_html; > ... > } > > Domain 2 (SSL) > > server { > listen 80; > server_name domain2.com; > return 301 $scheme://www.$host$request_uri; > } > > server { > listen 80; > server_name www.domain2.com; > return 301 https://$host$request_uri; > } > > server { > listen 443 ssl http2; > server_name www.domain2.com; > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_certificate /home/nginx/ssl/domain2.com/domain2.com.crt; > ssl_certificate_key /home/nginx/ssl/domain2.com/domain2.com.key; > root /home/domain2/public_html; > ... > } > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269401,269417#msg-269417 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiangmuhui at gmail.com Thu Sep 8 05:11:00 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Thu, 8 Sep 2016 13:11:00 +0800 Subject: nginx input Message-ID: Hi I am using program analysis to locate the bottleneck of nginx. I know the file nginx under the directory of nginx/sbin is the binary file. My question is: what is the input of the binary? 
I mean the format, since a general URL doesn't seem to be the right input. Many thanks Regards Muhui -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 8 05:19:19 2016 From: nginx-forum at forum.nginx.org (Kurogane) Date: Thu, 08 Sep 2016 01:19:19 -0400 Subject: Problem with SSL In-Reply-To: References: Message-ID: <0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org> I never thought about that; very ingenious indeed. If there is another way to accomplish it, let me know. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269401,269420#msg-269420 From medvedev.yp at gmail.com Thu Sep 8 05:36:14 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Thu, 8 Sep 2016 08:36:14 +0300 Subject: Problem with SSL In-Reply-To: <0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org> References: <0bd92558c0b4829390305e9c991967ee.NginxMailingListEnglish@forum.nginx.org> Message-ID: For all domains, use IP+port. On 8 Sep 2016 at 8:19, "Kurogane" wrote: > I never thought about that; very ingenious indeed. > > If there is another way to accomplish it, let me know. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269401,269420#msg-269420 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscaretu at gmail.com Thu Sep 8 06:04:27 2016 From: oscaretu at gmail.com (oscaretu .) Date: Thu, 8 Sep 2016 08:04:27 +0200 Subject: nginx input In-Reply-To: References: Message-ID: Hello Do you know that NGINX needs a configuration file? Kind regards, Oscar On Thu, Sep 8, 2016 at 7:11 AM, Muhui Jiang wrote: > Hi > > I am using program analysis to locate the bottleneck of nginx. I know the > file nginx under the directory of nginx/sbin is the binary file. 
My > question is that what is the input of the binary. I mean the format. Since > a general URL doesn't seem to be a right input. Many Thanks > > Regards > Muhui > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 8 08:53:47 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 08 Sep 2016 04:53:47 -0400 Subject: pcre.org down? In-Reply-To: References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Still down and so are many mirrors (broken or empty updates) Found one still working at https://fourdots.com/mirror/exim/exim-ftp/pcre/ Or get a copy here http://nginx-win.ecsds.eu/download/pcre-8.40-r1664-10-8-2016-svn-src.zip Both cross-platform sources. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269359,269423#msg-269423 From kurt at x64architecture.com Thu Sep 8 18:47:26 2016 From: kurt at x64architecture.com (Kurt Cancemi) Date: Thu, 8 Sep 2016 14:47:26 -0400 Subject: pcre.org down? In-Reply-To: References: <6839bfb6b6908a3b74b7a9c07e08091d.NginxMailingListEnglish@forum.nginx.org> Message-ID: It appears to have just come back online. -- Kurt Cancemi https://www.x64architecture.com From mdounin at mdounin.ru Thu Sep 8 21:11:37 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Sep 2016 00:11:37 +0300 Subject: Multi Certificate Support with OCSP not working right In-Reply-To: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org> References: <390a6995094152dee5bbabb945893b3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160908211137.GK86582@mdounin.ru> Hello! 
On Sat, Sep 03, 2016 at 09:09:19AM -0400, mastercan wrote: > When using 2 certificates, 1 RSA (using AlphaSSL) and 1 ECDSA (using Lets > Encrypt), and I try to connect via RSA SSL connection, nginx throws this > error: > > "OCSP response not successful (6: unauthorized) while requesting certificate > status, responder: ocsp.int-x3.letsencrypt.org" > > So it is using the wrong responder. > > Following build (custom compiled): > Nginx 1.11.3 > Openssl 1.1.0 > > AFAIK OpenSSL 1.1.0 should support multiple certificate chains. I don't > quite understand why OCSP then is not working right? It looks like there is a bug which prevents nginx from using different OCSP responders when using OCSP stapling with multiple certificates. It uses the responder from the last certificate in the server{} block for all OCSP requests. Please try the following patch: # HG changeset patch # User Maxim Dounin # Date 1473367064 -10800 # Thu Sep 08 23:37:44 2016 +0300 # Node ID 2037cc64cdceb5b8cb36103cdd9d00e05b8e7ec3 # Parent 4a16fceea03bde6653e05d337e87907f085535b3 OCSP stapling: fixed using wrong responder with multiple certs. 
diff --git a/src/event/ngx_event_openssl_stapling.c b/src/event/ngx_event_openssl_stapling.c --- a/src/event/ngx_event_openssl_stapling.c +++ b/src/event/ngx_event_openssl_stapling.c @@ -376,6 +376,7 @@ ngx_ssl_stapling_responder(ngx_conf_t *c { ngx_url_t u; char *s; + ngx_str_t rsp; STACK_OF(OPENSSL_STRING) *aia; if (responder->len == 0) { @@ -403,6 +404,8 @@ ngx_ssl_stapling_responder(ngx_conf_t *c return NGX_DECLINED; } + responder = &rsp; + responder->len = ngx_strlen(s); responder->data = ngx_palloc(cf->pool, responder->len); if (responder->data == NULL) { -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Sep 8 22:26:39 2016 From: nginx-forum at forum.nginx.org (mastercan) Date: Thu, 08 Sep 2016 18:26:39 -0400 Subject: Multi Certificate Support with OCSP not working right In-Reply-To: <20160908211137.GK86582@mdounin.ru> References: <20160908211137.GK86582@mdounin.ru> Message-ID: Hello Maxim, Thank you! Good news: The patch seems to work. br, Can Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269371,269433#msg-269433 From emailgrant at gmail.com Thu Sep 8 22:31:58 2016 From: emailgrant at gmail.com (Grant) Date: Thu, 8 Sep 2016 15:31:58 -0700 Subject: limit-req: better message for users? Message-ID: Has anyone experimented with displaying a more informative message than "503 Service Temporarily Unavailable" when someone exceeds the limit-req? - Grant From emailgrant at gmail.com Fri Sep 9 01:23:43 2016 From: emailgrant at gmail.com (Grant) Date: Thu, 8 Sep 2016 18:23:43 -0700 Subject: limit-req and greedy UAs Message-ID: Has anyone considered the problem of legitimate UAs which request a series of files which don't necessarily exist when they access your site? Requests for files like robots.txt, sitemap.xml, crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA to exceed the limit-req burst value. What is the right way to deal with this? 
- Grant From lists at lazygranch.com Fri Sep 9 01:39:40 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 08 Sep 2016 18:39:40 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: Message-ID: <20160909013940.5501012.10243.10085@lazygranch.com> Since this limit is per IP, is the scenario you stated really a problem? Only that IP is affected. Or as is often the case, did I miss something? http://nginx.org/en/docs/http/ngx_http_limit_req_module.html Original Message From: Grant Sent: Thursday, September 8, 2016 6:24 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: limit-req and greedy UAs Has anyone considered the problem of legitimate UAs which request a series of files which don't necessarily exist when they access your site? Requests for files like robots.txt, sitemap.xml, crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA to exceed the limit-req burst value. What is the right way to deal with this? - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Sep 9 07:01:39 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 09 Sep 2016 03:01:39 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires Message-ID: So I read that IE8 and older browsers do not support "Max-Age" inside of Set-Cookie headers (but all modern browsers support Expires).

add_header Set-Cookie "value=1;Domain=.networkflare.com;Path=/;Max-Age=2592000"; #+1 month 30 days

Apparently they do support "Expires" though, so I changed the above to the following, but now the cookie says it will expire at the end of every session.
add_header Set-Cookie "value=1;Domain=.networkflare.com;Path=/;expires=2592000"; #+1 month 30 days

How can I tell this to expire 1 month in the future? All the examples I find mean I have to set a date manually, which would mean restarting and editing my config constantly. (Automated would be nice.) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269438#msg-269438 From nginx-forum at forum.nginx.org Fri Sep 9 07:41:12 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 09 Sep 2016 03:41:12 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: Message-ID: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> In Lua it's as easy as: https://github.com/openresty/lua-nginx-module/issues/19#issuecomment-19966018 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269439#msg-269439 From sca at andreasschulze.de Fri Sep 9 08:56:59 2016 From: sca at andreasschulze.de (A. Schulze) Date: Fri, 09 Sep 2016 10:56:59 +0200 Subject: limit-req: better message for users? In-Reply-To: Message-ID: <20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de> Grant: > Has anyone experimented with displaying a more informative message > than "503 Service Temporarily Unavailable" when someone exceeds the > limit-req? maybe https://tools.ietf.org/html/rfc6585#section-4 ? Andreas From nginx-forum at forum.nginx.org Fri Sep 9 08:57:25 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 09 Sep 2016 04:57:25 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> Message-ID:

if ($host ~* www(.*)) {
    set $host_without_www $1;
}
header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=$host_without_www; Expires=" ..
ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';

So I added this to my config but it does not work for me :( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269441#msg-269441 From nginx-forum at forum.nginx.org Fri Sep 9 09:38:21 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 09 Sep 2016 05:38:21 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Solved it now; I had forgotten that in Lua, nginx variables are accessed differently.

header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=" .. ngx.var.host_without_www ..
    "; Expires=" .. ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269443#msg-269443 From nginx-forum at forum.nginx.org Fri Sep 9 11:33:24 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 09 Sep 2016 07:33:24 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Good; keep in mind that "ngx.time()" can be expensive, so it would be advisable to use a global var to store time and update this var once every hour.
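One caveat about the header_filter_by_lua approach above: assigning a plain string to ngx.header["Set-Cookie"] replaces any Set-Cookie headers already on the response, while assigning a Lua table emits one header per element. A sketch of an append-style variant (the cookie value is illustrative, and $host_without_www is assumed to be set as shown earlier):

```nginx
header_filter_by_lua '
    -- ngx.header["Set-Cookie"] reads back as nil, a string, or a table
    -- depending on how many Set-Cookie headers are already present
    local cookies = ngx.header["Set-Cookie"]
    if cookies == nil then
        cookies = {}
    elseif type(cookies) == "string" then
        cookies = { cookies }
    end

    -- append our cookie instead of clobbering the existing ones
    cookies[#cookies + 1] = "value=1; path=/; domain=" .. ngx.var.host_without_www ..
        "; Expires=" .. ngx.cookie_time(ngx.time() + 2592000)  -- +30 days

    ngx.header["Set-Cookie"] = cookies
';
```

This is the usual lua-nginx-module pattern for appending rather than replacing a multi-value response header.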
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269444#msg-269444 From nginx-forum at forum.nginx.org Fri Sep 9 12:03:43 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 09 Sep 2016 08:03:43 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Can you provide an example? Also, I seem to have a new issue with my code above: it is overwriting all my other Set-Cookie headers. How can I have it set that cookie without overwriting / removing the others? It seems to be an unwanted / unexpected side effect. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269445#msg-269445 From r1ch+nginx at teamliquid.net Fri Sep 9 13:00:36 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 9 Sep 2016 15:00:36 +0200 Subject: limit-req and greedy UAs In-Reply-To: <20160909013940.5501012.10243.10085@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> Message-ID: You can put limit_req in a location, for example do not limit static files and only limit expensive backend hits, or use two different thresholds. On Fri, Sep 9, 2016 at 3:39 AM, wrote: > Since this limit is per IP, is the scenario you stated really a problem? > Only that IP is affected. Or as is often the case, did I miss something? > > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html > > Original Message > From: Grant > Sent: Thursday, September 8, 2016 6:24 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: limit-req and greedy UAs > > Has anyone considered the problem of legitimate UAs which request a > series of files which don't necessarily exist when they access your > site?
Requests for files like robots.txt, sitemap.xml, > crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA > to exceed the limit-req burst value. What is the right way to deal > with this? > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Fri Sep 9 16:07:51 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 9 Sep 2016 11:07:51 -0500 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <032C3F6D-8272-40C0-975B-0BCB441268EF@fearnothingproductions.net> Actually no, ngx.time() is not expensive; it uses the cached value stored in the request, so it doesn't need to make a syscall. > On Sep 9, 2016, at 06:33, itpp2012 wrote: > > Good, keep in mind that "ngx.time()" can be expensive, it would be advisable > to use a global var to store time and update this var once every hour. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269444#msg-269444 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Fri Sep 9 16:30:36 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 09 Sep 2016 09:30:36 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> Message-ID: <20160909163036.5501012.8924.10125@lazygranch.com> An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Sat Sep 10 12:46:51 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sat, 10 Sep 2016 08:46:51 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Just fixed my problem completely now :) For anyone who also uses Lua and wants to overcome this cross-browser compatibility issue with the Expires and Max-Age cookie attributes:

if ($host ~* www(.*)) {
    set $host_without_www $1;
}
set_by_lua $expires_time 'return ngx.cookie_time(ngx.time()+2592000)';
add_header Set-Cookie "value=1;domain=$host_without_www;path=/;expires=$expires_time;Max-Age=2592000";

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269452#msg-269452 From reallfqq-nginx at yahoo.fr Sat Sep 10 13:54:50 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 10 Sep 2016 15:54:50 +0200 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: <15b14a79dcbea0734d6f400150f60775.NginxMailingListEnglish@forum.nginx.org> <9a1b3e211ecbf4102e580f0941d7efac.NginxMailingListEnglish@forum.nginx.org> Message-ID: I just hope that code won't be used by the owner of wwwooowww.wtf for example. --- *B. R.* On Sat, Sep 10, 2016 at 2:46 PM, c0nw0nk wrote: > Just fixed my problem completely now :) > > For anyone who also uses Lua and wants to overcome this cross-browser > compatibility issue with the Expires and Max-Age cookie attributes: > > if ($host ~* www(.*)) { > set $host_without_www $1; > } > set_by_lua $expires_time 'return ngx.cookie_time(ngx.time()+2592000)'; > add_header Set-Cookie > "value=1;domain=$host_without_www;path=/;expires=$expires_ > time;Max-Age=2592000"; > > Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269438,269452#msg-269452 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Sep 10 14:39:44 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sat, 10 Sep 2016 10:39:44 -0400 Subject: add_header Set-Cookie The difference between Max-Age and Expires In-Reply-To: References: Message-ID: <014b5549498a46b63d81b880eb1054f6.NginxMailingListEnglish@forum.nginx.org> I am sure (well would hope) they would have the common sense to edit it to their own needs. B.R. Wrote: ------------------------------------------------------- > I just hope that code won't be used by the owner of wwwooowww.wtf for > example. > --- > *B. R.* > > On Sat, Sep 10, 2016 at 2:46 PM, c0nw0nk > wrote: > > > Just fixed my problem completely now :) > > > > For anyone who also uses Lua and wants to overcome this cross > browser > > compatibility issue with expires and max-age cookie vars. > > > > if ($host ~* www(.*)) { > > set $host_without_www $1; > > } > > set_by_lua $expires_time 'return > ngx.cookie_time(ngx.time()+2592000)'; > > add_header Set-Cookie > > "value=1;domain=$host_without_www;path=/;expires=$expires_ > > time;Max-Age=2592000"; > > > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> > php?2,269438,269452#msg-269452 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269438,269456#msg-269456 From nginx-forum at forum.nginx.org Sun Sep 11 10:56:17 2016 From: nginx-forum at forum.nginx.org (jchannon) Date: Sun, 11 Sep 2016 06:56:17 -0400 Subject: nginx not returning updated headers from origin server on conditional GET Message-ID: I have nginx and its cache working as expected apart from one minor issue. When a request is made for the first time it hits the origin server, returns a 200 and nginx caches that response. If I make another request I can see from the X-Cache-Status header that the cache has been hit. When I wait a while, knowing the cache will have expired, I can see nginx hit my origin server doing a conditional GET because I have proxy_cache_revalidate on; defined. When I check whether the resource has changed in my app on the origin server, I see it hasn't and return a 304 with a new Expires header. Some may ask why you would return a new Expires header if the origin server says nothing has changed and you are returning 304. The answer is that the HTTP RFC says this can be done: https://tools.ietf.org/html/rfc7234#section-4.3.4 One thing I have noticed: no matter what headers I add or modify, when the origin server returns 304, nginx will give a response with the first set of response headers it saw for that resource. Also, if I change the Cache-Control:max-age header value from the first request when I return the 304 response, it appears nginx obeys the new value, as my resource is cached for that time; however, the response header value is the one given on the first request, not the value that I modified on the 304 response.
This applies to all subsequent requests if the origin server issues a 304. I am running nginx version: nginx/1.10.1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269457,269457#msg-269457 From nginx-forum at forum.nginx.org Sun Sep 11 12:12:00 2016 From: nginx-forum at forum.nginx.org (khav) Date: Sun, 11 Sep 2016 08:12:00 -0400 Subject: Rewrite rules not working Message-ID: I am trying to make pretty urls using rewrite rules but they are not working.

1. https://example.com/s1/video.mp4 should be rewritten to https://example.com/file/server/video.mp4

location = /s1/(.*)$ {
rewrite ^/s1/(.*) /file/server/$1 permanent;
}

2. https://example.com/view/video5353 should be rewritten to https://example.com/view.php?id=video5353

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269458,269458#msg-269458 From emailgrant at gmail.com Sun Sep 11 12:29:56 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 05:29:56 -0700 Subject: limit-req and greedy UAs In-Reply-To: <20160909163036.5501012.8924.10125@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> Message-ID: > What looks to me to be a real resource hog that quite frankly you can't do much about are download managers. They open up multiple connections, but the rate limits apply to each individual connection. (this is why you want to limit the number of connections.) Does this mean an attacker (for example) could get around rate limits by opening a new connection for each request? How are the number of connections limited? - Grant From emailgrant at gmail.com Sun Sep 11 12:36:24 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 05:36:24 -0700 Subject: limit-req and greedy UAs In-Reply-To: <20160909013940.5501012.10243.10085@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> Message-ID: > Since this limit is per IP, is the scenario you stated really a problem?
Only that IP is affected. Or as is often the case, did I miss something? The idea (which I used bad examples to illustrate) is that some mainstream browsers make a series of requests for files which don't necessarily exist. Too many of those requests trigger limiting even though the user didn't do anything wrong. - Grant > Has anyone considered the problem of legitimate UAs which request a > series of files which don't necessarily exist when they access your > site? Requests for files like robots.txt, sitemap.xml, > crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA > to exceed the limit-req burst value. What is the right way to deal > with this? > > - Grant From francis at daoine.org Sun Sep 11 13:44:36 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 11 Sep 2016 14:44:36 +0100 Subject: Rewrite rules not working In-Reply-To: References: Message-ID: <20160911134436.GD11677@daoine.org> On Sun, Sep 11, 2016 at 08:12:00AM -0400, khav wrote: Hi there, > I am trying to make pretty urls using rewrite rules but they are not > working "Pretty urls" usually means that the browser *only* sees the original url, and the internal mangling remains hidden. A rewrite that leads to an HTTP redirect gets the browser to change the url that it shows. Sometimes that is wanted; you can judge that for yourself. > https://example.com/s1/video.mp4 should be rewritten to > https://example.com/file/server/video.mp4 > > location = /s1/(.*)$ { http://nginx.org/r/location. You have used "=", but your pattern resembles a regex. This location as-is will probably not be matched by any request. > rewrite ^/s1/(.*) /file/server/$1 permanent; http://nginx.org/r/rewrite. "permanent" there means "issue an HTTP redirect", so the browser will make a new request for /file/server/video.mp4.
I suggest changing it to

location ^~ /s1/ {
    rewrite ^/s1/(.*) /file/server/$1 permanent;
}

You can remove the "permanent" if you do not want the external redirect to be issued; either way, you will also need a location{} which handles the request for /file/server/video.mp4 and does the right thing. > https://example.com/view/video5353 should be rewritten to > https://example.com/view.php?id=video5353 With a few caveats about edge cases, something like

location ^~ /view/ {
    rewrite ^/view/(.*) /view.php?id=$1 permanent;
}

should probably do what you want. Similarly, you will need a location{} to handle the /view.php request and do the right thing; and removing "permanent" may be useful. If you do remove "permanent", then you probably could avoid the rewrite altogether and just "fastcgi_pass" directly, with a hardcoded SCRIPT_FILENAME and a manually-defined QUERY_STRING. Good luck with it, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Sun Sep 11 14:30:38 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 11 Sep 2016 07:30:38 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> Message-ID: <20160911143038.5484628.23383.10220@lazygranch.com> I suspect you are referring to the countless variations on the favicon, with Apple being the worst offender since they have many "touch" files. Android has them too. Just make the files. They don't have to be works of art. http://iconifier.net/ is one of many generators. Clearly Apple has no respect for the webmaster. But Microsoft has gone one step beyond that, requiring some sort of XML file. https://msdn.microsoft.com/en-us/library/dn320426(v=vs.85).aspx The good news is you don't get many requests for that XML. There are many schemes to keep these files out of your logs. https://github.com/h5bp/server-configs/issues/132 I look at my logs with scripts, so I haven't bothered to do this, but it is probably good advice.
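These housekeeping requests can also be carved out of the rate limit itself, along the lines of the earlier suggestion to apply limit_req per location. A sketch (the zone name and file list are illustrative, and the zone is assumed to be declared with limit_req_zone in the http{} block):

```nginx
# requests for well-known "browser housekeeping" files bypass the limit
location ~ ^/(robots\.txt|sitemap\.xml|crossdomain\.xml|favicon\.ico|apple-touch-icon[^/]*\.png)$ {
    access_log off;       # optional: keep them out of the logs too
    try_files $uri =404;  # serve the file if present, else a plain 404
}

# everything else stays rate limited
location / {
    limit_req zone=perip burst=10 nodelay;
    try_files $uri $uri/ =404;
}
```

Because regex locations take precedence over the plain "/" prefix match, missing favicon variants then cost a cheap 404 instead of eating into the client's burst allowance.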
Are there other files browsers request? Original Message From: Grant Sent: Sunday, September 11, 2016 5:36 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit-req and greedy UAs > Since this limit is per IP, is the scenario you stated really a problem? Only that IP is affected. Or as is often the case, did I miss something? The idea (which I used bad examples to illustrate) is that some mainstream browsers make a series of requests for files which don't necessarily exist. Too many of those requests trigger limiting even though the user didn't do anything wrong. - Grant > Has anyone considered the problem of legitimate UAs which request a > series of files which don't necessarily exist when they access your > site? Requests for files like robots.txt, sitemap.xml, > crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA > to exceed the limit-req burst value. What is the right way to deal > with this? > > - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Sun Sep 11 15:21:41 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 11 Sep 2016 08:21:41 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> Message-ID: <20160911152141.5484628.98176.10223@lazygranch.com> This page has all the secret sauce, including how to limit the number of connections: https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/ I set up the firewall with a higher number as a "just in case." Also note if you do streaming outside nginx, then you have to limit connections for that service in the program providing it. Mind you, while I think this page has good advice, what is listed here won't stop a real DDoS attack.
The first D is for distributed, meaning the attacks come from many IP addresses. You probably have to pay for one of those reverse proxy services to avoid a real DDoS, but I personally find them a bit creepy since I have seen hacking attempts come from behind them. The tips on this nginx page will limit the teenage boy in his parents' basement, which is a more realistic attack scenario. But note that every photo you load is a request, so I wouldn't make the limit any lower than 5 to 10 per second. You can play with the limits and watch the results on your own system. Just remember to:

service nginx reload
service nginx restart

If you do fancy caching, you may have to clear your browser cache. In theory, Google page ranking takes speed into account. There are many websites that will evaluate your nginx setup. https://www.webpagetest.org/ One thing to remember is nginx limits are in bytes per second, not bits per second. So the 512k limit in this example is really quite generous. http://www.webhostingtalk.com/showthread.php?t=1433413 There are programs you can run on your server to flog nginx. https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench I did this with httperf, but sysbench is supposed to be better. Nginx is very efficient. Your limiting factor will probably be your server network connection. If you sftp files from your server, it will be at the maximum rate you can deliver, and this depends on time of day since you are sharing the pipe. I'm using a VPS that does 40 mbps on a good day. Figure 10 users at a time and the 512 kbytes per second put me at the limit. If you use the nginx map module, you can block download managers if they are honest with their user agents. http://nginx.org/en/docs/http/ngx_http_map_module.html http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html Beware of creating false positives with such rules.
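The map-plus-444 setup described here can be sketched as follows (the user-agent patterns are examples only; test any rule against your own logs to avoid false positives):

```nginx
# http{} context: classify clients by User-Agent
map $http_user_agent $blocked_agent {
    default      0;
    ~*wget       1;
    ~*curl       1;
    ~*python     1;
}

server {
    listen 80;

    # 444 is nginx-specific: close the connection without sending a response
    if ($blocked_agent) {
        return 444;
    }
}
```

The map is evaluated lazily per request, so adding more signatures stays cheap; honest download managers and scripts are dropped before they consume any bandwidth.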
When developing code, I return a 444 then search the access.log for what it found, just to ensure I wrote the rule correctly. Original Message From: Grant Sent: Sunday, September 11, 2016 5:30 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit-req and greedy UAs > What looks to me to be a real resource hog that quite frankly you can't do much about are download managers. They open up multiple connections, but the rate limits apply to each individual connection. (this is why you want to limit the number of connections.) Does this mean an attacker (for example) could get around rate limits by opening a new connection for each request? How are the number of connections limited? - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From emailgrant at gmail.com Sun Sep 11 17:22:24 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 10:22:24 -0700 Subject: limit-req and greedy UAs In-Reply-To: <20160911143038.5484628.23383.10220@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160911143038.5484628.23383.10220@lazygranch.com> Message-ID: > I suspect you are referring to the countless variations on the favicon, with Apple being the worst offender since they have many "touch" files. Android has them too. Just make the files. I disagree, but maybe that's because of my webmastering style. I don't know what more of these files will show up in the future and I want to be as hands-off as possible to save time. > Clearly Apple has no respect for the webmaster. But Microsoft has gone one step beyond that, requiring some sort of XML file. > > There are many schemes to keep these files out of your logs. > https://github.com/h5bp/server-configs/issues/132 > I look at my logs with scripts, so I haven't bothered to do this, but it is probably good advice.
I don't think not logging those requests is a good idea unless you need the disk space. > Are there other files browsers request? Today: I don't know. Tomorrow: nobody knows. - Grant From emailgrant at gmail.com Sun Sep 11 17:28:19 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 10:28:19 -0700 Subject: limit-req and greedy UAs In-Reply-To: <20160911152141.5484628.98176.10223@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> Message-ID: > This page has all the secret sauce, including how to limit the number of connections. > > https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/ > > I set up the firewall with a higher number as a "just in case." Should I basically duplicate my limit_req and limit_req_zone directives into limit_conn and limit_conn_zone? In what sort of situation would someone not do that? - Grant From emailgrant at gmail.com Sun Sep 11 17:34:00 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 10:34:00 -0700 Subject: limit-req: better message for users? In-Reply-To: <20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de> References: <20160909105659.Horde.kUSojwGhHFfiC3RJxg_fpOR@andreasschulze.de> Message-ID: >> Has anyone experimented with displaying a more informative message >> than "503 Service Temporarily Unavailable" when someone exceeds the >> limit-req? > > > maybe https://tools.ietf.org/html/rfc6585#section-4 ? That's awesome. Any idea why it isn't the default? Do you remember the directive that will set this and roughly where it should go? - Grant From emailgrant at gmail.com Sun Sep 11 18:03:47 2016 From: emailgrant at gmail.com (Grant) Date: Sun, 11 Sep 2016 11:03:47 -0700 Subject: Back button causes limiting?
Message-ID: I just saw some strange stuff in my logs and it only makes sense if pressing the back button creates a new request on an iPad. So if an iPad user presses the back button 5 times quickly, they will have generated 5 requests in a very short period of time, which could turn on rate limiting if so configured. Has anyone else noticed this? - Grant From lists at lazygranch.com Sun Sep 11 19:16:06 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 11 Sep 2016 12:16:06 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> Message-ID: <20160911191606.5484628.46851.10233@lazygranch.com> https://www.nginx.com/blog/tuning-nginx/ I have far more faith in this write-up regarding tuning than the anti-ddos one, though both have similarities. My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher. I don't believe there is one answer here because it depends on how the user interacts with the website. I only serve static content. In fact, I only allow the verbs "head" and "get" to limit the attack surface. A page of text and photos itself can be many things. Think of a photo gallery versus a forum page. The forum has mostly text sprinkled with avatar photos, while the gallery can be mostly images with just a line of text each. Basically you need to experiment. Even then, your setup may be better or worse than the typical user. That said, if you limited the rate to 512k bytes per second, most users could achieve that rate. I just don't see evidence of download managers.
I see plenty of wget, curl, and python. Those people get my 444 treatment. I use the map module as indicated in my other post to do this. What I haven't mentioned is filtering out machines. If you are really concerned about your system being overloaded, think about the search engines you want to support. Assuming you want Google, you need to set up your website in a manner so that Google knows you own it, then you can throttle it back. Google is maybe 20% of my referrals. If you have a lot of photos, you can set up nginx to block hot linking. This is significant because Google Images will hot link everything you have. What you want is for Google itself to see your images, which it will present in reduced resolution, but block the Google hot link. If someone really wants to see your image, Google supplies the referral page. http://nginx.org/en/docs/http/ngx_http_referer_module.html I make my own domain valid, but maybe that is assumed. If you want to place a link to an image on your website in a forum, you need to make that forum valid. Facebook will steal your images. http://badbots.vps.tips/info/facebookexternalhit-bot I would use the nginx map module since you will probably be blocking many bots. Finally, you may want to block "the cloud" using your firewall. Only block the browser ports since mail servers will be on the cloud. I block all of AWS for example. My nginx.conf also flags certain requests such as logging into WordPress since I'm not using WordPress! Clearly that IP is a hacker. I have plenty more signatures in the map. I have a script that pulls the IP addresses out of the access.log. I get maybe 20 addresses a day. I feed them to ip2location. Any address that goes to a cloud, VPS, colo, or hosting company gets added to the firewall blocking list. I don't just block the IP, but I use the Hurricane Electric BGP tool to get the entire IP space to block. As a rule, I don't block schools, libraries, or ISPs.
The idea here is to allow eyeballs but not machines. You can also use commercial blocking services if you trust them. (I don't.) Original Message From: Grant Sent: Sunday, September 11, 2016 10:28 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit-req and greedy UAs > This page has all the secret sauce, including how to limit the number of connections. > > https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/ > > I set up the firewall with a higher number as a "just in case." Should I basically duplicate my limit_req and limit_req_zone directives into limit_conn and limit_conn_zone? In what sort of situation would someone not do that? - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Mon Sep 12 08:07:31 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 12 Sep 2016 10:07:31 +0200 Subject: limit-req and greedy UAs In-Reply-To: <20160911191606.5484628.46851.10233@lazygranch.com> References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> Message-ID: You could also generate 304 responses for content you won't provide (cf. return). nginx is good at dealing with loads of requests, no problem on that side. And since return generates an in-memory answer by default, you won't be hammering your resources. If you are CPU- or RAM-limited because of those requests, then I would suggest you evaluate the sizing of your server(s). You might wish to separate logging for these requests from the standard flow to improve their readability, or deactivate them altogether if you consider they add little-to-no value. My 2¢, --- *B.
R.*

On Sun, Sep 11, 2016 at 9:16 PM, wrote:

> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write up regarding tuning than the
> anti-ddos, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But you
> can't limit the connection to one because (again my interpretation) there
> can be multiple users behind one IP. Think of a university reading your
> website. Thus I am more comfortable limiting bandwidth than I am limiting
> the number of connections. The 512k rate limit is fine. I wouldn't go any
> higher.
>
> I don't believe there is one answer here because it depends on how the
> user interacts with the website. I only serve static content. In fact, I
> only allow the verbs "head" and "get" to limit the attack surface. A page
> of text and photos itself can be many things. Think of a photo gallery
> versus a forum page. The forum has mostly text sprinkled with avatar
> photos, while the gallery can be mostly images with just a line of text
> each.
>
> Basically you need to experiment. Even then, your setup may be better or
> worse than the typical user. That said, if you limited the rate to 512k
> bytes per second, most users could achieve that rate.
>
> I just don't see evidence of download managers. I see plenty of wget,
> curl, and python. Those people get my 444 treatment. I use the map module
> as indicated in my other post to do this.
>
> What I haven't mentioned is filtering out machines. If you are really
> concerned about your system being overloaded, think about the search
> engines you want to support. Assuming you want Google, you need to set up
> your website in a manner so that Google knows you own it, then you can
> throttle it back. Google is maybe 20% of my referrals.
>
> If you have a lot of photos, you can set up nginx to block hot linking.
> This is significant because Google Images will hot link everything you
> have.
What you want is for Google itself to see your images, which it will
> present in reduced resolution, but block the Google hot link. If someone
> really wants to see your image, Google supplies the referral page.
>
> http://nginx.org/en/docs/http/ngx_http_referer_module.html
>
> I make my own domain valid, but maybe that is assumed. If you want to
> place a link to an image on your website in a forum, you need to make that
> forum valid.
>
> Facebook will steal your images.
> http://badbots.vps.tips/info/facebookexternalhit-bot
>
> I would use the nginx map module since you will probably be blocking many
> bots.
>
> Finally, you may want to block "the cloud" using your firewall. Only
> block the browser ports since mail servers will be on the cloud. I block
> all of AWS for example. My nginx.conf also flags certain requests such as
> logging into WordPress since I'm not using WordPress! Clearly that IP is a
> hacker. I have plenty more signatures in the map. I have a script that
> pulls the IP addresses out of the access.log. I get maybe 20 addresses a
> day. I feed them to ip2location. Any address that goes to a cloud, VPS,
> colo, hosting company gets added to the firewall blocking list. I don't
> just block the IP, but I use the Hurricane Electric BGP tool to get the
> entire IP space to block. As a rule, I don't block schools, libraries, or
> ISPs. The idea here is to allow eyeballs but not machines.
>
> You can also use commercial blocking services if you trust them. (I don't.)
>
> -----Original Message-----
> From: Grant
> Sent: Sunday, September 11, 2016 10:28 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > This page has all the secret sauce, including how to limit the number
> of connections.
> >
> > https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
> >
> > I set up the firewall with a higher number as a "just in case."
>
> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr Mon Sep 12 08:16:37 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 12 Sep 2016 10:16:37 +0200
Subject: nginx not returning updated headers from origin server on conditional GET
In-Reply-To: 
References: 
Message-ID: 

From what I understand, 304 answers should not try to modify headers, as the cache having made the conditional request to check the correctness of its entry will not necessarily update it: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5. The last sentence sums it all: '*If* a cache uses a received 304 response to update a cache entry, [...]'

---
*B. R.*

On Sun, Sep 11, 2016 at 12:56 PM, jchannon wrote:

> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server, returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
>
> When I check if the resource has changed in my app on the origin server I
> see it hasn't and return a 304 with a new Expires header. Some may argue why
> are you returning a new Expires header if the origin server says nothing has
> changed and you are returning 304.
The answer is, the HTTP RFC says that
> this can be done: https://tools.ietf.org/html/rfc7234#section-4.3.4
>
> One thing I have noticed: no matter what headers I add or modify, when
> the origin server returns 304, nginx will give a response with the first set
> of response headers it saw for that resource.
>
> Also, if I change the Cache-Control: max-age header value from the first
> request when I return the 304 response, it appears nginx obeys the new value,
> as my resource is cached for that time; however, the response header value is
> that of what was given on the first request, not the value that I modified on
> the 304 response. This applies to all subsequent requests if the origin
> server issues a 304.
>
> I am running nginx version: nginx/1.10.1
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269457,269457#msg-269457
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at lazygranch.com Mon Sep 12 09:26:29 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Mon, 12 Sep 2016 02:26:29 -0700
Subject: limit-req and greedy UAs
In-Reply-To: 
References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID: <20160912092629.5484629.32659.10252@lazygranch.com>

I picked 444 based on the following, though I see your point in that it is a non-standard code. I guess from a multiplier standpoint, returning nothing is as minimal as it gets, but the hacker often sends the message twice due to lack of response. A 304 return to an attempt to log into WordPress would seem a bit weird. All I really need is a unique code to find in the log file.
444 CONNECTION CLOSED WITHOUT RESPONSE
A non-standard status code used to instruct nginx to close the connection without sending a response to the client, most commonly used to deny malicious or malformed requests. This status code is not seen by the client; it only appears in nginx log files.

-----Original Message-----
From: B.R.
Sent: Monday, September 12, 2016 1:08 AM
To: nginx ML
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs

You could also generate 304 responses for content you won't provide (cf. return). nginx is good at dealing with loads of requests, no problem on that side. And since return generates an in-memory answer by default, you won't be hammering your resources. If you are CPU or RAM-limited because of those requests, then I would suggest you evaluate the sizing of your server(s).

You might wish to separate logging for these requests from the standard flow to improve their readability, or deactivate them altogether if you consider they add little-to-no value.

My 2¢,
---
B. R.

On Sun, Sep 11, 2016 at 9:16 PM, wrote:

https://www.nginx.com/blog/tuning-nginx/

I have far more faith in this write up regarding tuning than the anti-ddos, though both have similarities.

My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher.

I don't believe there is one answer here because it depends on how the user interacts with the website. I only serve static content. In fact, I only allow the verbs "head" and "get" to limit the attack surface. A page of text and photos itself can be many things. Think of a photo gallery versus a forum page.
The forum has mostly text sprinkled with avatar photos, while the gallery can be mostly images with just a line of text each.

Basically you need to experiment. Even then, your setup may be better or worse than the typical user. That said, if you limited the rate to 512k bytes per second, most users could achieve that rate.

I just don't see evidence of download managers. I see plenty of wget, curl, and python. Those people get my 444 treatment. I use the map module as indicated in my other post to do this.

What I haven't mentioned is filtering out machines. If you are really concerned about your system being overloaded, think about the search engines you want to support. Assuming you want Google, you need to set up your website in a manner so that Google knows you own it, then you can throttle it back. Google is maybe 20% of my referrals.

If you have a lot of photos, you can set up nginx to block hot linking. This is significant because Google Images will hot link everything you have. What you want is for Google itself to see your images, which it will present in reduced resolution, but block the Google hot link. If someone really wants to see your image, Google supplies the referral page.

http://nginx.org/en/docs/http/ngx_http_referer_module.html

I make my own domain valid, but maybe that is assumed. If you want to place a link to an image on your website in a forum, you need to make that forum valid.

Facebook will steal your images.
http://badbots.vps.tips/info/facebookexternalhit-bot

I would use the nginx map module since you will probably be blocking many bots.

Finally, you may want to block "the cloud" using your firewall. Only block the browser ports since mail servers will be on the cloud. I block all of AWS for example. My nginx.conf also flags certain requests such as logging into WordPress since I'm not using WordPress! Clearly that IP is a hacker. I have plenty more signatures in the map.
I have a script that pulls the IP addresses out of the access.log. I get maybe 20 addresses a day. I feed them to ip2location. Any address that goes to a cloud, VPS, colo, or hosting company gets added to the firewall blocking list. I don't just block the IP, but I use the Hurricane Electric BGP tool to get the entire IP space to block. As a rule, I don't block schools, libraries, or ISPs. The idea here is to allow eyeballs but not machines.

You can also use commercial blocking services if you trust them. (I don't.)

-----Original Message-----
From: Grant
Sent: Sunday, September 11, 2016 10:28 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: limit-req and greedy UAs

> This page has all the secret sauce, including how to limit the number of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case."

Should I basically duplicate my limit_req and limit_req_zone directives into limit_conn and limit_conn_zone? In what sort of situation would someone not do that?

- Grant

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Mon Sep 12 12:51:54 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Mon, 12 Sep 2016 08:51:54 -0400
Subject: limit-req and greedy UAs
In-Reply-To: <20160911152141.5484628.98176.10223@lazygranch.com>
References: <20160911152141.5484628.98176.10223@lazygranch.com>
Message-ID: <4159f4fa5336483595c0193bbf5d3b95.NginxMailingListEnglish@forum.nginx.org>

gariac Wrote:
-------------------------------------------------------
> This page has all the secret sauce, including how to limit the number
> of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case." Also
> note if you do streaming outside nginx, then you have to limit
> connections for that service in the program providing it.
>
> Mind you, while I think this page has good advice, what is listed here
> won't stop a real ddos attack. The first D is for distributed, meaning
> the attacks come from many IP addresses. You probably have to pay for
> one of those reverse proxy services to avoid a real ddos, but I
> personally find them a bit creepy since I have seen hacking
> attempts come from behind them.
>
> The tips on this nginx page will limit the teenage boy in his parents'
> basement, which is a more real life scenario to be attacked. But note
> that every photo you load is a request, so I wouldn't make the limit
> any lower than 5 to 10 per second. You can play with the limits and
> watch the results on your own system. Just remember to:
> service nginx reload
> service nginx restart
>
> If you do fancy caching, you may have to clear your browser cache.
>
> In theory, Google page ranking takes speed into account. There are
> many websites that will evaluate your nginx setup.
> https://www.webpagetest.org/
>
> One thing to remember is nginx limits are in bytes per second, not
> bits per second. So the 512k limit in this example is really quite
> generous.
> http://www.webhostingtalk.com/showthread.php?t=1433413
>
> There are programs you can run on your server to flog nginx.
> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
>
> I did this with httperf, but sysbench is supposed to be better. Nginx
> is very efficient. Your limiting factor will probably be your server
> network connection. If you sftp files from your server, it will be at
> the maximum rate you can deliver, and this depends on time of day
> since you are sharing the pipe.
I'm using a VPS that does 40mbps on a
> good day. Figure 10 users at a time and the 512 kbytes per second puts me
> at the limit.
>
> If you use the nginx map module, you can block download managers if
> they are honest with their user agents.
>
> http://nginx.org/en/docs/http/ngx_http_map_module.html
> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html
>
> Beware of creating false positives with such rules. When developing
> code, I return a 444 then search the access.log for what it found,
> just to ensure I wrote the rule correctly.
>
> -----Original Message-----
> From: Grant
> Sent: Sunday, September 11, 2016 5:30 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: limit-req and greedy UAs
>
> > What looks to me to be a real resource hog that quite frankly you
> can't do much about are download managers. They open up multiple
> connections, but the rate limits apply to each individual connection.
> (This is why you want to limit the number of connections.)
>
> Does this mean an attacker (for example) could get around rate limits
> by opening a new connection for each request? How are the number of
> connections limited?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

The following is a good resource also if you are having issues with slow DOS attacks where they are trying to keep connections open for long periods of time.
OWASP: https://www.owasp.org/index.php/SCG_WS_nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269435,269473#msg-269473

From mdounin at mdounin.ru Mon Sep 12 14:57:43 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 12 Sep 2016 17:57:43 +0300
Subject: nginx not returning updated headers from origin server on conditional GET
In-Reply-To: 
References: 
Message-ID: <20160912145743.GA1527@mdounin.ru>

Hello!

On Sun, Sep 11, 2016 at 06:56:17AM -0400, jchannon wrote:

> I have nginx and its cache working as expected apart from one minor issue.
> When a request is made for the first time it hits the origin server, returns
> a 200 and nginx caches that response. If I make another request I can see
> from the X-Cache-Status header that the cache has been hit. When I wait a
> while knowing the cache will have expired I can see nginx hit my origin
> server doing a conditional GET because I have proxy_cache_revalidate on;
> defined.
>
> When I check if the resource has changed in my app on the origin server I
> see it hasn't and return a 304 with a new Expires header. Some may argue why
> are you returning a new Expires header if the origin server says nothing has
> changed and you are returning 304. The answer is, the HTTP RFC says that
> this can be done: https://tools.ietf.org/html/rfc7234#section-4.3.4
>
> One thing I have noticed: no matter what headers I add or modify, when
> the origin server returns 304, nginx will give a response with the first set
> of response headers it saw for that resource.

Conditional revalidation as available with "proxy_cache_revalidate on" doesn't try to merge any new/updated headers into the stored response. This is by design: merging and updating headers would be just too costly.

This is normally not an issue, as you can (and should) use "Cache-Control: max-age=..." instead of Expires, and with max-age you don't need to update anything in the response.
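[The setup under discussion can be sketched with a minimal cache config like the following; the cache path, zone name, and backend address are placeholder values, not from the thread.]

```nginx
# Sketch with placeholder names (cache path, zone, backend address).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache app_cache;

        # Revalidate expired entries with conditional GETs instead of
        # refetching the body.  Headers on the 304 are not merged into
        # the stored response, so freshness should be driven by
        # "Cache-Control: max-age=..." on the original 200.
        proxy_cache_revalidate on;

        add_header X-Cache-Status $upstream_cache_status;
    }
}
```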
If you can't afford this behaviour for some reason, the only solution is to avoid using proxy_cache_revalidate.

--
Maxim Dounin
http://nginx.org/

From emailgrant at gmail.com Mon Sep 12 17:17:06 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 10:17:06 -0700
Subject: Don't process requests containing folders
Message-ID: 

My site doesn't have any folders in its URL structure, so I'd like to have nginx process any request which includes a folder (cheap 404) instead of sending the request to my backend (expensive 404). Currently I'm using a series of location blocks to check for a valid request. Here's the last one before nginx internal takes over:

location ~ (^/|.html)$ {
}

Can I expand that to only match requests with a single / or ending in .html like this:

location ~ (^[^/]+/?[^/]+$|.html$) {
}

Should that work as expected?

- Grant

From jschaeffer0922 at gmail.com Mon Sep 12 19:04:28 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:04:28 -0600
Subject: Connecting Nginx to LDAP/Kerberos
Message-ID: 

Greetings Nginx list,

I've set up git-http-backend on a sandbox nginx server to host my git projects inside my network. I'm trying to get everything set up so that I can require auth to that server block using SSO, which I have set up and working with LDAP and Kerberos. I have all my accounts in Kerberos, which is stored in OpenLDAP, and authentication works via GSSAPI. How do I get my git server block to use my central authentication? Does anybody have any experience in setting this up?

I've found a couple of git projects which enhance Nginx to support LDAP authentication:

- https://github.com/kvspb/nginx-auth-ldap
- https://github.com/nginxinc/nginx-ldap-auth

I've gone through the reference implementation (nginx-ldap-auth), but found that this won't work for me as I use GSSAPI for my authentication. Looking to see if anybody has done something like this and what their experience was.
Let me know if you'd like to see any of my nginx configuration files.

Thanks,
Joshua Schaeffer

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sca at andreasschulze.de Mon Sep 12 19:22:03 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 12 Sep 2016 21:22:03 +0200
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To: 
References: 
Message-ID: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>

On 12.09.2016 at 21:04, Joshua Schaeffer wrote:
> - https://github.com/kvspb/nginx-auth-ldap

I'm using that one to authenticate my users.

auth_ldap_cache_enabled on;
ldap_server my_ldap_server {
    url ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
    binddn cn=nginx,dc=mybase;
    binddn_passwd ...;
    require valid_user;
}

server {
    ...
    location / {
        auth_ldap "foobar";
        auth_ldap_servers "my_ldap_server";
        root /srv/www/...;
    }
}

This is like documented on https://github.com/kvspb/nginx-auth-ldap, except my auth_ldap statements are inside the location, while the docs suggest them outside. Q: does that matter?

I found it useful to explicitly set "auth_ldap_cache_enabled on" but cannot remember the detailed reasons. Finally: it's working as expected for me (basic auth, no Kerberos). BUT: I fail to compile this module with openssl-1.1.0. I sent a message to https://github.com/kvspb some days ago but got no response till now.
The problem (nginx-1.11.3 + openssl-1.1.0 + nginx-auth-ldap):

cc -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wall -I src/core -I src/event -I src/event/modules -I src/os/unix -I /opt/local/include -I objs -I src/http -I src/http/modules -I src/http/v2 \
    -o objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o \
    ./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c: In function 'ngx_http_auth_ldap_ssl_handshake':
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1325:79: error: dereferencing pointer to incomplete type
     int setcode = SSL_CTX_load_verify_locations(transport->ssl->connection->ctx, ^
./nginx-auth-ldap-20160428//ngx_http_auth_ldap_module.c:1335:80: error: dereferencing pointer to incomplete type
     int setcode = SSL_CTX_set_default_verify_paths(transport->ssl->connection->ctx); ^
make[2]: *** [objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o] Error 1
objs/Makefile:1343: recipe for target 'objs/addon/nginx-auth-ldap-20160428/ngx_http_auth_ldap_module.o' failed

Maybe the list has a suggestion...

From jschaeffer0922 at gmail.com Mon Sep 12 19:33:04 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:33:04 -0600
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
References: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID: 

>> I'm using that one to authenticate my users.
>
> auth_ldap_cache_enabled on;
> ldap_server my_ldap_server {
>     url ldaps://ldap.example.org/dc=users,dc=mybase?uid?sub;
>     binddn cn=nginx,dc=mybase;
>     binddn_passwd ...;
>     require valid_user;
> }
>
> server {
>     ...
>     location / {
>         auth_ldap "foobar";
>         auth_ldap_servers "my_ldap_server";
>         root /srv/www/...;
>     }
> }

Thanks, having a config to compare against is always helpful for me.
>
> This is like documented on https://github.com/kvspb/nginx-auth-ldap, except
> my auth_ldap statements are inside the location,
> while the docs suggest them outside.
> Q: does that matter?

From my understanding of Nginx, no; since location is lower in the hierarchy, it will just override any auth_ldap directives outside of it.

> I found it useful to explicitly set "auth_ldap_cache_enabled on" but cannot
> remember the detailed reasons.
> Finally: it's working as expected for me (basic auth, no Kerberos)

Any chance anybody has played around with Kerberos auth? Currently my SSO environment uses GSSAPI for most authentication.

Thanks,
Joshua Schaeffer

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sca at andreasschulze.de Mon Sep 12 19:37:51 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 12 Sep 2016 21:37:51 +0200
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To: 
References: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID: 

On 12.09.2016 at 21:33, Joshua Schaeffer wrote:
> Any chance anybody has played around with Kerberos auth? Currently my SSO
> environment uses GSSAPI for most authentication.

I also compile the module https://github.com/stnoonan/spnego-http-auth-nginx-module but I've no time to configure / learn how to configure it ... unfortunately ...

Andreas

From jschaeffer0922 at gmail.com Mon Sep 12 19:52:16 2016
From: jschaeffer0922 at gmail.com (Joshua Schaeffer)
Date: Mon, 12 Sep 2016 13:52:16 -0600
Subject: Connecting Nginx to LDAP/Kerberos
In-Reply-To: 
References: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de>
Message-ID: 

On Mon, Sep 12, 2016 at 1:37 PM, A. Schulze wrote:
>
> On 12.09.2016 at 21:33, Joshua Schaeffer wrote:
>> Any chance anybody has played around with Kerberos auth? Currently my SSO
>> environment uses GSSAPI for most authentication.
>>
>
> I also compile the module https://github.com/stnoonan/spnego-http-auth-nginx-module
> but I've no time to configure / learn how to configure it
> ... unfortunately ...

I did actually see this module as well, but didn't look into it too much. Perhaps it would be best for me to take a closer look and then report back on what I find.

Thanks,
Joshua Schaeffer

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emailgrant at gmail.com Mon Sep 12 20:23:08 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 13:23:08 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160911191606.5484628.46851.10233@lazygranch.com>
References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com>
Message-ID: 

> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write up regarding tuning than the anti-ddos, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. The 512k rate limit is fine. I wouldn't go any higher.

If I understand correctly, limit_req only works if the same connection is used for each request. My goal with limit_conn and limit_conn_zone would be to prevent someone from circumventing limit_req by opening a new connection for each request. Given that, why would my limit_conn/limit_conn_zone config be any different from my limit_req/limit_req_zone config?

- Grant

> Should I basically duplicate my limit_req and limit_req_zone
> directives into limit_conn and limit_conn_zone? In what sort of
> situation would someone not do that?
>
> - Grant

From francis at daoine.org Mon Sep 12 20:27:14 2016
From: francis at daoine.org (Francis Daly)
Date: Mon, 12 Sep 2016 21:27:14 +0100
Subject: Don't process requests containing folders
In-Reply-To: 
References: 
Message-ID: <20160912202714.GE11677@daoine.org>

On Mon, Sep 12, 2016 at 10:17:06AM -0700, Grant wrote:

Hi there,

> My site doesn't have any folders in its URL structure so I'd like to
> have nginx process any request which includes a folder (cheap 404)
> instead of sending the request to my backend (expensive 404).

The location-matching rules are at http://nginx.org/r/location

At the point of location-matching, nginx does not know anything about folders; it only knows about the incoming request and the defined "location" patterns. That probably sounds like it is being pedantic; but once you know what the rules are, it may be clearer how to configure nginx to do what you want.

"doesn't have any folders" might mean "no valid url has a second slash". (Unless you are using something like a fastcgi service which makes use of PATH_INFO.)

> Currently I'm using a series of location blocks to check for a valid
> request. Here's the last one before nginx internal takes over:
>
> location ~ (^/|.html)$ {
> }

I think that says "is exactly /, or ends in html". It might be simpler to understand if you write it as two locations:

location = / {}
location ~ html$ {}

partly because if that is *not* what you want, that should be obvious from the simpler expression.

I'm actually not sure whether this is intended to be the "good" request, or the "bad" request. If it is the "bad" one, then "return 404;" can easily be copied in to each. If it is the "good" one, with a complicated config, then you may need to have many duplicate lines in the two locations; or just "include" a file with the "good" configuration.
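[Put together, the two-location split described above might look like the following sketch; "good.conf" is a hypothetical include name for the shared configuration, not something from the thread.]

```nginx
# Sketch of the split described above; good.conf is a hypothetical
# include holding the shared "good request" configuration.
server {
    listen 80;

    # Exactly "/" is a good request.
    location = / {
        include good.conf;
    }

    # Any URL with a second slash is a "folder": answer the 404 here
    # cheaply instead of bothering the backend.  Listed before the
    # html$ regex so it wins for URLs like /dir/page.html.
    location ~ ^/.*/ {
        return 404;
    }

    # A single path component ending in html is also good.
    location ~ html$ {
        include good.conf;
    }

    # Everything else: cheap 404.
    location / {
        return 404;
    }
}
```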
> Can I expand that to only match requests with a single / or ending in
> .html like this:
>
> location ~ (^[^/]+/?[^/]+$|.html$) {

Since every real request starts with a /, I think that that pattern effectively says "ends in html", which matches fewer requests than the earlier one.

> Should that work as expected?

Only if you expect it to be the same as "location ~ html$ {}". So: probably "no".

If you want to match "requests with a second slash", do just that:

location ~ ^/.*/ {}

(the "^" is not necessary there, but I guess-without-testing that it helps.)

If you want to match "requests without a second slash", you could do

location ~ ^/[^/]*$ {}

but I suspect you'll be better off with the positive match, plus a "location /" for "all the rest".

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From emailgrant at gmail.com Mon Sep 12 20:55:35 2016
From: emailgrant at gmail.com (Grant)
Date: Mon, 12 Sep 2016 13:55:35 -0700
Subject: Don't process requests containing folders
In-Reply-To: <20160912202714.GE11677@daoine.org>
References: <20160912202714.GE11677@daoine.org>
Message-ID: 

>> My site doesn't have any folders in its URL structure so I'd like to
>> have nginx process any request which includes a folder (cheap 404)
>> instead of sending the request to my backend (expensive 404).
>
>> Currently I'm using a series of location blocks to check for a valid
>> request. Here's the last one before nginx internal takes over:
>>
>> location ~ (^/|.html)$ {
>> }
>
> I think that says "is exactly /, or ends in html".

Yes, that is my intention.

> I'm actually not sure whether this is intended to be the "good"
> request, or the "bad" request. If it is the "bad" one, then "return
> 404;" can easily be copied in to each. If it is the "good" one, with a
> complicated config, then you may need to have many duplicate lines in
> the two locations; or just "include" a file with the "good" configuration.

That's the good request.
I do need it in multiple locations but an include is working well for that. >> Can I expand that to only match requests with a single / or ending in >> .html like this: >> >> location ~ (^[^/]+/?[^/]+$|.html$) { > > Since every real request starts with a /, I think that that pattern > effectively says "ends in html", which matches fewer requests than the > earlier one. That is not what I intended. > If you want to match "requests with a second slash", do just that: > > location ~ ^/.*/ {} > > (the "^" is not necessary there, but I guess-without-testing that > it helps.) When you say it helps, you mean for performance? > If you want to match "requests without a second slash", you could do > > location ~ ^/[^/]*$ {} > > but I suspect you'll be better off with the positive match, plus a > "location /" for "all the rest". I want to keep my location blocks to a minimum so I think I should use the following as my last location block which will send all remaining good requests to my backend: location ~ (^/[^/]*|.html)$ {} And let everything else match the following, most of which will 404 (cheaply): location / { internal; } - Grant From r1ch+nginx at teamliquid.net Mon Sep 12 21:39:38 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 12 Sep 2016 23:39:38 +0200 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> Message-ID: limit_req works with multiple connections, it is usually configured per IP using $binary_remote_addr. See http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone - you can use variables to set the key to whatever you like. limit_req generally helps protect eg your backend against request floods from a single IP and any amount of connections. 
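Keyed per client IP, the arrangement described above might look like the following sketch (zone name, size, rate and burst are placeholders):

```nginx
# one state entry per client address; 10m of shared memory for the zone
limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # queue short bursts, reject sustained floods (503 by default)
        limit_req zone=req_per_ip burst=20;
    }
}
```

Because the key is the client address rather than the connection, the limit applies across however many connections that address opens.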
limit_conn protects against excessive connections tying up resources on the webserver itself. On Mon, Sep 12, 2016 at 10:23 PM, Grant wrote: > > ?https://www.nginx.com/blog/tuning-nginx/ > > > > ?I have far more faith in this write up regarding tuning than the > anti-ddos, though both have similarities. > > > > My interpretation is the user bandwidth is connections times rate. But > you can't limit the connection to one because (again my interpretation) > there can be multiple users behind one IP. Think of a university reading > your website. Thus I am more comfortable limiting bandwidth than I am > limiting the number of connections. ?The 512k rate limit is fine. I > wouldn't go any higher. > > > If I understand correctly, limit_req only works if the same connection > is used for each request. My goal with limit_conn and limit_conn_zone > would be to prevent someone from circumventing limit_req by opening a > new connection for each request. Given that, why would my > limit_conn/limit_conn_zone config be any different from my > limit_req/limit_req_zone config? > > - Grant > > > > Should I basically duplicate my limit_req and limit_req_zone > > directives into limit_conn and limit_conn_zone? In what sort of > > situation would someone not do that? > > > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Sep 12 21:49:30 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 12 Sep 2016 22:49:30 +0100 Subject: Don't process requests containing folders In-Reply-To: References: <20160912202714.GE11677@daoine.org> Message-ID: <20160912214930.GF11677@daoine.org> On Mon, Sep 12, 2016 at 01:55:35PM -0700, Grant wrote: Hi there, > > If you want to match "requests with a second slash", do just that: > > > > location ~ ^/.*/ {} > > > > (the "^" is not necessary there, but I guess-without-testing that > > it helps.) > > When you say it helps, you mean for performance? Yes - I guess that anchoring this regex at a point where it will always match anyway, will do no harm. > > If you want to match "requests without a second slash", you could do > > > > location ~ ^/[^/]*$ {} > > > > but I suspect you'll be better off with the positive match, plus a > > "location /" for "all the rest". > > > I want to keep my location blocks to a minimum so I think I should use > the following as my last location block which will send all remaining > good requests to my backend: > > location ~ (^/[^/]*|.html)$ {} Yes, that should do what you describe. Note that the . is a metacharacter for "any one"; if you really want the five-character string ".html" at the end of the request, you should escape the . to \. > And let everything else match the following, most of which will 404 (cheaply): > > location / { internal; } Testing and measuring might show that "return 404;" is even cheaper than "internal;" in the cases where they have the same output. But if there are cases where the difference in output matters, or if the difference is not measurable, then leaving it as-is is fine. 
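Put together, the positive match with the escaped dot plus the cheap catch-all discussed above might read (the include file name is an assumption):

```nginx
location ~ (^/[^/]*|\.html)$ {
    include /etc/nginx/good.conf;   # shared "good request" handling
}

location / {
    return 404;                     # everything else fails cheaply in nginx
}
```

Regex locations are checked first, so the prefix "location /" only catches requests the pattern rejects.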
Cheers, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Mon Sep 12 22:30:01 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 12 Sep 2016 15:30:01 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> Message-ID: <20160912223001.5484629.85886.10299@lazygranch.com> Most of the chatter on the interwebs believes that the rate limit is per connection, so if some IP opens up multiple connections, they get more bandwidth.? It shouldn't be that hard to just test this by installing a manager and seeing what happens. I will give this a try tonight, but hopefully someone will beat me to it. Relevant post follows: ?----------- On 17 February 2014 10:02, Bozhidara Marinchovska wrote:? > My question is what may be the reason when downloading the example file with > download manager not to match limit_rate directive "Download managers" open multiple connections and grab different byte ranges of the same file across those connections. Nginx's limit_rate function limits the data transfer rate of a single connection.? ? http://mailman.nginx.org/pipermail/nginx/2014-February/042337.html ------- ? ? Original Message ? From: Richard Stanway Sent: Monday, September 12, 2016 2:39 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit-req and greedy UAs limit_req works with multiple connections, it is usually configured per IP using $binary_remote_addr. See http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone - you can use variables to set the key to whatever you like. limit_req generally helps protect eg your backend against request floods from a single IP and any amount of connections. limit_conn protects against excessive connections tying up resources on the webserver itself. 
On Mon, Sep 12, 2016 at 10:23 PM, Grant wrote: > ?https://www.nginx.com/blog/tuning-nginx/ > > ?I have far more faith in this write up regarding tuning than the anti-ddos, though both have similarities. > > My interpretation is the user bandwidth is connections times rate. But you can't limit the connection to one because (again my interpretation) there can be multiple users behind one IP. Think of a university reading your website. Thus I am more comfortable limiting bandwidth than I am limiting the number of connections. ?The 512k rate limit is fine. I wouldn't go any higher. If I understand correctly, limit_req only works if the same connection is used for each request.? My goal with limit_conn and limit_conn_zone would be to prevent someone from circumventing limit_req by opening a new connection for each request.? Given that, why would my limit_conn/limit_conn_zone config be any different from my limit_req/limit_req_zone config? - Grant > Should I basically duplicate my limit_req and limit_req_zone > directives into limit_conn and limit_conn_zone? In what sort of > situation would someone not do that? > > - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From emailgrant at gmail.com Mon Sep 12 23:32:28 2016 From: emailgrant at gmail.com (Grant) Date: Mon, 12 Sep 2016 16:32:28 -0700 Subject: Don't process requests containing folders In-Reply-To: <20160912214930.GF11677@daoine.org> References: <20160912202714.GE11677@daoine.org> <20160912214930.GF11677@daoine.org> Message-ID: >> location ~ (^/[^/]*|.html)$ {} > > Yes, that should do what you describe. I realize now that I didn't define the requirement properly. I said: "match requests with a single / or ending in .html" but what I need is: "match requests with a single / *and* ending in .html, also match /". Will this do it: location ~ ^(/[^/]*\.html|/)$ {} > Note that the . 
is a metacharacter for "any one"; if you really want > the five-character string ".html" at the end of the request, you should > escape the . to \. Fixed. Do I ever need to escape / in location blocks? >> And let everything else match the following, most of which will 404 (cheaply): >> >> location / { internal; } > > Testing and measuring might show that "return 404;" is even cheaper than > "internal;" in the cases where they have the same output. But if there > are cases where the difference in output matters, or if the difference > is not measurable, then leaving it as-is is fine. I'm sure you're right. I'll switch to: location / { return 404; } - Grant From cainjonm at gmail.com Tue Sep 13 04:29:21 2016 From: cainjonm at gmail.com (Cain) Date: Tue, 13 Sep 2016 16:29:21 +1200 Subject: Websockets - recommended settings question In-Reply-To: References: Message-ID: Hi, In the nginx documentation (https://www.nginx.com/blog/websocket-nginx), it is recommended to set the 'Connection' header to 'close' (if there is no upgrade header) - from my understanding, this disables keep alive from nginx to the upstream - is there a reason for this? Additionally, is keep alive the default behaviour when connecting to upstreams? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Sep 13 06:13:59 2016 From: nginx-forum at forum.nginx.org (maltris) Date: Tue, 13 Sep 2016 02:13:59 -0400 Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams In-Reply-To: References: Message-ID: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> The problem still seems to persist. I am now trying to investigate this myself. Any advise for debugging? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268306,269498#msg-269498

From lists at lazygranch.com  Tue Sep 13 06:54:01 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Mon, 12 Sep 2016 23:54:01 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160912223001.5484629.85886.10299@lazygranch.com>
References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> <20160912223001.5484629.85886.10299@lazygranch.com>
Message-ID: <20160912235401.256f3667@linux-h57q.site>

Seeing that nobody beat me to it, I did the download manager experiment.
There are plugins for Chromium to do multiple connections, but I figured
a stand-alone program was safer. (No use adding strange software to a
reasonably secure browser.) My Linux distro has prozilla in the repo. In
true Linux tradition, the actual executable is not prozilla but rather
proz.

I requested 8 connections, but I could never get more than 5 running at
a time. I allow 10 in the setup, so something else is the limiting
factor. Be that as it may, I achieved multiple connections, which is all
that is required to test the rate limiting.

Using proz, I achieved about 4 Mbps when all connections were running.
Just downloading from the browser, the network manager reports rates of
500k to 600k bytes/second.

Conclusion: nginx rate limiting is not "gamed" by using multiple
connections to download ONE file using a download manager.

The next experiment is to download two different files at 4 connections
each with the download manager. I got 1.1 Mbps and 1.4 Mbps, which when
summed together is actually less than the rate limit.

Conclusion: nginx rate limiting still works with 8 connections. Someone
else should try to duplicate this in the event it has something to do
with my setup.
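For reference, a limit_rate setup of the sort being exercised in this experiment might look like (path and values are illustrative):

```nginx
location /downloads/ {
    limit_rate_after 1m;    # first megabyte at full speed
    limit_rate       512k;  # then roughly 512 KB/s
}
```

The nginx documentation describes limit_rate as applying per connection, which is exactly what a multi-connection download-manager test probes.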
On Mon, 12 Sep 2016 15:30:01 -0700 lists at lazygranch.com wrote: > Most of the chatter on the interwebs believes that the rate limit is > per connection, so if some IP opens up multiple connections, they get > more bandwidth.? > > It shouldn't be that hard to just test this by installing a manager > and seeing what happens. I will give this a try tonight, but > hopefully someone will beat me to it. > > Relevant post follows: > ?----------- > On 17 February 2014 10:02, Bozhidara Marinchovska > wrote:? > > My question is what may be the reason when downloading the example > > file with download manager not to match limit_rate directive > > "Download managers" open multiple connections and grab different byte > ranges of the same file across those connections. Nginx's limit_rate > function limits the data transfer rate of a single connection.? > > ? > http://mailman.nginx.org/pipermail/nginx/2014-February/042337.html > ------- > ? > ? Original Message ? > From: Richard Stanway > Sent: Monday, September 12, 2016 2:39 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: limit-req and greedy UAs > > limit_req works with multiple connections, it is usually configured > per IP using $binary_remote_addr. See > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone > - you can use variables to set the key to whatever you like. > > limit_req generally helps protect eg your backend against request > floods from a single IP and any amount of connections. limit_conn > protects against excessive connections tying up resources on the > webserver itself. > > On Mon, Sep 12, 2016 at 10:23 PM, Grant wrote: > > ?https://www.nginx.com/blog/tuning-nginx/ > > > > ?I have far more faith in this write up regarding tuning than the > > anti-ddos, though both have similarities. > > > > My interpretation is the user bandwidth is connections times rate. 
> > But you can't limit the connection to one because (again my > > interpretation) there can be multiple users behind one IP. Think of > > a university reading your website. Thus I am more comfortable > > limiting bandwidth than I am limiting the number of > > connections. ?The 512k rate limit is fine. I wouldn't go any higher. > > > If I understand correctly, limit_req only works if the same connection > is used for each request.? My goal with limit_conn and limit_conn_zone > would be to prevent someone from circumventing limit_req by opening a > new connection for each request.? Given that, why would my > limit_conn/limit_conn_zone config be any different from my > limit_req/limit_req_zone config? > > - Grant > > > > Should I basically duplicate my limit_req and limit_req_zone > > directives into limit_conn and limit_conn_zone? In what sort of > > situation would someone not do that? > > > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Sep 13 07:15:13 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Tue, 13 Sep 2016 03:15:13 -0400 Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams In-Reply-To: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> References: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3d9c5d6b1dba35a0c8aba952ea0e117f.NginxMailingListEnglish@forum.nginx.org> I've noticed the same problem between Nginx Proxy (Win) and CentOS based Apache 2.4 Backends. So I finally changed all backends to nginx+php-fpm... 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268306,269500#msg-269500 From nginx-forum at forum.nginx.org Tue Sep 13 08:09:02 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 04:09:02 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers Message-ID: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org> So I noticed some unusual stuff going on lately mostly to do with people using proxies to spoof / fake that files from my sites are hosted of their sites. Sitting behind CloudFlare the only decent way I can come up with to prevent these websites who use proxy_pass and proxy_set_header to pretend that files they are really hotlinking of my site is on and hosted by theirs is using Nginx's built in Anti-DDoS feature. Now if I was to use "$binary_remote_addr" I would end up blocking CloudFlare IP's from serving traffic but CloudFlare do provide us with the real IP address of users that pass through their service. It comes in the form of "HTTP_CF_CONNECTING_IP" But when it comes to limiting files that are being hot linked to break their servers from serving traffic they are stealing from mine I don't know if I should be using "$http_cf_connecting_ip" or the equivalent with "$binary_" ? limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m; limit_conn_zone $http_cf_connecting_ip zone=addr:10m; location ~ \.mp4$ { limit_conn addr 10; #Limit open connections from same ip limit_req zone=one; #Limit max number of requests from same ip mp4; limit_rate_after 1m; #Limit download rate limit_rate 1m; #Limit download rate root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; expires max; valid_referers none blocked networkflare.com *.networkflare.com; if ($invalid_referer) { return 403; } } So the above is my config that should work I have not tested it yet but I really wanted to know what the purpose of the "$binary_" on these would be and if i should make them resemble this. 
(Not even sure if the below is correct; I am sure someone will correct me
if "$binary_http_cf_connecting_ip" won't work.)

limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m;
limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m;

Thanks for reading :) looking forward to anyone's better ideas / solutions
and also recommended changes for preventing stealing of my bandwidth on
these kinds of static files that can be up to >=2GB in size.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269502#msg-269502

From lists at lazygranch.com  Tue Sep 13 08:28:38 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Tue, 13 Sep 2016 01:28:38 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160912235401.256f3667@linux-h57q.site>
References: <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> <20160912223001.5484629.85886.10299@lazygranch.com> <20160912235401.256f3667@linux-h57q.site>
Message-ID: <20160913082838.5484629.90089.10314@lazygranch.com>

Re-reading the original post, it was concluded that multiple connections
don't affect the rate limiting. I interpreted this incorrectly the first
time:

"Nginx's limit_rate function limits the data transfer rate of a single
connection."

But I'm certain a few posts, perhaps not on the nginx forum, state
incorrectly that the limiting is per individual connection rather than
all the connections in total.

From sosogh at mail.com  Tue Sep 13 08:30:11 2016
From: sosogh at mail.com (sosogh)
Date: Tue, 13 Sep 2016 16:30:11 +0800
Subject: upstream prematurely closed connection while reading response header from upstream
Message-ID: <201609131630084279375@mail.com>

Hi list

My topology is: client ---> nginx 1.6.2 (port 80) ---> nginx 0.7.69 with
mogilefs module (port 2080) ---> mogilefs .
I want to upload a 8G file to mogilefs , the uploading URL is http://dfs.myclouds.com/upload/glance_prod_env/d29a0a4a-7888-487e-91b5-57e9bbf351e7 There are errors , I have enabled debuging in both nginx instances , but seems that they are not detailed enough. nginx 1.6.2 ====== config ------- server { listen 80; listen 8081; server_name dfs.myclouds.com; charset utf-8; ssi on; access_log /data2/log/nginx/dfs-1.6.2.access.log main; error_log /data2/log/nginx/dfs-1.6.2-debug.log debug; client_max_body_size 30g; send_timeout 1800; keepalive_timeout 1800; proxy_read_timeout 1800; proxy_send_timeout 1800; proxy_connect_timeout 1800; location /upload/ { expires -1; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 3600; proxy_send_timeout 3600; proxy_pass http://127.0.0.1:2080; } } debug log ------------- 2016/09/13 15:57:14 [warn] 20096#0: *6140596434 a client request body is buffered to a temporary file /usr/local/nginx-1.6.2/client_body_temp/0001956429, client: 10.21.176.4, server: dfs.myclouds.com, request: "PUT /upload/glance_prod_env/d29a0a4a-7888-487e-91b5-57e9bbf351e7 HTTP/1.1", host: "dfs.myclouds.com" 2016/09/13 16:00:17 [error] 20096#0: *6140596434 upstream prematurely closed connection while reading response header from upstream, client: 10.21.176.4, server: dfs.myclouds.com, request: "PUT /upload/glance_prod_env/d29a0a4a-7888-487e-91b5-57e9bbf351e7 HTTP/1.1", upstream: "http://127.0.0.1:2080/upload/glance_prod_env/d29a0a4a-7888-487e-91b5-57e9bbf351e7", host: "dfs.myclouds.com" nginx 0.7.69 ========= config ------ server { listen 2080; server_name dfs.myclouds.com; charset utf-8; ssi on; access_log /data2/log/nginx/dfs2.access.log main; error_log /data2/log/nginx/error2.log debug; client_max_body_size 30g; send_timeout 1800; keepalive_timeout 1800; proxy_read_timeout 1800; proxy_send_timeout 1800; 
proxy_connect_timeout 1800; location /upload/ { mogilefs_tracker 127.0.0.1:7001; mogilefs_domain mycloudsdfs; mogilefs_methods PUT DELETE; mogilefs_pass { proxy_pass $mogilefs_path; proxy_hide_header Content-Type; proxy_buffering off; } } client_body_temp_path /data/nginx-0.7.69-client_body_temp; mogilefs_tracker 127.0.0.1:7001; mogilefs_domain mycloudsdfs; mogilefs_methods PUT DELETE; mogilefs_pass { proxy_pass $mogilefs_path; proxy_hide_header Content-Type; proxy_buffering off; } } } debug log ------------ 2016/09/13 15:58:43 [warn] 8786#0: *3629426 a client request body is buffered to a temporary file /data/nginx-0.7.69-client_body_temp/0000007407, client: 127.0.0.1, server: dfs.myclouds.com, request: "PUT /upload/glance_prod_env/d29a0a4a-7888-487e-91b5-57e9bbf351e7 HTTP/1.1", host: "dfs.myclouds.com" My question is that : How can I find out what cause this problem "upstream prematurely closed connection while reading response header from upstream". Thank you ! sosogh -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Sep 13 08:33:09 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 13 Sep 2016 01:33:09 -0700 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org> References: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160913083309.5484629.52760.10318@lazygranch.com> ?What about Roboo? It requires a cookie on the website before the download takes place. (My usual warning this is my understanding of how it works, but I have no first hand knowledge.) I presume the hot linkers won't have the cookie. https://github.com/yuri-gushin/Roboo ? Original Message ? 
From: c0nw0nk Sent: Tuesday, September 13, 2016 1:09 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers So I noticed some unusual stuff going on lately mostly to do with people using proxies to spoof / fake that files from my sites are hosted of their sites. Sitting behind CloudFlare the only decent way I can come up with to prevent these websites who use proxy_pass and proxy_set_header to pretend that files they are really hotlinking of my site is on and hosted by theirs is using Nginx's built in Anti-DDoS feature. Now if I was to use "$binary_remote_addr" I would end up blocking CloudFlare IP's from serving traffic but CloudFlare do provide us with the real IP address of users that pass through their service. It comes in the form of "HTTP_CF_CONNECTING_IP" But when it comes to limiting files that are being hot linked to break their servers from serving traffic they are stealing from mine I don't know if I should be using "$http_cf_connecting_ip" or the equivalent with "$binary_" ? limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m; limit_conn_zone $http_cf_connecting_ip zone=addr:10m; location ~ \.mp4$ { limit_conn addr 10; #Limit open connections from same ip limit_req zone=one; #Limit max number of requests from same ip mp4; limit_rate_after 1m; #Limit download rate limit_rate 1m; #Limit download rate root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; expires max; valid_referers none blocked networkflare.com *.networkflare.com; if ($invalid_referer) { return 403; } } So the above is my config that should work I have not tested it yet but I really wanted to know what the purpose of the "$binary_" on these would be and if i should make them resemble this. (Not even sure if the below is correct I am sure someone will correct me if "$binary_http_cf_connecting_ip" won't work.) 
limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m; limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m; Thanks for reading :) looking forward to anyone's better idea's / solutions and also recommended changes to preventing stealing of my bandwidth on these kinds of static files that can be up to >=2GB in size. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269502#msg-269502 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Sep 13 08:51:50 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 04:51:50 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <20160913083309.5484629.52760.10318@lazygranch.com> References: <20160913083309.5484629.52760.10318@lazygranch.com> Message-ID: I was going to do a cookie method but its bad because on browsers with no cookies that make legitimate requests (first time visitors maybe that don't have a cookie set) or browsers on legitimate users who disable cookies or use extensions / add-ons to only whitelist cookies from sites they specifically allow like facebook, youtube etc. So that's why I decide to peruse the connection and requests per second / min limits because it can't be spoofed by the server proxying / making the request. It is so easy for me to proxy and spoof those client headers its pretty funny. proxy_set_header "User-Agent" "custom agent"; proxy_set_header "Cookie" "cookiename=cookievalue"; proxy_set_header "Referer" "networkflare.com"; And my example above is why I am not trusting the client for anything and want to go with the one thing they can't fake to me their IP. gariac Wrote: ------------------------------------------------------- > ?What about Roboo? It requires a cookie on the website before the > download takes place. 
(My usual warning this is my understanding of > how it works, but I have no first hand knowledge.) I presume the hot > linkers won't have the cookie. > > https://github.com/yuri-gushin/Roboo > > ? Original Message ? > From: c0nw0nk > Sent: Tuesday, September 13, 2016 1:09 AM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's > servers > > So I noticed some unusual stuff going on lately mostly to do with > people > using proxies to spoof / fake that files from my sites are hosted of > their > sites. > > Sitting behind CloudFlare the only decent way I can come up with to > prevent > these websites who use proxy_pass and proxy_set_header to pretend that > files > they are really hotlinking of my site is on and hosted by theirs is > using > Nginx's built in Anti-DDoS feature. > > Now if I was to use "$binary_remote_addr" I would end up blocking > CloudFlare > IP's from serving traffic but CloudFlare do provide us with the real > IP > address of users that pass through their service. > It comes in the form of "HTTP_CF_CONNECTING_IP" > > But when it comes to limiting files that are being hot linked to break > their > servers from serving traffic they are stealing from mine I don't know > if I > should be using "$http_cf_connecting_ip" or the equivalent with > "$binary_" > ? 
> > limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m; > limit_conn_zone $http_cf_connecting_ip zone=addr:10m; > > location ~ \.mp4$ { > limit_conn addr 10; #Limit open connections from same ip > limit_req zone=one; #Limit max number of requests from same ip > > mp4; > limit_rate_after 1m; #Limit download rate > limit_rate 1m; #Limit download rate > root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; > expires max; > valid_referers none blocked networkflare.com *.networkflare.com; > if ($invalid_referer) { > return 403; > } > } > > So the above is my config that should work I have not tested it yet > but I > really wanted to know what the purpose of the "$binary_" on these > would be > and if i should make them resemble this. (Not even sure if the below > is > correct I am sure someone will correct me if > "$binary_http_cf_connecting_ip" > won't work.) > > limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m; > limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m; > > Thanks for reading :) looking forward to anyone's better idea's / > solutions > and also recommended changes to preventing stealing of my bandwidth on > these > kinds of static files that can be up to >=2GB in size. 
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,269502,269502#msg-269502 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269506#msg-269506 From nginx-forum at forum.nginx.org Tue Sep 13 09:26:54 2016 From: nginx-forum at forum.nginx.org (maltris) Date: Tue, 13 Sep 2016 05:26:54 -0400 Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams In-Reply-To: <3d9c5d6b1dba35a0c8aba952ea0e117f.NginxMailingListEnglish@forum.nginx.org> References: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> <3d9c5d6b1dba35a0c8aba952ea0e117f.NginxMailingListEnglish@forum.nginx.org> Message-ID: hheiko Wrote: ------------------------------------------------------- > I've noticed the same problem between Nginx Proxy (Win) and CentOS > based Apache 2.4 Backends. So I finally changed all backends to > nginx+php-fpm... What version of nginx are you running on Windows? (Asking because I am just setting up a test environment for testing that.) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268306,269507#msg-269507 From nginx-forum at forum.nginx.org Tue Sep 13 09:34:30 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 05:34:30 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: <20160913083309.5484629.52760.10318@lazygranch.com> Message-ID: > gariac Wrote: > ------------------------------------------------------- > > ?What about Roboo? It requires a cookie on the website before the > > download takes place. (My usual warning this is my understanding of > > how it works, but I have no first hand knowledge.) 
I presume the
> hot
> > linkers won't have the cookie.
> >
> > https://github.com/yuri-gushin/Roboo

On top of my previously posted example, bypass that with a

proxy_set_header Cookie "cookiename=cookievalue";

I don't know why anyone would use that if all it does is require a cookie
before the download; you could achieve the same even more simply like this:

#If the client has no cookies. (Note: "=" in nginx is a literal string
#comparison, not a regex match, so test against "" rather than "^$".)
if ($http_cookie = "") {
return 444;
}

Or as a whitelist:

if ($cookie_cookiename != "cookievalue") {
return 444;
}

But a fake proxy stealing your traffic can bypass that with this:

proxy_set_header Cookie "cookiename=cookievalue";

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269508#msg-269508

From lists at lazygranch.com  Tue Sep 13 09:34:39 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Tue, 13 Sep 2016 02:34:39 -0700
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: 
References: <20160913083309.5484629.52760.10318@lazygranch.com> 
Message-ID: <20160913093439.5484629.4569.10328@lazygranch.com>

I'm assuming at this point that if cookies are too much, then logins or
captchas aren't going to happen.

How about just blocking the offending websites at the firewall? I'm
assuming you see the proxy and not the eyeballs at the ISP.

I have my hacker detection schemes in nginx. I flag the clowns, yank the
IPs every day or so, and block the IP space of any VPS, colo, etc. I have
blocked so much of the hacker IP space that I can go days before finding a
new VPS/etc to feed the firewall. Amazon, Google hosting, Rackspace,
Linode, Digital Ocean, SoftLayer and especially Ubiquity/Nobis are
probably 3/4 of the clowns. Machines are not eyeballs, or in your case,
ear canals. Block 'em.

Oh yeah, I block Cloud Flare.

----- Original Message -----
From: c0nw0nk
Sent: Tuesday, September 13, 2016 1:52 AM
To: nginx at nginx.org
Reply To: nginx at nginx.org
Subject: Re: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers

I was going to use a cookie method, but it's a bad fit: it breaks for
browsers making legitimate requests without cookies (first-time visitors,
say, who don't have a cookie set yet), and for legitimate users who
disable cookies or use extensions / add-ons to whitelist cookies only for
sites they specifically allow, like Facebook, YouTube, etc.

So that's why I decided to pursue the connection and requests per second /
minute limits instead, because those can't be spoofed by the server
proxying / making the request.

It is so easy for me to proxy and spoof those client headers, it's pretty
funny.

proxy_set_header "User-Agent" "custom agent";
proxy_set_header "Cookie" "cookiename=cookievalue";
proxy_set_header "Referer" "networkflare.com";

And my example above is why I am not trusting the client for anything, and
why I want to go with the one thing they can't fake to me: their IP.

gariac Wrote:
-------------------------------------------------------
> What about Roboo? It requires a cookie on the website before the
> download takes place. (My usual warning this is my understanding of
> how it works, but I have no first hand knowledge.) I presume the hot
> linkers won't have the cookie.
>
> https://github.com/yuri-gushin/Roboo
>
> ----- Original Message -----
> From: c0nw0nk
> Sent: Tuesday, September 13, 2016 1:09 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's
> servers
>
> So I noticed some unusual stuff going on lately mostly to do with
> people
> using proxies to spoof / fake that files from my sites are hosted of
> their
> sites.
> > Sitting behind CloudFlare the only decent way I can come up with to > prevent > these websites who use proxy_pass and proxy_set_header to pretend that > files > they are really hotlinking of my site is on and hosted by theirs is > using > Nginx's built in Anti-DDoS feature. > > Now if I was to use "$binary_remote_addr" I would end up blocking > CloudFlare > IP's from serving traffic but CloudFlare do provide us with the real > IP > address of users that pass through their service. > It comes in the form of "HTTP_CF_CONNECTING_IP" > > But when it comes to limiting files that are being hot linked to break > their > servers from serving traffic they are stealing from mine I don't know > if I > should be using "$http_cf_connecting_ip" or the equivalent with > "$binary_" > ? > > limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m; > limit_conn_zone $http_cf_connecting_ip zone=addr:10m; > > location ~ \.mp4$ { > limit_conn addr 10; #Limit open connections from same ip > limit_req zone=one; #Limit max number of requests from same ip > > mp4; > limit_rate_after 1m; #Limit download rate > limit_rate 1m; #Limit download rate > root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; > expires max; > valid_referers none blocked networkflare.com *.networkflare.com; > if ($invalid_referer) { > return 403; > } > } > > So the above is my config that should work I have not tested it yet > but I > really wanted to know what the purpose of the "$binary_" on these > would be > and if i should make them resemble this. (Not even sure if the below > is > correct I am sure someone will correct me if > "$binary_http_cf_connecting_ip" > won't work.) 
> > limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m; > limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m; > > Thanks for reading :) looking forward to anyone's better idea's / > solutions > and also recommended changes to preventing stealing of my bandwidth on > these > kinds of static files that can be up to >=2GB in size. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,269502,269502#msg-269502 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269506#msg-269506 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Sep 13 09:51:36 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 05:51:36 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <20160913093439.5484629.4569.10328@lazygranch.com> References: <20160913093439.5484629.4569.10328@lazygranch.com> Message-ID: <587ad4caf30489e967de596a370d0e2e.NginxMailingListEnglish@forum.nginx.org> gariac Wrote: ------------------------------------------------------- > ?I'm assuming at this point if cookies are too much, then logins or > captcha aren't going to happen.? > > How about just blocking the offending websites at the firewall? I'm > assuming you see the proxy and not the eyeballs at the ISP.? > > I have my hacker detection schemes in nginx. I flag the clowns, yank > the IPs every day or so, and block the IP space of any VPS, colo, etc. > ?I have blocked so much of the hacker IP space that I can go days > before finding a new VPS/etc to feed the firewall. 
Amazon, Google
> hosting, Rackspace, Linode, Digital Ocean, SoftLayer and especially
> Ubiquity/Nobis are probably 3/4 of the clowns. Machines are not
> eyeballs, or in your case, ear canals. Block 'em.
>
> Oh yeah, I block Cloud Flare.

That is really excessive / over the top and holds the potential to block
legitimate traffic. Besides, with the service CloudFlare offer they are
fine, but it is very much unknown how they handle these kinds of fake
proxy requests and how many connections / requests per second they allow
from them.

Since you say you are building yourself a blacklist, perhaps you will like
this (especially those who are blocked for infinity):
https://en.wikipedia.org/wiki/Wikipedia:Database_reports/Range_blocks

My solution in my first post will work and is decent for what I want to
achieve; I really want to know what "$binary_" is and whether I should use
it instead in my "limit_req" and "limit_conn" fields.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269510#msg-269510

From nginx-forum at forum.nginx.org  Tue Sep 13 09:52:55 2016
From: nginx-forum at forum.nginx.org (hheiko)
Date: Tue, 13 Sep 2016 05:52:55 -0400
Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams
In-Reply-To: 
References: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> <3d9c5d6b1dba35a0c8aba952ea0e117f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <367ce7b1b2767158dcf7e9e03cf3149b.NginxMailingListEnglish@forum.nginx.org>

I don't think there is an OS relation on the frontend; the same problem
occurs with a CentOS nginx as reverse proxy in front of 3 Apache backends
on CentOS - but it never occurs with Windows-based Apache backends...
But we're on version 1.11.4.1 Lion (http://nginx-win.ecsds.eu)

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268306,269511#msg-269511

From anoopalias01 at gmail.com  Tue Sep 13 10:04:58 2016
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Tue, 13 Sep 2016 15:34:58 +0530
Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams
In-Reply-To: <367ce7b1b2767158dcf7e9e03cf3149b.NginxMailingListEnglish@forum.nginx.org>
References: <8d5078c980d7d9f58f3cd8f17beb33ee.NginxMailingListEnglish@forum.nginx.org> <3d9c5d6b1dba35a0c8aba952ea0e117f.NginxMailingListEnglish@forum.nginx.org> <367ce7b1b2767158dcf7e9e03cf3149b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Check the logs of the Apache server. You might need to tweak the
proxy_*_timeout settings in nginx, but usually it's a problem with the
upstream server that is causing this. Try connecting to the upstream via
http://domain:port directly and you should see the same error.

On Tue, Sep 13, 2016 at 3:22 PM, hheiko wrote:
> I don't think there is an OS relation on the frontend, the same problem
> occurs with an Centos Nginx as Reverse proxy in front of 3 Apache backends
> on Centos - but it never occurs on windows based Apache backends...
> > But we're on version 1.11.4.1 Lion (http://nginx-win.ecsds.eu)
> >
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268306,269511#msg-269511
> >
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Anoop P Alias

From nginx-forum at forum.nginx.org  Tue Sep 13 11:16:32 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 13 Sep 2016 07:16:32 -0400
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org>
References: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <75c98427239d82a59ccb822a36394268.NginxMailingListEnglish@forum.nginx.org>

I just found the following:
https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$binary_

"To conserve the space occupied by the key we use $binary_remote_addr. It
evaluates into a binary value of the remote IP address."

So it seems I should be doing this instead, to keep the key stored in
memory for each IP small and reduce the memory footprint:

limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m;
limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m;

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269513#msg-269513

From nginx-forum at forum.nginx.org  Tue Sep 13 11:25:13 2016
From: nginx-forum at forum.nginx.org (hheiko)
Date: Tue, 13 Sep 2016 07:25:13 -0400
Subject: "502 Bad Gateway" on first request in a setup with Apache 2.4-servers as upstreams
In-Reply-To: 
References: 
Message-ID: <3ca27141452d7cf14d63c288042c9876.NginxMailingListEnglish@forum.nginx.org>

I've played with the proxy timeout settings - no luck. And nothing was
logged on the backend server.
Finally I found something in the firewall log:

May 27 10:25:06 APZRP01 kernel: DROP: IN=APZRP01 OUT=
MAC=c4:34:6b:af:19:64:e8:65:49:28:08:77:08:00 SRC=10.59.55.245
DST=192.168.57.14 LEN=40 TOS=0x00 PREC=0x00 TTL=128 ID=24114 DF PROTO=TCP
SPT=39134 DPT=80 WINDOW=0 RES=0x00 ACK RST URGP=0

Disabling iptables and changing to firewalld on CentOS changed nothing.
Searching Google for this problem brings up some reports of this
behaviour, but no solution.

The problem is easy to reproduce: start the nginx proxy and access the
upstream - no problem. Wait an hour,...

2016/05/26 23:35:32 [error] 4908#5116: *27528 upstream timed out (10060: A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed because
connected host has failed to respond) while reading response header from
upstream,...

After a few seconds the backend responds perfectly again for hours, but
after a period of no traffic the problem occurs again, with both 10 and 60
seconds set for the proxy timeout.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268306,269514#msg-269514

From r at roze.lv  Tue Sep 13 11:25:15 2016
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 13 Sep 2016 14:25:15 +0300
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: <75c98427239d82a59ccb822a36394268.NginxMailingListEnglish@forum.nginx.org>
References: <688ddb71fc82f649a8c46843a26929a7.NginxMailingListEnglish@forum.nginx.org> <75c98427239d82a59ccb822a36394268.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

> I just found the following :
> https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$binary_
> limit_req_zone $binary_http_cf_connecting_ip zone=one:10m rate=30r/m;
> limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m;

There is no such concept of prepending $binary_* to anything.
$binary_remote_addr is just a specific nginx variable
( http://nginx.org/en/docs/http/ngx_http_core_module.html#variables )

rr

From nginx-forum at forum.nginx.org  Tue Sep 13 12:07:18 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 13 Sep 2016 08:07:18 -0400
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: 
References: 
Message-ID: 

Reinis Rozitis Wrote:
-------------------------------------------------------
> > I just found the following :
> >
> https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$
> binary_
>
> > limit_req_zone $binary_http_cf_connecting_ip zone=one:10m
> rate=30r/m;
> > limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m;
>
>
> There is no such concept of prepending $binary_* to anything.
>
> $binary_remote_addr is just a specific nginx variable (
> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables )
>
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

But that book says it is to reduce the memory footprint?

Even other sources, such as http://serverfault.com/a/487473 for example,
say that not using it would increase memory use. I would rather have a low
memory impact, especially in a scenario with lots of requests that would
quickly fill up the memory - the less each key uses and the more memory
available, the better.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269516#msg-269516

From nginx-forum at forum.nginx.org  Tue Sep 13 12:17:21 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 13 Sep 2016 08:17:21 -0400
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: 
References: 
Message-ID: <0d7e3f8cbda2e1e1c20a6101944e669e.NginxMailingListEnglish@forum.nginx.org>

Reinis Rozitis Wrote:
-------------------------------------------------------
> > I just found the following :
> >
> https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$
> binary_
>
> > limit_req_zone $binary_http_cf_connecting_ip zone=one:10m
> rate=30r/m;
> > limit_conn_zone $binary_http_cf_connecting_ip zone=addr:10m;
>
>
> There is no such concept of prepending $binary_* to anything.
>
> $binary_remote_addr is just a specific nginx variable (
> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables )
>
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I think I understand and am with you now: I can't use $binary_ in front of
anything other than remote_addr, because nginx does not support
compressing the others.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269517#msg-269517

From r at roze.lv  Tue Sep 13 12:24:14 2016
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 13 Sep 2016 15:24:14 +0300
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: 
References: 
Message-ID: <1370F7BF998A4E2B975355B753E90FF6@MezhRoze>

> But that book says it is to reduce the memory footprint ?

Correct, but that is for that specific variable.

You can't take $http_cf_connecting_ip, which is an HTTP header coming
from Cloudflare, and prepend $binary_ just to "lower memory footprint".
There is no such functionality.
What you might do is still use $binary_remote_addr, but in combination
with the RealIP module
( http://nginx.org/en/docs/http/ngx_http_realip_module.html ):

real_ip_header CF-Connecting-IP;

Detailed guide from Cloudflare:
( https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-restore-original-visitor-IP-with-Nginx- )

Theoretically it should work, but to be sure you would need to test it or
ask an nginx dev for confirmation that the realip module takes precedence
and also updates the binary IP variable before the limit_req module.

rr

From mdounin at mdounin.ru  Tue Sep 13 13:01:11 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 13 Sep 2016 16:01:11 +0300
Subject: Websockets - recommended settings question
In-Reply-To: 
References: 
Message-ID: <20160913130111.GF1527@mdounin.ru>

Hello!

On Tue, Sep 13, 2016 at 04:29:21PM +1200, Cain wrote:

> In the nginx documentation (https://www.nginx.com/blog/websocket-nginx), it
> is recommended to set the 'Connection' header to 'close' (if there is no
> upgrade header) - from my understanding, this disables keep alive from
> nginx to the upstream - is there a reason for this?
>
> Additionally, is keep alive the default behaviour when connecting to
> upstreams?

"Connection: close" is the default, unless you've explicitly
configured keepalive and set it to an empty value, see docs here:

http://nginx.org/r/keepalive

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org  Tue Sep 13 13:08:15 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 13 Sep 2016 09:08:15 -0400
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: <1370F7BF998A4E2B975355B753E90FF6@MezhRoze>
References: <1370F7BF998A4E2B975355B753E90FF6@MezhRoze>
Message-ID: <4cd146f85a615576910e58d741006aa8.NginxMailingListEnglish@forum.nginx.org>

Reinis Rozitis Wrote:
-------------------------------------------------------
> > But that book says it is to reduce the memory footprint ?
> > Correct, but that is for that specific variable.
>
> You can't take $http_cf_connecting_ip, which is an HTTP header coming
> from
> Cloudflare, and prepend $binary_ just to "lower memory footprint".
> There is no such functionality.
>
>
> What you might do is still use $binary_remote_addr but in combination
> with
> RealIP module (
> http://nginx.org/en/docs/http/ngx_http_realip_module.html ):
>
> real_ip_header CF-Connecting-IP;
>
> Detailed guide from Cloudflare:
> (
> https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-re
> store-original-visitor-IP-with-Nginx-
> )
>
>
> Theoretically it should work but to be sure you would need to test it
> or ask
> a nginx dev for confirmation if the realip module takes precedence and
>
> updates also the ip binary variable before the limit_req module.
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Thanks for the info :) For now I will just stick with what I know is
currently working. Either way, I believe the stored key in memory won't be
compressed while sitting behind CloudFlare's reverse proxy, since, as you
said, only $binary_remote_addr compresses the IP to reduce the memory
footprint.

Here is my config for anyone who wants to test or play around - same as in
the original email.
map $http_cf_connecting_ip $client_ip_from_cf {
default $http_cf_connecting_ip;
}

limit_req_zone $client_ip_from_cf zone=one:10m rate=30r/m;
limit_conn_zone $client_ip_from_cf zone=addr:10m;

location ~ \.mp4$ {
limit_conn addr 10; #Limit open connections from same ip
limit_req zone=one; #Limit max number of requests from same ip

mp4;
limit_rate_after 1m; #Limit download rate
limit_rate 1m; #Limit download rate
root '//172.168.0.1/StorageServ1/server/networkflare/public_www';
expires max;
valid_referers none blocked networkflare.com *.networkflare.com;
if ($invalid_referer) {
return 403;
}
}

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269521#msg-269521

From reallfqq-nginx at yahoo.fr  Tue Sep 13 14:05:00 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 13 Sep 2016 16:05:00 +0200
Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers
In-Reply-To: <4cd146f85a615576910e58d741006aa8.NginxMailingListEnglish@forum.nginx.org>
References: <1370F7BF998A4E2B975355B753E90FF6@MezhRoze> <4cd146f85a615576910e58d741006aa8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

You were just told the best way to get a meaningful $binary_remote_addr
variable using CloudFlare, with the added bonus of a list of network
ranges to use with set_real_ip_from to only filter out CloudFlare's IP
addresses as sources to be replaced and avoid false positives.

Using the $binary_remote_addr variable takes less space inside your
fixed-size zone, thus allowing it to store more entries.
> > > > You can't take $http_cf_connecting_ip which is a HTTP header comming > > from > > Cloudflare and prepend $binary_ just to "lower memory footprint". > > There is no such functionality. > > > > > > What you might do is still use $binary_remote_addr but in combination > > with > > RealIP module ( > > http://nginx.org/en/docs/http/ngx_http_realip_module.html ): > > > > real_ip_header CF-Connecting-IP; > > > > Detailed guide from Cloudflare: > > ( > > https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-re > > store-original-visitor-IP-with-Nginx- > > ) > > > > > > Theoretically it should work but to be sure you would need to test it > > or ask > > a nginx dev for confirmation if the realip module takes precedence and > > > > updates also the ip binary variable before the limit_req module. > > > > rr > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > Thanks for the info :) For now I will just stick with what I know is > currently working either way I believe the stored key in memory won't be > compressed due to being behind cloudflare's reverse proxy as you said only > $binary_remote_addr is compressing their IP to reduce memory footprint. > > Here is my config for anyone who wants to test or play around same as in > original email. 
> > map $http_cf_connecting_ip $client_ip_from_cf { > default $http_cf_connecting_ip; > } > > limit_req_zone $client_ip_from_cf zone=one:10m rate=30r/m; > limit_conn_zone $client_ip_from_cf zone=addr:10m; > > location ~ \.mp4$ { > limit_conn addr 10; #Limit open connections from same ip > limit_req zone=one; #Limit max number of requests from same ip > > mp4; > limit_rate_after 1m; #Limit download rate > limit_rate 1m; #Limit download rate > root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; > expires max; > valid_referers none blocked networkflare.com *.networkflare.com; > if ($invalid_referer) { > return 403; > } > } > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269502,269521#msg-269521 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Sep 13 14:41:32 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 10:41:32 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: Message-ID: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> B.R. Wrote: ------------------------------------------------------- > You were just told the best way to get a meaningful > $binary_remote_addr > variable using CloudFlare, with the added bonus of a list of network > ranges > to use with set_real_ip_from to only filter out CloudFlare's IP > addresses > as sources to be repalced and avoid false positives. > > Using the $binary_remote_addr variable takes less space inside your > fixed-sized zone, thus allowing to store more entries. 
> I suggest you carefully read on the impacts of filling-up the zone > memory > and why using as little data per client is highly advised in > limit_req_zone > q_zone> > directive docs as you do not seem to know what you are doing... > --- > *B. R.* > > On Tue, Sep 13, 2016 at 3:08 PM, c0nw0nk > wrote: > > > Reinis Rozitis Wrote: > > ------------------------------------------------------- > > > > But that book says it is to reduce the memory footprint ? > > > > > > Correct, but that is for that specific varible. > > > > > > You can't take $http_cf_connecting_ip which is a HTTP header > comming > > > from > > > Cloudflare and prepend $binary_ just to "lower memory footprint". > > > There is no such functionality. > > > > > > > > > What you might do is still use $binary_remote_addr but in > combination > > > with > > > RealIP module ( > > > http://nginx.org/en/docs/http/ngx_http_realip_module.html ): > > > > > > real_ip_header CF-Connecting-IP; > > > > > > Detailed guide from Cloudflare: > > > ( > > > > https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-re > > > store-original-visitor-IP-with-Nginx- > > > ) > > > > > > > > > Theoretically it should work but to be sure you would need to test > it > > > or ask > > > a nginx dev for confirmation if the realip module takes precedence > and > > > > > > updates also the ip binary variable before the limit_req module. > > > > > > rr > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > Thanks for the info :) For now I will just stick with what I know is > > currently working either way I believe the stored key in memory > won't be > > compressed due to being behind cloudflare's reverse proxy as you > said only > > $binary_remote_addr is compressing their IP to reduce memory > footprint. > > > > Here is my config for anyone who wants to test or play around same > as in > > original email. 
> > map $http_cf_connecting_ip $client_ip_from_cf {
> > default $http_cf_connecting_ip;
> > }
> >
> > limit_req_zone $client_ip_from_cf zone=one:10m rate=30r/m;
> > limit_conn_zone $client_ip_from_cf zone=addr:10m;
> >
> > location ~ \.mp4$ {
> > limit_conn addr 10; #Limit open connections from same ip
> > limit_req zone=one; #Limit max number of requests from same ip
> >
> > mp4;
> > limit_rate_after 1m; #Limit download rate
> > limit_rate 1m; #Limit download rate
> > root '//172.168.0.1/StorageServ1/server/networkflare/public_www';
> > expires max;
> > valid_referers none blocked networkflare.com *.networkflare.com;
> > if ($invalid_referer) {
> > return 403;
> > }
> > }
> >
> > Posted at Nginx Forum: https://forum.nginx.org/read.
> > php?2,269502,269521#msg-269521
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Yes, unfortunately I can't test it with the realip module at the moment,
due to the fact I use "itpp2012" nginx builds (http://nginx-win.ecsds.eu/).
They do not come compiled with the realip module (for now?).

My above config I have tested, and it works great. I do wish to leave a
smaller memory footprint, however there is not really any way I can do
that currently. But I can increase the zone size: I have a total of 32 GB
of RAM, and I don't know how big the footprint of a single request is, but
I doubt it will fill up that much?
But from my understanding of the earlier email, all I will require is this
added to my config (I hope it is just that single line):

real_ip_header CF-Connecting-IP;

limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;
limit_conn_zone $binary_remote_addr zone=addr:10m;

location ~ \.mp4$ {
limit_conn addr 10; #Limit open connections from same ip
limit_req zone=one; #Limit max number of requests from same ip

mp4;
limit_rate_after 1m; #Limit download rate
limit_rate 1m; #Limit download rate
root '//172.168.0.1/StorageServ1/server/networkflare/public_www';
expires max;
valid_referers none blocked networkflare.com *.networkflare.com;
if ($invalid_referer) {
return 403;
}
}

And that should be all. It would be a pain if I have to manually include
the CloudFlare IPs too, since whenever they add more servers to their
network and new geographical locations / datacenters to serve traffic
from, those locations would be blocked until I add their IPs in.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,269502,269528#msg-269528

From mdounin at mdounin.ru  Tue Sep 13 15:51:07 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 13 Sep 2016 18:51:07 +0300
Subject: nginx-1.11.4
Message-ID: <20160913155107.GO1527@mdounin.ru>

Changes with nginx 1.11.4                                       13 Sep 2016

    *) Feature: the $upstream_bytes_received variable.

    *) Feature: the $bytes_received, $session_time, $protocol, $status,
       $upstream_addr, $upstream_bytes_sent, $upstream_bytes_received,
       $upstream_connect_time, $upstream_first_byte_time, and
       $upstream_session_time variables in the stream module.

    *) Feature: the ngx_stream_log_module.

    *) Feature: the "proxy_protocol" parameter of the "listen" directive,
       the $proxy_protocol_addr and $proxy_protocol_port variables in the
       stream module.

    *) Feature: the ngx_stream_realip_module.

    *) Bugfix: nginx could not be built with the stream module and the
       ngx_http_ssl_module, but without ngx_stream_ssl_module; the bug had
       appeared in 1.11.3.
    *) Bugfix: the IP_BIND_ADDRESS_NO_PORT socket option was not used; the
       bug had appeared in 1.11.2.

    *) Bugfix: in the "ranges" parameter of the "geo" directive.

    *) Bugfix: an incorrect response might be returned when using the "aio
       threads" and "sendfile" directives; the bug had appeared in 1.9.13.

-- 
Maxim Dounin
http://nginx.org/

From emailgrant at gmail.com  Tue Sep 13 16:03:28 2016
From: emailgrant at gmail.com (Grant)
Date: Tue, 13 Sep 2016 09:03:28 -0700
Subject: limit-req and greedy UAs
In-Reply-To: <20160913082838.5484629.90089.10314@lazygranch.com>
References: <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> <20160912223001.5484629.85886.10299@lazygranch.com> <20160912235401.256f3667@linux-h57q.site> <20160913082838.5484629.90089.10314@lazygranch.com>
Message-ID: 

> Re-reading the original post, it was concluded that multiple connections
> don't affect the rate limiting. I interpreted this incorrectly the first
> time:
>
> "Nginx's limit_rate
> function limits the data transfer rate of a single connection."
>
> But I'm certain a few posts, perhaps not on the nginx forum, state
> incorrectly that the limiting is per individual connections rather than
> all the connections in total.

Nice job. Very good to know.

- Grant

From emailgrant at gmail.com  Tue Sep 13 16:07:42 2016
From: emailgrant at gmail.com (Grant)
Date: Tue, 13 Sep 2016 09:07:42 -0700
Subject: limit-req and greedy UAs
In-Reply-To: 
References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> 
Message-ID: 

> limit_req works with multiple connections, it is usually configured per IP
> using $binary_remote_addr.
See > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone > - you can use variables to set the key to whatever you like. > > limit_req generally helps protect eg your backend against request floods > from a single IP and any amount of connections. limit_conn protects against > excessive connections tying up resources on the webserver itself. Perfectly understood. Thank you Richard. - Grant From kworthington at gmail.com Tue Sep 13 17:38:08 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 13 Sep 2016 13:38:08 -0400 Subject: [nginx-announce] nginx-1.11.4 In-Reply-To: <20160913155111.GP1527@mdounin.ru> References: <20160913155111.GP1527@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.4 for Windows https://kevinworthington.com/nginxwin1114 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 13, 2016 at 11:51 AM, Maxim Dounin wrote: > Changes with nginx 1.11.4 13 Sep > 2016 > > *) Feature: the $upstream_bytes_received variable. > > *) Feature: the $bytes_received, $session_time, $protocol, $status, > $upstream_addr, $upstream_bytes_sent, $upstream_bytes_received, > $upstream_connect_time, $upstream_first_byte_time, and > $upstream_session_time variables in the stream module. > > *) Feature: the ngx_stream_log_module. > > *) Feature: the "proxy_protocol" parameter of the "listen" directive, > the $proxy_protocol_addr and $proxy_protocol_port variables in the > stream module. > > *) Feature: the ngx_stream_realip_module. 
> > *) Bugfix: nginx could not be built with the stream module and the > ngx_http_ssl_module, but without ngx_stream_ssl_module; the bug had > appeared in 1.11.3. > > *) Bugfix: the IP_BIND_ADDRESS_NO_PORT socket option was not used; the > bug had appeared in 1.11.2. > > *) Bugfix: in the "ranges" parameter of the "geo" directive. > > *) Bugfix: an incorrect response might be returned when using the "aio > threads" and "sendfile" directives; the bug had appeared in 1.9.13. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce From nginx-forum at forum.nginx.org Tue Sep 13 19:36:51 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 13 Sep 2016 15:36:51 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: > Yes I can't test it at the moment unfortunately with the realip module > due to the fact I use "itpp2012" Nginx builds > http://nginx-win.ecsds.eu/ They do not come compiled with the realip > module (for now ?) Of course this module is compiled in.
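Assuming it is, there remains the maintenance worry raised earlier in the thread: keeping the set_real_ip_from list in sync with CloudFlare's published ranges. Rather than hand-editing, the directives can be generated from the published list. A minimal sketch with hypothetical file names; a local file stands in for the lists CloudFlare publishes (at the time of writing) at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6, so the sketch runs offline:

```shell
# Turn a list of CIDR ranges into set_real_ip_from directives.
# cf_ips.txt stands in for the combined output of, e.g.:
#   curl -s https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6
cat > cf_ips.txt <<'EOF'
103.21.244.0/22
103.22.200.0/22
2400:cb00::/32
EOF

# Prefix each range with the directive name and terminate it with a semicolon.
sed 's/^/set_real_ip_from /; s/$/;/' cf_ips.txt > cloudflare_realip.conf
cat cloudflare_realip.conf
```

The generated file can then be pulled into the http block with `include cloudflare_realip.conf;`, so a scheduled job refreshing cf_ips.txt keeps the ranges current without manual edits.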
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269541#msg-269541 From nginx-forum at forum.nginx.org Tue Sep 13 20:07:51 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 16:07:51 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <387d4fc51dfbd8b4ae1c6648e260018c.NginxMailingListEnglish@forum.nginx.org> itpp2012 Wrote: ------------------------------------------------------- > c0nw0nk Wrote: > > Yes I can't test it at the moment unfortunately with the realip > module > > due to the fact i use "itpp2012" Nginx builds > > http://nginx-win.ecsds.eu/ They do not come compiled with the > realip > > module (for now ?) > > Of course this module is compiled in. Oh, in that case it didn't work when I tried it with the following configuration.
http { set_real_ip_from 103.21.244.0/22; set_real_ip_from 103.22.200.0/22; set_real_ip_from 103.31.4.0/22; set_real_ip_from 104.16.0.0/12; set_real_ip_from 108.162.192.0/18; set_real_ip_from 131.0.72.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 162.158.0.0/15; set_real_ip_from 172.64.0.0/13; set_real_ip_from 173.245.48.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 190.93.240.0/20; set_real_ip_from 197.234.240.0/22; set_real_ip_from 198.41.128.0/17; set_real_ip_from 199.27.128.0/21; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; set_real_ip_from 2405:b500::/32; set_real_ip_from 2405:8100::/32; set_real_ip_from 2c0f:f248::/32; set_real_ip_from 2a06:98c0::/29; real_ip_header CF-Connecting-IP; limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m; limit_conn_zone $binary_remote_addr zone=addr:10m; #End http block } location ~ \.mp4$ { limit_conn addr 10; #Limit open connections from same ip limit_req zone=one; #Limit max number of requests from same ip mp4; limit_rate_after 1m; #Limit download rate limit_rate 1m; #Limit download rate root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; expires max; valid_referers none blocked networkflare.com *.networkflare.com; if ($invalid_referer) { return 403; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269543#msg-269543 From francis at daoine.org Tue Sep 13 23:21:52 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 14 Sep 2016 00:21:52 +0100 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <387d4fc51dfbd8b4ae1c6648e260018c.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <387d4fc51dfbd8b4ae1c6648e260018c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160913232152.GG11677@daoine.org> On Tue, Sep 13, 2016 at 
04:07:51PM -0400, c0nw0nk wrote: Hi there, > Oh, in that case it didn't work when I tried it with the following > configuration. It looks like configuration like this should probably work; but perhaps some parts were lost in the copy-paste. However, if you have the chance to test, could you try adding location = /test { return 200 "x-forwarded-for=:$http_x_forwarded_for: cf-connecting-ip=:$http_cf_connecting_ip:\n"; } to the appropriate server{} block, and running curl -H X-Forwarded-For:1.2.3.4 -H CF-Connecting-IP:2.3.4.5 http://your-server/test and seeing what the output is? If my reading of https://support.cloudflare.com/hc/en-us/articles/200170986 is correct, I think you should see x-forwarded-for having two values (I suspect that 1.2.3.4 will be first, despite what that web page says) and cf-connecting-ip having a value which does not include 2.3.4.5. If that "single-valued cf-connecting-ip" is true, then you should be able to omit all of the set_real_ip_from directives without breaking your config. (And therefore, you will not need to worry about keeping the list of them up to date.) > limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m; > limit_conn_zone $binary_remote_addr zone=addr:10m; For what it's worth: quick tests here show that stock nginx *does* correctly set $binary_remote_addr to be a compact representation of $remote_addr, even when real_ip_header is being used. So what you are trying to do can work. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 13 23:52:59 2016 From: nginx-forum at forum.nginx.org (vlad0) Date: Tue, 13 Sep 2016 19:52:59 -0400 Subject: Cache always in "UPDATING" Message-ID: Dear list, I'm having a problem that comes and goes over the past months: The cache does not get updated as it seems to stay in "UPDATING" state. This issue comes at random times/days.
Here's what i got in strace: accept4(12, {sa_family=AF_INET, sin_port=htons(58777), sin_addr=inet_addr("127.0.0.2")}, [16], SOCK_NONBLOCK) = 21 epoll_ctl(5, EPOLL_CTL_ADD, 21, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=3067699929, u64=579221740737618649}}) = 0 accept4(12, 0xbff7246d, [110], SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable) epoll_wait(5, {{EPOLLIN, {u32=3067699929, u64=579221740737618649}}}, 512, 10000) = 1 gettimeofday({1473804868, 803688}, NULL) = 0 recv(21, "GET / HTTP/1.1\r\nUser-Agent: curl/7.38.0\r\nAccept: */*\r\nHost: XXX\r\n\r\n", 1024, 0) = 77 gettimeofday({1473804868, 803772}, {0, 0}) = 0 epoll_ctl(5, EPOLL_CTL_MOD, 21, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3067699929, u64=578894043322868441}}) = 0 open("/cache/2/6c/6c73f0510dc53eb46a3a903dbd2436c3", O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 23 fstat64(23, {st_mode=S_IFREG|0600, st_size=22807, ...}) = 0 futex(0xb69707c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb697078, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0xb697054, FUTEX_WAKE_PRIVATE, 1) = 1 epoll_wait(5, {{EPOLLOUT, {u32=3067699929, u64=578894043322868441}}, {EPOLLIN, {u32=151317344, u64=13218850954218367840}}}, 512, -1) = 2 gettimeofday({1473804868, 803935}, NULL) = 0 gettimeofday({1473804868, 803966}, {0, 0}) = 0 futex(0xb69707c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb697078, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0xb697054, FUTEX_WAKE_PRIVATE, 1) = 1 epoll_wait(5, {{EPOLLIN, {u32=151317344, u64=13218850954218367840}}}, 512, 12000) = 1 gettimeofday({1473804868, 804045}, NULL) = 0 writev(21, [{"HTTP/1.1 200 OK\r\nServer:...............[....] getsockname(21, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.2")}, [16]) = 0 write(18,...............[....] 
close(23) = 0 setsockopt(21, SOL_TCP, TCP_NODELAY, [1], 4) = 0 recv(21, 0xb20c838, 1024, 0) = -1 EAGAIN (Resource temporarily unavailable) epoll_wait(5, {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=3067699929, u64=578894043322868441}}}, 512, 15000) = 1 gettimeofday({1473804868, 805467}, NULL) = 0 recv(21, "", 1024, 0) = 0 close(21) = 0 and debug log (highly edited, left what i think is important) 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http cache key: "httpwww.XXXXXGET/" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 add cleanup: 0B899EF0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache exists: 0 e:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 cache file: "/cache/2/6c/6c73f0510dc53eb46a3a903dbd2436c3" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 add cleanup: 0B899F34 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache fd: 18 2016/09/13 22:26:27 [debug] 18410#18410: *131592 thread read: 18, 0AF4D188, 1046, 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http upstream cache: -2 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http finalize request: -4, "/?" a:1, c:3 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http request count:3 blk:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http finalize request: -4, "/?" a:1, c:2 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http request count:2 blk:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 post event 0B8D36A0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 delete posted event 0B8D36A0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http run request: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache thread: "/?" 
2016/09/13 22:26:27 [debug] 18410#18410: *131592 thread read: 18, 0AF4D188, 1046, 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache expired: 5 1473770937 1473805587 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http upstream cache: 5 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy status 200 "200 OK" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Date: Tue, 13 Sep 2016 12:45:57 GMT" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Server: Apache/2.2.22" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Set-Cookie: XXX" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Vary: Accept-Encoding" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Connection: close" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header: "Content-Type: text/html" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http proxy header done 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache send: /cache/2/6c/6c73f0510dc53eb46a3a903dbd2436c3 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http subs filter header "/" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 malloc: 0B1FD720:32768 2016/09/13 22:26:27 [debug] 18410#18410: *131592 malloc: 0B47A5E0:32768 2016/09/13 22:26:27 [debug] 18410#18410: *131592 HTTP/1.1 200 OK 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0AF4DBF0, pos 0AF4DBF0, size: 171 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http write filter: l:0 f:0 s:171 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http output filter "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http copy filter: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 malloc: 0B6D93B8:21761 2016/09/13 22:26:27 [debug] 18410#18410: *131592 thread read: 18, 0B6D93B8, 21761, 1046 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http copy filter: -2 "/?" 
2016/09/13 22:26:27 [debug] 18410#18410: *131592 http finalize request: -2, "/?" a:1, c:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 event timer add: 10: 12000:631817156 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http run request: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http writer handler: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http output filter "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http copy filter: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 thread read: 18, 0B6D93B8, 21761, 1046 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http subs filter "/" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs in buffer:0AF4DCF0, size:21761, flush:0, last_buf:0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs process in buffer: 0AF4DCF0 21761, line_in buffer: 0AF4DB3C 0 [..subs..] 2016/09/13 22:26:27 [debug] 18410#18410: *131592 match counts: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 find linefeed: 0B6DE8B1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 match counts: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 find linefeed: 00000000 2016/09/13 22:26:27 [debug] 18410#18410: *131592 the last buffer, not find linefeed 2016/09/13 22:26:27 [debug] 18410#18410: *131592 match counts: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DDA8, size:4096, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DDEC, size:4096, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DE30, size:4096, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DE74, size:4096, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DEB8, size:4096, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out buffer:0AF4DEFC, size:1340, t:0, l:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http postpone filter "/?" 
0AF4DDA0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 4096 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 4096 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 4096 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 4096 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 4096 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http chunk: 1340 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write old buf t:1 f:0 0AF4DBF0, pos 0AF4DBF0, size: 171 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0AF4DFA4, pos 0AF4DFA4, size: 6 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 posix_memalign: 0AEC0330:4096 @16 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0B6A6E48, pos 0B6A6E48, size: 4096 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0B04C848, pos 0B04C848, size: 4096 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0B37BE20, pos 0B37BE20, size: 4096 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0B7B8C00, pos 0B7B8C00, size: 4096 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0AF06900, pos 0AF06900, size: 4096 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:1 f:0 0B6646F0, pos 0B6646F0, size: 1340 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 write new buf t:0 f:0 00000000, pos 08A62A9D, size: 7 file: 0, size: 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http write filter: l:1 f:1 s:22004 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http write filter limit 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 writev: 22004 of 22004 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http write filter 00000000 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out end: 0AF4DDA8 0 2016/09/13 22:26:27 [debug] 
18410#18410: *131592 subs out end: 0AF4DDEC 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out end: 0AF4DE30 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out end: 0AF4DE74 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out end: 0AF4DEB8 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 subs out end: 0AF4DEFC 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http copy filter: -2 "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http writer output filter: -2, "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http writer done: "/?" 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http finalize request: -2, "/?" a:1, c:1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 event timer del: 10: 631817156 2016/09/13 22:26:27 [debug] 18410#18410: *131592 set http keepalive handler 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http close request 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http log handler 2016/09/13 22:26:27 [debug] 18410#18410: *131592 ngx_http_testcookie_ok_variable 2016/09/13 22:26:27 [debug] 18410#18410: *131592 run cleanup: 0B899F34 2016/09/13 22:26:27 [debug] 18410#18410: *131592 file cleanup: fd:18 2016/09/13 22:26:27 [debug] 18410#18410: *131592 run cleanup: 0B899EF0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache cleanup 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http file cache free, fd: 18 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B6646F0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0AF06900 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B7B8C00 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B37BE20 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B04C848 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B6A6E48 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B6D93B8 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B47A5E0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B1FD720 2016/09/13 22:26:27 
[debug] 18410#18410: *131592 free: 0B899000, unused: 3 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0AF4D000, unused: 4 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0AEC0330, unused: 3702 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B20C838 2016/09/13 22:26:27 [debug] 18410#18410: *131592 hc free: 00000000 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 hc busy: 00000000 0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 tcp_nodelay 2016/09/13 22:26:27 [debug] 18410#18410: *131592 reusable connection: 1 2016/09/13 22:26:27 [debug] 18410#18410: *131592 event timer add: 10: 15000:631820156 2016/09/13 22:26:27 [debug] 18410#18410: *131592 post event 0AF97550 2016/09/13 22:26:27 [debug] 18410#18410: *131592 delete posted event 0AF97550 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http keepalive handler 2016/09/13 22:26:27 [debug] 18410#18410: *131592 malloc: 0B20C838:1024 2016/09/13 22:26:27 [debug] 18410#18410: *131592 recv: fd:10 -1 of 1024 2016/09/13 22:26:27 [debug] 18410#18410: *131592 recv() not ready (11: Resource temporarily unavailable) 2016/09/13 22:26:27 [debug] 18410#18410: *131592 free: 0B20C838 2016/09/13 22:26:27 [debug] 18410#18410: *131592 post event 0AF97550 2016/09/13 22:26:27 [debug] 18410#18410: *131592 post event 0B8D36A0 2016/09/13 22:26:27 [debug] 18410#18410: *131592 delete posted event 0AF97550 2016/09/13 22:26:27 [debug] 18410#18410: *131592 http keepalive handler 2016/09/13 22:26:27 [debug] 18410#18410: *131592 malloc: 0B20C838:1024 2016/09/13 22:26:27 [debug] 18410#18410: *131592 recv: fd:10 0 of 1024 2016/09/13 22:26:27 [info] 18410#18410: *131592 client 127.0.0.2 closed keepalive connection We use epoll and aio threads. There is a subs_filter in the context and an "if" that doesn't match. We have very short timeouts. However we also use quite a lot of modules. Is it possible that a module crashed at a wrong moment and caused a request to be left in "updating" ? 
Restarting nginx fixes the issue (until it happens again). nginx/1.10.1 Any idea why it is finding the cache as being in "updating" status? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269545,269545#msg-269545 From nginx-forum at forum.nginx.org Wed Sep 14 00:02:08 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 13 Sep 2016 20:02:08 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> itpp2012 Wrote: ------------------------------------------------------- > c0nw0nk Wrote: > > Yes I can't test it at the moment unfortunately with the realip > module > > due to the fact i use "itpp2012" Nginx builds > > http://nginx-win.ecsds.eu/ They do not come compiled with the > realip > > module (for now ?) > > Of course this module is compiled in. I take it the module is a part of the Nginx.exe build and not Nginx_basic.exe Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269546#msg-269546 From mdounin at mdounin.ru Wed Sep 14 01:02:01 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Sep 2016 04:02:01 +0300 Subject: Cache always in "UPDATING" In-Reply-To: References: Message-ID: <20160914010201.GU1527@mdounin.ru> Hello! On Tue, Sep 13, 2016 at 07:52:59PM -0400, vlad0 wrote: > I'm having a problem that comes on goes the past months: > The cache does not get updated as it seems to stay in "UPDATING" state. This > issue comes at random times/days. [...] > We use epoll and aio threads. There is a subs_filter in the context and an > "if" that doesn't match. > We have very short timeouts. > > However we also use quite a lot of modules. 
Is it possible that a module > crashed at a wrong moment and caused a request to be left in "updating" ? Yes. If there are any crashes you are quite likely to see various problems with cache, including cache items stuck in the "UPDATING" state. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Sep 14 02:14:32 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 13 Sep 2016 22:14:32 -0400 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? Message-ID: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> Tried compiling nginScript with nginx 1.11.4 as a dynamic module and the README github example at https://github.com/nginx/njs/blob/master/README gives me js_run unknown directive so looks like maybe didn't install correctly ? CentOS 7.2 64bit nginx -V nginx version: nginx/1.11.4 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) built with LibreSSL 2.4.2 TLS SNI support enabled configure arguments: --with-ld-opt='-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-m64 -mtune=native -mfpmath=sse -g -O3 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_stub_status_module --with-http_secure_link_module --add-module=../nginx-module-vts --with-libatomic --with-http_gzip_static_module --add-dynamic-module=../ngx_brotli --add-dynamic-module=../ngx_pagespeed-release-1.11.33.3-beta --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module --add-dynamic-module=../njs/nginx --with-stream_geoip_module --with-stream_realip_module --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.0 --add-module=../ngx_cache_purge-2.3 --add-module=../ngx_devel_kit-0.3.0 --add-module=../set-misc-nginx-module-0.31 
--add-module=../echo-nginx-module-0.60 --add-module=../redis2-nginx-module-0.13 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.17 --add-module=../srcache-nginx-module-0.31 --add-module=../headers-more-nginx-module-0.31 --with-pcre=../pcre-8.39 --with-pcre-jit --with-http_ssl_module --with-http_v2_module --with-openssl=../libressl-2.4.2 with loaded modules as load_module "modules/ngx_http_brotli_filter_module.so"; load_module "modules/ngx_http_brotli_static_module.so"; load_module "modules/ngx_http_image_filter_module.so"; load_module "modules/ngx_http_fancyindex_module.so"; load_module "modules/ngx_pagespeed.so"; load_module "modules/ngx_stream_module.so"; load_module "modules/ngx_http_js_module.so"; load_module "modules/ngx_stream_js_module.so"; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269548,269548#msg-269548 From vbart at nginx.com Wed Sep 14 02:26:04 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 14 Sep 2016 05:26:04 +0300 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? In-Reply-To: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> References: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> Message-ID: <13406136.SvR1oR8lP8@vbart-laptop> On Tuesday 13 September 2016 22:14:32 George wrote: > Tried compiling nginScript with nginx 1.11.4 as a dynamic module and the > README github example at https://github.com/nginx/njs/blob/master/README > gives me js_run unknown directive so looks like maybe didn't install > correctly ? [..] There's no "js_run" directive. Check the README. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Wed Sep 14 02:30:28 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 13 Sep 2016 22:30:28 -0400 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? 
In-Reply-To: <13406136.SvR1oR8lP8@vbart-laptop> References: <13406136.SvR1oR8lP8@vbart-laptop> Message-ID: sorry i meant from old example readme at http://hg.nginx.org/njs/file/11d4d66851ed/README Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269548,269550#msg-269550 From nginx-forum at forum.nginx.org Wed Sep 14 02:32:10 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 13 Sep 2016 22:32:10 -0400 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? In-Reply-To: References: <13406136.SvR1oR8lP8@vbart-laptop> Message-ID: <983a3c9df2d1bd779dbaa09adf6895ee.NginxMailingListEnglish@forum.nginx.org> even location /njs { js_run " var res; res = $r.response; res.status = 200; res.send('Hello World!'); res.finish(); "; } gives an error nginx -t nginx: [emerg] unknown directive "js_run" in /usr/local/nginx/conf/conf.d/virtual.conf:36 nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269548,269551#msg-269551 From nginx-forum at forum.nginx.org Wed Sep 14 02:35:58 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 13 Sep 2016 22:35:58 -0400 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? 
In-Reply-To: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> References: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> Message-ID: <86495b19597a55e7f787e1467770f7e5.NginxMailingListEnglish@forum.nginx.org> and the examples in the wiki for nginScript for js_run https://www.nginx.com/resources/wiki/nginScript/#section-1-overview Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269548,269552#msg-269552 From nginx-forum at forum.nginx.org Wed Sep 14 04:48:26 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 14 Sep 2016 00:48:26 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: ------------------------------------------------------- > I take it the module is a part of the Nginx.exe build and not > Nginx_basic.exe If it's part of stock, it's also part of the basic version. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269553#msg-269553 From igor at sysoev.ru Wed Sep 14 06:13:21 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 14 Sep 2016 09:13:21 +0300 Subject: nginScript + nginx 1.11.4, js_run unknown directive ?
In-Reply-To: <983a3c9df2d1bd779dbaa09adf6895ee.NginxMailingListEnglish@forum.nginx.org> References: <13406136.SvR1oR8lP8@vbart-laptop> <983a3c9df2d1bd779dbaa09adf6895ee.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 14 Sep 2016, at 05:32, George wrote: > even > > > location /njs { > js_run " > var res; > res = $r.response; > res.status = 200; > res.send('Hello World!'); > res.finish(); > "; > } > > gives an error > > nginx -t > nginx: [emerg] unknown directive "js_run" in > /usr/local/nginx/conf/conf.d/virtual.conf:36 > nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed The interface has been changed. Now you should define a function in a file: function hw(req, res) { ... } Then include the file with js_include file.js; Then use the function to generate content: location /njs { js_content hw; } -- Igor Sysoev http://nginx.com From igor at sysoev.ru Wed Sep 14 06:14:05 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 14 Sep 2016 09:14:05 +0300 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? In-Reply-To: <86495b19597a55e7f787e1467770f7e5.NginxMailingListEnglish@forum.nginx.org> References: <0633036aeefa8f83ef453c69fc759772.NginxMailingListEnglish@forum.nginx.org> <86495b19597a55e7f787e1467770f7e5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1643FC09-0532-4DA9-92CC-17C8D979682A@sysoev.ru> On 14 Sep 2016, at 05:35, George wrote: > and the examples in the wiki for nginScript for js_run > https://www.nginx.com/resources/wiki/nginScript/#section-1-overview The examples are obsolete, we will update them soon.
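Filling out the outline from the earlier reply into a complete handler; the file name hw.js is hypothetical, and the req/res behaviour is inferred from the snippets quoted above (njs supplies both objects at request time):

```javascript
// hw.js -- a js_content handler for the new njs interface: the function
// receives the request and response objects instead of an implicit $r.
function hw(req, res) {
    res.status = 200;          // set the response status code
    res.send('Hello World!');  // write the response body
    res.finish();              // complete the response
}
```

Loaded with `js_include hw.js;` in nginx.conf and wired to a location with `location /njs { js_content hw; }`.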
-- Igor Sysoev http://nginx.com From nginx-forum at forum.nginx.org Wed Sep 14 06:19:25 2016 From: nginx-forum at forum.nginx.org (jchannon) Date: Wed, 14 Sep 2016 02:19:25 -0400 Subject: nginx not returning updated headers from origin server on conditional GET In-Reply-To: <20160912145743.GA1527@mdounin.ru> References: <20160912145743.GA1527@mdounin.ru> Message-ID: <44842103de01567d7a0fa5b99ab87c04.NginxMailingListEnglish@forum.nginx.org> NGINX authors might want to read this thread. Essentially Mark is saying that this is a bug https://twitter.com/darrel_miller/status/775684549858697216 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269457,269556#msg-269556 From nginx-forum at forum.nginx.org Wed Sep 14 07:34:50 2016 From: nginx-forum at forum.nginx.org (George) Date: Wed, 14 Sep 2016 03:34:50 -0400 Subject: nginScript + nginx 1.11.4, js_run unknown directive ? In-Reply-To: References: Message-ID: <35e376f14e69df46f09f6341f73bc086.NginxMailingListEnglish@forum.nginx.org> Hi Igor thanks for the clarification. 
Looking forward to updated examples/wiki for nginScript :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269548,269559#msg-269559 From nginx-forum at forum.nginx.org Wed Sep 14 08:10:04 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 14 Sep 2016 04:10:04 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: Il test further with it but it definitely did not work with the following using nginx_basic.exe (it was blocking the cloudflare server IP's from connecting) http { #Inside http real_ip_header CF-Connecting-IP; limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m; limit_conn_zone $binary_remote_addr zone=addr:10m; server { # server domain etc here location ~ \.mp4$ { limit_conn addr 10; #Limit open connections from same ip limit_req zone=one; #Limit max number of requests from same ip mp4; limit_rate_after 1m; #Limit download rate limit_rate 1m; #Limit download rate root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; expires max; valid_referers none blocked networkflare.com *.networkflare.com; if ($invalid_referer) { return 403; } } #End server block } #End http block } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269562#msg-269562 From nginx-forum at forum.nginx.org Wed Sep 14 10:52:21 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 14 Sep 2016 06:52:21 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> 
<06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: A simple test from here: http://hg.nginx.org/nginx-tests/rev/4e6d21192037 passes and works as it should even with the basic version, also have a look at: http://serverfault.com/questions/409155/x-real-ip-header-empty-with-nginx-behind-a-load-balancer Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269568#msg-269568 From daniel at mostertman.org Wed Sep 14 11:05:08 2016 From: daniel at mostertman.org (=?UTF-8?Q?Dani=c3=abl_Mostertman?=) Date: Wed, 14 Sep 2016 13:05:08 +0200 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <88b6b7fe-e2e6-d30f-0bf9-b5071e9869b8@mostertman.org> On 2016-09-14 02:02, c0nw0nk wrote: > I take it the module is a part of the Nginx.exe build and not > Nginx_basic.exe The fact that nginx comes with the module, and that it is available at build-time, does not mean it's built along. Parameter --with-http_realip_module must be passed to configure, at least. Not sure how it is for these builds, can't test them. Doesn't nginx_basic.exe support the -V parameter? Does it display configure options? Check if it's there. 
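The check suggested above can be done from a shell: `nginx -V` prints the configure arguments to stderr, so a short pipeline shows whether the realip module was built in (whether nginx_basic.exe supports `-V` is exactly the open question; a sample configure line stands in below so the filtering step is visible without a local build):

```shell
# The real check against a local binary would be:
#     nginx -V 2>&1 | grep -o 'http_realip_module'
# Sample line standing in for the stderr of `nginx -V`:
sample='configure arguments: --with-http_ssl_module --with-http_realip_module'
echo "$sample" | grep -o 'http_realip_module'
```

An empty result means the module was not compiled in, in which case real_ip_header and set_real_ip_from would fail `nginx -t` with an unknown-directive error.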
From r at roze.lv Wed Sep 14 11:32:55 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 14 Sep 2016 14:32:55 +0300 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I'll test further with it but it definitely did not work with the following > using nginx_basic.exe (it was blocking the CloudFlare server IPs from > connecting) I'm not familiar with the Windows version of nginx .. but it's clear you have all the required modules. If nginx is blocking something, at least we know that the current configuration somewhat works. To debug, it is better to start with a minimal version. First of all - which error is returned to the CloudFlare server? Is it 503, which would come from the limit_* modules, or is it 403, which would come from an invalid referer? rr From nginx-forum at forum.nginx.org Wed Sep 14 12:23:27 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 14 Sep 2016 08:23:27 -0400 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yeah, the reason it does not work behind CloudFlare is because limit_conn and limit_req are blocking the CloudFlare server IP for making too many requests. So that is why I am receiving the DOS output "503 service unavailable" And I don't fancy building a whitelist of IPs since it would require manually updating a lot. 
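The whitelist concern above can be automated: CloudFlare publishes its address ranges as plain text at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6, so a script can regenerate an nginx include file with one set_real_ip_from line per range. A sketch, with two sample ranges standing in for the download (file paths and names are illustrative, not from this thread):

```shell
# In a real run the list would come from CloudFlare's published files:
#     curl -s https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6
ranges='103.21.244.0/22
103.22.200.0/22'

# Emit one realip directive per published range; redirecting this output
# to e.g. conf/cloudflare_realip.conf and including that file from the
# http block keeps the list out of the main configuration.
printf '%s\n' "$ranges" | while read -r net; do
    printf 'set_real_ip_from %s;\n' "$net"
done
```

With real_ip_header CF-Connecting-IP; alongside the generated directives, $binary_remote_addr holds the visitor's address rather than CloudFlare's, so the limit_req_zone/limit_conn_zone keys can stay as they were.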
The CloudFlare server IPs would need excluding from the $binary_remote_addr output. Currently I am using my first method and it works great. c0nw0nk Wrote: ------------------------------------------------------- > limit_req_zone $http_cf_connecting_ip zone=one:10m rate=30r/m; > limit_conn_zone $http_cf_connecting_ip zone=addr:10m; > > location ~ \.mp4$ { > limit_conn addr 10; #Limit open connections from same ip > limit_req zone=one; #Limit max number of requests from same ip > > mp4; > limit_rate_after 1m; #Limit download rate > limit_rate 1m; #Limit download rate > root '//172.168.0.1/StorageServ1/server/networkflare/public_www'; > expires max; > valid_referers none blocked networkflare.com *.networkflare.com; > if ($invalid_referer) { > return 403; > } > } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269502,269572#msg-269572 From reallfqq-nginx at yahoo.fr Wed Sep 14 13:00:16 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 14 Sep 2016 15:00:16 +0200 Subject: Keeping your Nginx limit_* Anti-DDoS behind CloudFlare's servers In-Reply-To: References: <8131c6bfd7cbcb74072ecc699b4304a1.NginxMailingListEnglish@forum.nginx.org> <06f5df973fc12d16d5374538329447f2.NginxMailingListEnglish@forum.nginx.org> <56d2b29bf348430877a69e9426dc06dc.NginxMailingListEnglish@forum.nginx.org> <856f0a729ff4d3bd2d13e02a0ad113aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Sep 14, 2016 at 2:23 PM, c0nw0nk wrote: > Yeah, the reason it does not work behind CloudFlare is because > limit_conn > and limit_req are blocking the CloudFlare server IP for making too many > requests. So that is why I am receiving the DOS output "503 service > unavailable" > Misconfiguration. > And I don't fancy building a whitelist of IPs since it would require > manually updating a lot. The CloudFlare server IPs would need excluding > from the $binary_remote_addr output. > Void argument. 
If you did your homework, you would have realized the list provided in the example is taken from CloudFlare's published IP addresses, which are also conveniently delivered in text format to ease the job of automatic grabbing. You'll have to choose if you want to fully automate the verification/update of those IP addresses, or if you want to introduce a manual check/action in the process. Currently I am using my first method and it works great. > It has been several times you have been stating that already. There is no point in asking for help if you won't listen to the answers. Glad with your resource-greedy unoptimized way? Fine. End of transmission. Others who are seeking the best practices regarding combining limit_req, limit_rate and the realip module will find all the information already available. Best of luck in your proceedings, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 14 15:12:57 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Sep 2016 18:12:57 +0300 Subject: nginx not returning updated headers from origin server on conditional GET In-Reply-To: <44842103de01567d7a0fa5b99ab87c04.NginxMailingListEnglish@forum.nginx.org> References: <20160912145743.GA1527@mdounin.ru> <44842103de01567d7a0fa5b99ab87c04.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160914151257.GX1527@mdounin.ru> Hello! On Wed, Sep 14, 2016 at 02:19:25AM -0400, jchannon wrote: > NGINX authors might want to read this thread. Essentially Mark is saying > that this is a bug > https://twitter.com/darrel_miller/status/775684549858697216 The fact that headers are not merged is one of the main reasons why proxy_cache_revalidate is not switched on by default. As for the headers specifically mentioned in this thread: - nginx does update the Date header (actually, this is the only header updated); - nginx does not support the Age header at all, see https://trac.nginx.org/nginx/ticket/146. 
-- Maxim Dounin http://nginx.org/ From jschaeffer0922 at gmail.com Wed Sep 14 16:13:06 2016 From: jschaeffer0922 at gmail.com (Joshua Schaeffer) Date: Wed, 14 Sep 2016 10:13:06 -0600 Subject: Connecting Nginx to LDAP/Kerberos In-Reply-To: References: <3477a195-397a-f9af-1a7b-7ee7e44af000@andreasschulze.de> Message-ID: Okay, I've made some headway on this but I've hit a road block. I've setup a test Nginx server and compiled the spnego-http-auth-nginx-module ( https://github.com/stnoonan/spnego-http-auth-nginx-module). I've updated two location blocks in my site's configuration file to use the module to authentication with Kerberos: # Test location location / { root /var/git; index index.html; # BASIC AUTH # #auth_basic "Restricted"; #auth_basic_user_file /var/git/.htpasswd; # KERBEROS AUTH # auth_gss on; auth_gss_realm HARMONYWAVE.COM; auth_gss_keytab /etc/krb5.keytab; auth_gss_service_name http/mutalisk.harmonywave.com; } # Static repo files for cloning over https location ~ ^.*\.git/objects/([0-9a-f]+/[ 0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ { root /var/git; } # Requests that need to get to git-http-backend location ~ ^.*\.git/(HEAD|info/refs| objects/info/.*|git-(upload|receive)-pack)$ { # BASIC AUTH # #auth_basic "Restricted"; #auth_basic_user_file /var/git/.htpasswd; # KERBEROS AUTH # auth_gss on; auth_gss_realm HARMONYWAVE.COM; auth_gss_keytab /etc/krb5.keytab; auth_gss_service_name http/mutalisk.harmonywave.com; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param PATH_INFO $uri; fastcgi_param GIT_PROJECT_ROOT /var/git/; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param REMOTE_USER $remote_user; include fastcgi_params; } When I try to access the "/" location block from a web browser from within my network it works. The browser asks me for credentials and if I provide them correctly then Nginx passes the index.html file. 
However when I try to do a git clone to my test repo it always fails with invalid credentials. I've collected two sets of logs and attached them. All the 01_* logs are when I tried to do a git clone to my test repo (and failed). All the 02_* logs are when I successfully logged into my test location ("/"). There are four logs for each set of logs (Nginx - error.log and access.log, Kerberos - krb5kdc.log, and LDAP - slapd.log) Looking at the logs I see two interesting events. First comparing Nginx's log file I see this: - Both logs show a "401 unauthorized" when requesting the respective resources (this is as expected) - After the 401 it looks like they Nginx is waiting for credentials (again, expected), however when I request my "/" block it calls the spnego-http-auth-nginx-module module and returns a successful authentication attempt, while when I run a git clone it doesn't (note that both my web browser and git actually ask me for my credentials): ? Second, the Kerberos logs show a "LOOKING_UP_SERVER" error when I try to do a git clone, while when try to just access the "/" block it successfully issues a ticket (I'm assuming that it does that because the spnego module is called successfully from Nginx). 
- 01_krb5kdc.log: Sep 14 08:26:15 immortal krb5kdc[1210](info): TGS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.32.2: LOOKING_UP_SERVER: authtime 0, jschaeffer at HARMONYWAVE.COM for HTTP/mutalisk.harmonywave.com at HARMONYWAVE.COM, Server not found in Kerberos database Sep 14 08:26:15 immortal krb5kdc[1210](info): TGS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.32.2: LOOKING_UP_SERVER: authtime 0, jschaeffer at HARMONYWAVE.COM for HTTP/mutalisk.harmonywave.com at HARMONYWAVE.COM, Server not found in Kerberos database - 02_krb5kdc.log: Sep 14 08:56:57 immortal krb5kdc[1210](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.10.3: NEEDED_PREAUTH: jschaeffer at HARMONYWAVE.COM for krbtgt/HARMONYWAVE.COM at HARMONYWAVE.COM, Additional pre-authentication required Sep 14 08:56:57 immortal krb5kdc[1210](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.10.3: ISSUE: authtime 1473865017, etypes {rep=18 tkt=18 ses=18}, jschaeffer at HARMONYWAVE.COM for krbtgt/ HARMONYWAVE.COM at HARMONYWAVE.COM Sep 14 08:56:57 immortal krb5kdc[1210](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.10.3: NEEDED_PREAUTH: jschaeffer at HARMONYWAVE.COM for krbtgt/HARMONYWAVE.COM at HARMONYWAVE.COM, Additional pre-authentication required Sep 14 08:56:57 immortal krb5kdc[1210](info): AS_REQ (6 etypes {18 17 16 23 25 26}) 10.1.10.3: ISSUE: authtime 1473865017, etypes {rep=18 tkt=18 ses=18}, jschaeffer at HARMONYWAVE.COM for krbtgt/ HARMONYWAVE.COM at HARMONYWAVE.COM I've looked around but I couldn't really find a good explanation of what "LOOKING_UP_SERVER" error means in my situation and I've never seen the error myself before. 
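A TGS_REQ failing with "Server not found in Kerberos database" generally means the KDC has no entry for the requested service principal. The failing log line names that principal exactly, so extracting it makes the comparison against the keytab straightforward (a grep/cut sketch over a trimmed copy of the 01_krb5kdc.log line):

```shell
# Trimmed copy of the failing TGS_REQ line from 01_krb5kdc.log:
line='TGS_REQ (6 etypes) 10.1.32.2: LOOKING_UP_SERVER: authtime 0, jschaeffer@HARMONYWAVE.COM for HTTP/mutalisk.harmonywave.com@HARMONYWAVE.COM, Server not found in Kerberos database'
# Pull out the principal the client asked the KDC for:
echo "$line" | grep -o 'for [^,]*' | cut -d' ' -f2
```

From there, `klist -k /etc/krb5.keytab` lists the principals the keytab actually holds, and `kvno HTTP/mutalisk.harmonywave.com` (run as an authenticated user) asks the KDC for a ticket to that SPN directly; a mismatch in either direction would explain the error.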
I guess where I'm really stuck right now is why does my git block not call this like my "/" block does: 2016/09/14 08:56:57 [debug] 14254#14254: *3 Basic auth credentials supplied by client 2016/09/14 08:56:57 [debug] 14254#14254: *3 Attempting authentication with principal jschaeffer 2016/09/14 08:56:57 [debug] 14254#14254: *3 Setting $remote_user to jschaeffer 2016/09/14 08:56:57 [debug] 14254#14254: *3 ngx_http_auth_spnego_set_bogus_authorization: bogus user set 2016/09/14 08:56:57 [debug] 14254#14254: *3 ngx_http_auth_spnego_basic: returning NGX_OK 2016/09/14 08:56:57 [debug] 14254#14254: *3 Basic auth succeeded On Mon, Sep 12, 2016 at 1:52 PM, Joshua Schaeffer wrote: > > On Mon, Sep 12, 2016 at 1:37 PM, A. Schulze wrote: > >> >> >> Am 12.09.2016 um 21:33 schrieb Joshua Schaeffer: >> >>> Any chance anybody has played around with Kerberos auth? Currently my SSO >>> environment uses GSSAPI for most authentication. >>> >> >> I compile also the module https://github.com/stnoonan/sp >> nego-http-auth-nginx-module >> but I've no time to configure / learn how to configure it >> ... unfortunately ... > > > I did actually see this module as well, but didn't look into it too much. > Perhaps it would be best for me to take a closer look and then report back > on what I find. > > Thanks, > Joshua Schaeffer > > Any help would be appreciated. Thanks Joshua Schaeffer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Nginx_kerberos_auth1.png Type: image/png Size: 137369 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: logs.tar.gz Type: application/x-gzip Size: 6329 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Wed Sep 14 17:03:14 2016 From: nginx-forum at forum.nginx.org (drook) Date: Wed, 14 Sep 2016 13:03:14 -0400 Subject: no live upstreams and NO previous error Message-ID: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> Hi. I've set up a multiple upstream configuration with nginx as a load balancer. And yup, I'm getting 'no live upstreams' in the error log. Like in 1-3% of requests. And yes, I know how this works: nginx is marking a backend in an upstream as dead when he receives an error from it, and these errors are configured with proxy_next_upstream; plus, when all of the servers in an upstream group are under such holddowns, you will get this error. So if you're getting these errors, basically all you need is to fix the root cause of them, like timeouts and various 50x, and 'no live upstreams' will be long gone. But in my case I'm getting these like all of a sudden. I would be happy to see some timeouts, or 50x from backends and so on. Nope, I'm getting these: 2016/09/14 20:27:58 [error] 46898#100487: *49484 no live upstreams while connecting to upstream, client: xx.xx.xx.xx, server: foo.bar, request: "POST /mixed/json HTTP/1.1", upstream: "http://backends/mixed/json", host: "foo.bar" And in the access log these: xx.xx.xx.xx - - [14/Sep/2016:20:27:58 +0300] foo.bar "POST /mixed/json HTTP/1.1" 502 198 "-" "-" 0.015 backends 502 - and the most funny thing is that I'm getting a bulk of these requests, and previous ones are 200. It really looks like the upstream group is switching for no reason to a dead state, and, since I don't believe in miracles, I think that there must be a cause for that, only that nginx for some reason doesn't log it. So, my question is, if this isn't caused by the HTTP errors (since I don't see the errors on the backends) - can this be caused by a sudden lack of l3 connectivity ? 
Like tcp connections dropped, some intermediary packet filters and so on ? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269577,269577#msg-269577 From nginx-forum at forum.nginx.org Wed Sep 14 21:47:04 2016 From: nginx-forum at forum.nginx.org (dizballanze) Date: Wed, 14 Sep 2016 17:47:04 -0400 Subject: 3rd party module for generating avatars on-the-fly Message-ID: Hi folks, I am happy to announce my first module for nginx - ngx_http_avatars_gen_module. It uses libcairo to generate avatars based on use initials. Check it out on github - https://github.com/dizballanze/ngx_http_avatars_gen_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269578,269578#msg-269578 From tseveendorj at gmail.com Thu Sep 15 00:41:55 2016 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Thu, 15 Sep 2016 09:41:55 +0900 Subject: Run time php variable change Message-ID: Hello, I try to explain what I want to do. I have website which is needed php max_execution_time should be different on action. default max_execution_time = 30 seconds but I need to increase execution time 60 seconds on some location or action http://example.com/request Is it possible to do that on nginx to php-fpm ? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From balaji.viswanathan at gmail.com Thu Sep 15 06:13:51 2016 From: balaji.viswanathan at gmail.com (Balaji Viswanathan) Date: Thu, 15 Sep 2016 11:43:51 +0530 Subject: Re-balancing Upstreams in TCP Loadbalancer Message-ID: Hello Nginx Users, I am running nginx as a TCP load balancer. I am trying to find a way to redistribute client TCP connections to upstream servers, specifically, rebalance the load on the upstream servers (on some event) when clients are using persistent TCP connections. The scenario is as follows Application protocol - Clients and Servers use a stateful application protocol on top of TCP which is resilient to TCP disconnections. 
ie., the client and server do application level acks and so, if some 'unit' of work is not completely transferred. it will get retransfered by the client. Persistent TCP connections - . The client opens TCP connections which are persistent. With few bytes being transferred intermittently. Getting the latest data quickly is of importance, hence i would like to avoid frequent (re)connections (both due to connection setup overhead and varying resource usage). Typical connection last for days. Maintenance/Downtime - When one of the upstream servers is shutdown for maintenance, all it's client connections break, clients reconnect and switch to one of the remaining active upstream servers. When the upstream is brought back up post maintenance, the load isnt redistributed. ie., existing connections (since they are persistent) remain with other servers. Only new connections can go to the new server. This is more pronounced in 2 upstream server setup...where all connections switch between servers....kind of like thundering herd problem. I would like to have the ability to terminate some/all client connections explicitly and have them reconnect back. I understand that with nginx maintaining 2 connections for every client, there might not be a 'clean' time to close the connection, but since there is an application ack on top...an unclean termination is acceptable. I currently have to restart nginx to rebalance the upstreams which effectively is the same. Restarting all upstream servers and synchronizing their startup is non-trivial. So is signalling all clients(1000s) to close and reconnect. In Nginx, i can achieve this partially by disabling keepalive on nginx listen port (so_keepalive=off) and then having least_conn as the load-balancer method on my upstream. 
However, this is not desirable in steady state (see persistent TCP connections above), and even though connections get evenly distributed...the load might not be...as idle and busy clients will end up with different upstreams. NGINX Plus features like "on-the-fly configuration" (upstream_conf) allow one to change the upstream configuration, but it doesn't affect existing connections, even if a server is marked as down. "Draining of sessions" is only applicable to HTTP requests and not to TCP connections. Did anyone else face such a problem? How did you resolve it? Any pointers will be much appreciated. thanks, balaji -- -- Balaji Viswanathan Bangalore India -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 15 06:49:08 2016 From: nginx-forum at forum.nginx.org (drookie) Date: Thu, 15 Sep 2016 02:49:08 -0400 Subject: no live upstreams and NO previous error In-Reply-To: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> References: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> (Yup, it's still the author of the original post, but my other browser just remembers another set of credentials.) If I increase the verbosity of the error_log, I'm seeing additional messages in the log, like "upstream server temporarily disabled while reading response header from upstream", but this message doesn't explain why the upstream server was disabled. I understand that an error occurred, but what exactly? I'm used to seeing timeouts instead, or some other explicit problem. This looks totally mysterious to me. Could someone shine some light on it? 
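The "temporarily disabled" state asked about here is governed by a small set of directives; a sketch with illustrative addresses follows. Which upstream responses count as failures is controlled by proxy_next_upstream: with its default of `error timeout`, a plain 500 response is passed through to the client, while adding http_500 makes such responses trigger retries and failure accounting.

```nginx
upstream backends {
    # after max_fails qualifying failures within fail_timeout, a
    # server is "temporarily disabled" for fail_timeout seconds
    server 10.0.0.1:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=10s;
}

server {
    location / {
        proxy_pass http://backends;
        # http_500 makes upstream 500 responses count as failures;
        # once every server in the group is disabled, nginx logs
        # "no live upstreams" and returns 502 itself
        proxy_next_upstream error timeout http_500;
    }
}
```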
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269577,269583#msg-269583 From nginx-forum at forum.nginx.org Thu Sep 15 07:00:53 2016 From: nginx-forum at forum.nginx.org (drookie) Date: Thu, 15 Sep 2016 03:00:53 -0400 Subject: no live upstreams and NO previous error In-Reply-To: <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> References: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52766bba1d689c04b709c4bf6d23cdfc.NginxMailingListEnglish@forum.nginx.org> Oh, solved. Upstreams do respond with 500. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269577,269584#msg-269584 From black.fledermaus at arcor.de Thu Sep 15 07:20:24 2016 From: black.fledermaus at arcor.de (basti) Date: Thu, 15 Sep 2016 09:20:24 +0200 Subject: Run time php variable change In-Reply-To: References: Message-ID: Hello, you can use "fastcgi_param PHP_VALUE" to change PHP values. For example: location /foo { location ~ ^(.*.\.php)(.*)$ { fastcgi_buffers 4 256k; fastcgi_buffer_size 128k; fastcgi_param PHP_VALUE "max_execution_time = 60"; } } Best Regards, Basti On 15.09.2016 02:41, Tseveendorj Ochirlantuu wrote: > Hello, > > I try to explain what I want to do. I have website which is needed php > max_execution_time should be different on action. > > default max_execution_time = 30 seconds > > but I need to increase execution time 60 seconds on some location or action > > http://example.com/request > > Is it possible to do that on nginx to php-fpm ? 
> > Regards > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From tseveendorj at gmail.com Thu Sep 15 07:26:05 2016 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Thu, 15 Sep 2016 16:26:05 +0900 Subject: Run time php variable change In-Reply-To: References: Message-ID: Hello, Basti thank you for help. Does this override system wide or it applied to /foo location ? Best regards, Tseveen On Thu, Sep 15, 2016 at 4:20 PM, basti wrote: > Hello, > > you can use "fastcgi_param PHP_VALUE" to change PHP values. > > For example: > > location /foo { > > location ~ ^(.*.\.php)(.*)$ { > fastcgi_buffers 4 256k; > fastcgi_buffer_size 128k; > fastcgi_param PHP_VALUE "max_execution_time = 60"; > } > } > > Best Regards, > Basti > > > On 15.09.2016 02:41, Tseveendorj Ochirlantuu wrote: > > Hello, > > > > I try to explain what I want to do. I have website which is needed php > > max_execution_time should be different on action. > > > > default max_execution_time = 30 seconds > > > > but I need to increase execution time 60 seconds on some location or > action > > > > http://example.com/request > > > > Is it possible to do that on nginx to php-fpm ? > > > > Regards > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Thu Sep 15 07:45:03 2016 From: black.fledermaus at arcor.de (basti) Date: Thu, 15 Sep 2016 09:45:03 +0200 Subject: Run time php variable change In-Reply-To: References: Message-ID: <9f771938-95df-3129-ec26-ed650741fdcd@arcor.de> It should work per location. 
I have nothing found in the docs at the moment. But be warned if you use more than one value here you must do something like fastcgi_param PHP_VALUE "register_globals=0 display_errors=0"; or fastcgi_param PHP_VALUE "register_globals=0\ndisplay_errors=0"; On 15.09.2016 09:26, Tseveendorj Ochirlantuu wrote: > Hello, > > Basti thank you for help. > > Does this override system wide or it applied to /foo location ? > > Best regards, > Tseveen > > On Thu, Sep 15, 2016 at 4:20 PM, basti > wrote: > > Hello, > > you can use "fastcgi_param PHP_VALUE" to change PHP values. > > For example: > > location /foo { > > location ~ ^(.*.\.php)(.*)$ { > fastcgi_buffers 4 256k; > fastcgi_buffer_size 128k; > fastcgi_param PHP_VALUE "max_execution_time = 60"; > } > } > > Best Regards, > Basti > > > On 15.09.2016 02:41, Tseveendorj Ochirlantuu wrote: > > Hello, > > > > I try to explain what I want to do. I have website which is needed php > > max_execution_time should be different on action. > > > > default max_execution_time = 30 seconds > > > > but I need to increase execution time 60 seconds on some location > or action > > > > http://example.com/request > > > > Is it possible to do that on nginx to php-fpm ? 
> > > > Regards > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From tseveendorj at gmail.com Thu Sep 15 07:48:38 2016 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Thu, 15 Sep 2016 16:48:38 +0900 Subject: Run time php variable change In-Reply-To: <9f771938-95df-3129-ec26-ed650741fdcd@arcor.de> References: <9f771938-95df-3129-ec26-ed650741fdcd@arcor.de> Message-ID: Great. Thank you. On Thu, Sep 15, 2016 at 4:45 PM, basti wrote: > It should work per location. I have nothing found in the docs at the > moment. > But be warned if you use more than one value here you must do something > like > > fastcgi_param PHP_VALUE "register_globals=0 > display_errors=0"; > > or > > fastcgi_param PHP_VALUE "register_globals=0\ndisplay_errors=0"; > > On 15.09.2016 09:26, Tseveendorj Ochirlantuu wrote: > > Hello, > > > > Basti thank you for help. > > > > Does this override system wide or it applied to /foo location ? > > > > Best regards, > > Tseveen > > > > On Thu, Sep 15, 2016 at 4:20 PM, basti > > wrote: > > > > Hello, > > > > you can use "fastcgi_param PHP_VALUE" to change PHP values. > > > > For example: > > > > location /foo { > > > > location ~ ^(.*.\.php)(.*)$ { > > fastcgi_buffers 4 256k; > > fastcgi_buffer_size 128k; > > fastcgi_param PHP_VALUE "max_execution_time = 60"; > > } > > } > > > > Best Regards, > > Basti > > > > > > On 15.09.2016 02:41, Tseveendorj Ochirlantuu wrote: > > > Hello, > > > > > > I try to explain what I want to do. I have website which is needed > php > > > max_execution_time should be different on action. 
> > > > > > default max_execution_time = 30 seconds > > > > > > but I need to increase execution time 60 seconds on some location > > or action > > > > > > http://example.com/request > > > > > > Is it possible to do that on nginx to php-fpm ? > > > > > > Regards > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 15 08:42:06 2016 From: nginx-forum at forum.nginx.org (Jugurtha) Date: Thu, 15 Sep 2016 04:42:06 -0400 Subject: Proxy_cache with variable Message-ID: Hello dream team, I have problem when i use "proxy_cache" with a variable ! Using "proxy_cache_purge" (call to proxy_cache) directive with a variable seems to change that variable's value. Tested on the last version nginx/1.11.4 (on Sles11 SP3) I would change my name cache dynamically The following conf is OK : (the file is purged) ######################################################################## server { ..... location ~ /purge(/.*) { allow 127.0.0.1; deny all; set $cacheSelect carto; #echo "Zone:$cacheSelect"; //Display carto proxy_cache_purge $cacheSelect $host$1$is_args$args; //Return code 200 => File purged from the cache } ... 
} ######################################################################## But if I use "map" or "if" to change the cache variable, the problem appears. For example, with this URL: test.com/purge/librairies/test.js (I make sure the file exists in the cache beforehand): ######################################################################## map $uri $select_cache { default 'carto'; ~*/tuiles/ 'tuiles'; ~*/librairies/ 'librairies'; } server { ..... location ~ /purge(/.*) { allow 127.0.0.1; deny all; set $cacheSelect carto; //carto is the default value if ($uri ~ /librairies/(.*)$ ) { set $cacheSelect librairies; } echo "Zone:$cacheSelect"; //Display librairies echo "Zone:$select_cache"; //Display librairies #proxy_cache_purge $select_cache $host$1$is_args$args; ==> Return 404 (file not found in cache) proxy_cache_purge $cacheSelect $host$1$is_args$args; ==> Return 404 (file not found in cache) } ... } ######################################################################## It seems that when I use "if" or "map" in my conf it does not work anymore. Thank you for your help Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269590,269590#msg-269590 From mark.mcdonnell at buzzfeed.com Thu Sep 15 13:55:47 2016 From: mark.mcdonnell at buzzfeed.com (Mark McDonnell) Date: Thu, 15 Sep 2016 14:55:47 +0100 Subject: we need an explicit max_fails flag to allow origin error through? Message-ID: Hello, We have an upstream that we know is serving a 500 error. We've noticed that NGINX is serving up an nginx-specific "502 Bad Gateway" page instead of showing the actual Apache origin error that we'd expect to come through. To solve this we've added `max_fail: 0` onto the upstream server (there is only one server inside the upstream block) and now the original Apache error page comes through. I'm not sure why that is, for two reasons: 1.
because max_fail should have no effect on the behaviour of something like proxy_intercept_errors (which is disabled/off by default, meaning any errors coming from an upstream should be proxied 'as is' to the client) 2. because max_fail should (according to nginx's docs) be a no-op... "If there is only a single server in a group, max_fails, fail_timeout and slow_start parameters are ignored, and such a server will never be considered unavailable" Does anyone have any further insights here? Thanks. M. -- Mark McDonnell | BuzzFeed | Senior Software Engineer | @integralist https://keybase.io/integralist 40 Argyll Street, 2nd Floor, London, W1F 7EB -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 15 17:36:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Sep 2016 20:36:18 +0300 Subject: we need an explicit max_fails flag to allow origin error through? In-Reply-To: References: Message-ID: <20160915173618.GA1527@mdounin.ru> Hello! On Thu, Sep 15, 2016 at 02:55:47PM +0100, Mark McDonnell wrote: > We have an upstream that we know is serving a 500 error. > > We've noticed that NGINX is serving up a nginx specific "502 Bad Gateway" > page instead of showing the actual Apache origin error that we'd expect to > come through. > > To solve this we've added `max_fail: 0` onto the upstream server (there is > only one server inside the upstream block) and now the original apache > error page comes through. > > I'm not sure why that is for two reasons: > > > 1. because max_fail should have no effect on the behaviour of something > like proxy_intercept_errors (which is disabled/off by default, meaning any > errors coming from an upstream should be proxied 'as is' to the client) When all servers in the upstream block are marked failed and/or nginx failed to get a valid answer from any of the working servers, nginx will just return a 502 itself. And this is probably what happens in your case. > 2.
because max_fail should (according to nginx's docs) be a no-op... "If > there is only a single server in a group, max_fails, fail_timeout and > slow_start parameters are ignored, and such a server will never be > considered unavailable" The "max_fails" parameter is expected to be a no-op with only one server in the upstream block and assuming standard balancers. Note though, that: - non-standard balancers may behave differently; - backup servers are counted - if you have backup servers, nginx will honor max_fails; - if a name is used in the "server" directive, and the name resolves to multiple addresses, this means multiple servers from nginx's point of view. The latter can be easily hit by using names like "localhost" in the configuration. Note well that just a 500 error from an upstream server is not something that nginx will consider to be an error unless you've explicitly configured it using proxy_next_upstream, see http://nginx.org/r/proxy_next_upstream. The behaviour you describe suggests that your configuration has both "proxy_next_upstream http_500" and multiple servers. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Sep 15 21:12:16 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Thu, 15 Sep 2016 17:12:16 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <20160829143217.GF1855@mdounin.ru> References: <20160829143217.GF1855@mdounin.ru> Message-ID: Hello, I would like to go back to this item: >> Yes, nginx will process requests one-by-one and won't pipeline >> requests to upstream. Can you please confirm that no new request is sent to the upstream before the entire response is received for the ongoing request (ongoing request finished)? In other words, is it possible that the upstream module sends the next request to the upstream server while there are still response bytes being received from upstream for the current request?
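Maxim's diagnosis earlier in this thread can be illustrated with a configuration sketch; the upstream name and address below are hypothetical. With http_500 listed in proxy_next_upstream, and a hostname that resolves to more than one address, a plain 500 from the origin is treated as a failed attempt and can end up as an nginx-generated 502:

```nginx
upstream origin {
    # "localhost" may resolve to both 127.0.0.1 and ::1, i.e. two servers
    # from nginx's point of view, so max_fails is NOT ignored here.
    server localhost:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://origin;
        # With http_500 listed, a 500 response counts as a failed attempt
        # and nginx tries the "next" server instead of relaying the page:
        proxy_next_upstream error timeout http_500;
    }
}
```

Dropping http_500 from proxy_next_upstream, or pointing the server directive at a single explicit IP address, lets the origin's own 500 page pass through without resorting to max_fails=0.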
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269602#msg-269602 From albert at plumewifi.com Fri Sep 16 03:19:04 2016 From: albert at plumewifi.com (Albert Zhang) Date: Thu, 15 Sep 2016 20:19:04 -0700 Subject: how to get common name from client cert in TLS connection instead of HTTPS Message-ID: <83A49303-2BC4-4499-8431-1CF9CFD9D023@plumewifi.com> How do I get the common name from the client cert in a TLS connection instead of HTTPS? I am using TLS, not HTTPS, and want to get the common name from the client cert, using the nginx plus AMI on AWS. I am using AWS ELB (SSL) + nginx client certificate SSL. I know to use $ssl_client_s_dn, but how do I get/compare the value? Here is my config: stream { upstream stream_backend { server 10.252.1.131:1983; server 10.252.1.131:2983; } server { listen 4443 ssl; proxy_pass stream_backend; proxy_ssl on; proxy_ssl_certificate /etc/ssl/certs/server.crt; proxy_ssl_certificate_key /etc/ssl/certs/server.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; # proxy_ssl_client_certificate /etc/ssl/certs/ca.pem; proxy_ssl_trusted_certificate /etc/ssl/certs/ca.pem; #proxy_ssl_session_reuse on; proxy_ssl_verify on; proxy_ssl_verify_depth 4; # proxy_ssl_verify_client optional; ssl_certificate /etc/ssl/certs/server.crt; ssl_certificate_key /etc/ssl/certs/server.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_session_cache shared:SSL:20m; ssl_session_timeout 4h; ssl_handshake_timeout 30s; } } albert From reallfqq-nginx at yahoo.fr Fri Sep 16 07:23:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 16 Sep 2016 09:23:20 +0200 Subject: How to disable request pipelining on nginx upstream In-Reply-To: References: <20160829143217.GF1855@mdounin.ru> Message-ID: On Thu, Sep 15, 2016 at 11:12 PM, hkahlouche wrote: > Can you please confirm, if no new request is sent to the upstream before > the > entire response is received for the ongoing request (ongoing request > finished)?
> In other words, is possible that upstream module sends the next request to > upstream server while there is still response bytes being received from > upstream on a current request? > AFAIK, 2 different requests are served separately, meaning you can have some requests sent when some other is being responded to. If you talk about the same request, then it is only sent to the next upstream server when there is an 'unsuccessful attempt' at communicating with the current upstream server. What defines this is told by the *_next_upstream directives of the pertinent modules (for example proxy_next_upstream ). That means that, by nature, there is no response coming back when the request is tried on the next server. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Sep 16 07:26:49 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 16 Sep 2016 09:26:49 +0200 Subject: how to get common name from client cert in TLS connection instead of HTTPS In-Reply-To: <83A49303-2BC4-4499-8431-1CF9CFD9D023@plumewifi.com> References: <83A49303-2BC4-4499-8431-1CF9CFD9D023@plumewifi.com> Message-ID: It seems the variable you are referring to belongs to the ngx_http_ssl_module, suitable for HTTPS, not to the ngx_stream_ssl_module, suitable for generic TLS. --- *B. R.* On Fri, Sep 16, 2016 at 5:19 AM, Albert Zhang wrote: > how to get common name from client cert in TLS connection instead of > HTTPS.
I am using TLS not https and want to get common name from client > cert using nginx plus ami on was, > I am using AWS elb(ssl)+nginx client certificate ssl I know use > $ssl_client_s_dn but how to get/compare the value here is my config: > stream { > upstream stream_backend { > server 10.252.1.131:1983; > server 10.252.1.131:2983; > } > server { > listen 4443 ssl; > proxy_pass stream_backend; > proxy_ssl on; > proxy_ssl_certificate /etc/ssl/certs/server.crt; > proxy_ssl_certificate_key /etc/ssl/certs/server.key; > proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > proxy_ssl_ciphers HIGH:!aNULL:!MD5; > # proxy_ssl_client_certificate /etc/ssl/certs/ca.pem; > proxy_ssl_trusted_certificate /etc/ssl/certs/ca.pem; > #proxy_ssl_session_reuse on; > proxy_ssl_verify on; > proxy_ssl_verify_depth 4; > # proxy_ssl_verify_client optional; > ssl_certificate /etc/ssl/certs/server.crt; > ssl_certificate_key /etc/ssl/certs/server.key; > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers HIGH:!aNULL:!MD5; > ssl_session_cache shared:SSL:20m; > ssl_session_timeout 4h; > ssl_handshake_timeout 30s; > } > > } > > albert > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumeet.chhetri at gmail.com Fri Sep 16 09:48:43 2016 From: sumeet.chhetri at gmail.com (Sumeet Chhetri) Date: Fri, 16 Sep 2016 15:18:43 +0530 Subject: Nginx module development related query Message-ID: Hi All, I have been working on developing an nginx module for one of my c++ web framework and in the process have also read a lot of nginx blogs to understand and come up with an nginx module of my own for my framework. I had a small query related to serving static files within a module handler. 
So the query is: I have the below-mentioned nginx configuration, location ~ ^/(.+?)(/.*)?$ { ffeadcpp_path /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/; } my framework web root is /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web It has multiple sites/context-roots within it: /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/default /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/markers /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/oauthApp /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/flexApp /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/te-benchmark I want all requests to be handled by my custom module; within the module I handle all roots and also separate dynamic requests from static requests, and handle all dynamic requests using my framework logic. Every site/folder/context has static files within the public folder, e.g., /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/default/public I have reached a point of having written a module which handles dynamic requests and serves them successfully, but now I want nginx to handle static file processing instead of doing it myself. Is there a way to just replace the request uri with an actual file location, in this case /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web/default/public/index.html, whenever a request for a static file arises according to the context root? In my module I separate the dynamic and static requests. Right now I'm returning NGX_DONE from my static file handling block; I have even tried NGX_DECLINED, but to no avail. So in my module, I do: if (not static file) process it using my framework else ?? -- let nginx handle the static request, change the uri to add the ffeadcpp_path path prefix, along with the public suffix and the context root, and signal nginx that this static request needs to be handled by nginx itself. The source code for the module is located at https://github.com/sumeetchhetri/ffead-cpp/blob/master/modules/nginx_mod_ffeadcpp/ngx_http_ffeadcpp_module.cpp Any help would be greatly appreciated.
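For the static-file question above, one configuration-level pattern (rather than handling it inside the module) is to let try_files probe the on-disk public directory first and fall back to the module handler. The paths follow the layout described in the post; the named-location arrangement and the ffeadcpp_path binding inside it are assumptions, not tested against this module:

```nginx
# Sketch only: paths follow the post; the @ffeadcpp binding is assumed.
root /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/web;

location ~ ^/([^/]+)(/.*)?$ {
    # e.g. /default/css/app.css -> web/default/public/css/app.css on disk;
    # if no such file exists, fall through to the framework handler.
    try_files /$1/public$2 @ffeadcpp;
}

location @ffeadcpp {
    ffeadcpp_path /home/vagrant/ffead-cpp/ffead-cpp-2.0-bin/;
}
```

This keeps nginx's own static-file machinery (sendfile, caching headers) in play and leaves the module with only the dynamic requests.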
Thanks, Sumeet Chhetri -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Sep 16 12:41:11 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Fri, 16 Sep 2016 08:41:11 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: References: Message-ID: <0a55359841c6db26b134196dd0ee96bc.NginxMailingListEnglish@forum.nginx.org> >> AFAIK, 2 different requests are served separately, meaning you can have >> some requests sent when some other is being responded to. >> >> If you talk about the same request, then it is only sent to the next >> upstream server when there is an 'unsuccessful attempt' at communicating >> with the current upstream server. What defines this is told by the >> *_next_upstream directives of the pertinent modules (for example >> proxy_next_upstream >> >> ). >> That means that, by nature, there is no response coming back when the >> request is tried on the next server. >> I am talking about how two successive requests (from the client side) are handled on the same already-established keepalive socket towards the upstream server: on that same socket and towards the same upstream server, is it possible that the nginx upstream module starts sending the subsequent request before the current one is completely done (by done I mean the complete Content-Length is transferred to the client side)? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269612#msg-269612 From amutsikiwa at gmail.com Fri Sep 16 14:27:41 2016 From: amutsikiwa at gmail.com (Admire Mutsikiwa) Date: Fri, 16 Sep 2016 16:27:41 +0200 Subject: http Port 3000 Message-ID: Hi, I am very new to NGINX. I am running Libki services which are accessible on port 3000 via an http service, e.g. http://10.12.1.25:3000 . My service is being overwhelmed on the CPU side. The Libki system is used as an Internet Cafe system. I would like to use NGINX as a Load balancer by implementing three more libki services.
For each client, there is a need to maintain the connection that it would have made. My question: is it possible to do load balancing to non-standard ports such as 3000? Kind regards, Admire Mutsikiwa -------------- next part -------------- An HTML attachment was scrubbed... URL: From niklas at schuetrumpf.net Fri Sep 16 14:39:04 2016 From: niklas at schuetrumpf.net (=?utf-8?B?U2Now7x0cnVtcGYsIE5pa2xhcw==?=) Date: Fri, 16 Sep 2016 14:39:04 +0000 Subject: Disable Port removing on rewrite Message-ID: <324ec7abec96c21d51fb5d65129385a0@_> Hello, during the conversion of my web servers at home I ran into some problems. We are running more than one web server at home, and a few ports are open in the router for them. So the standard ports (80, 443) are blocked for other servers in our network. One NGINX server listens on port 80, and our router routes port 8081 from outside to port 80 locally. On any rewrite, NGINX strips the port from the URL. The request does reach the server, because I get a connection to the server and then I get rewritten. Is there any config entry to disable the port removal on rewrite? I already tried 'port_in_redirect off', without success. I already posted this question on SO and the NGINX forum, but I think on the mailing list I will have more success. Link to StackOverflow Question: http://stackoverflow.com/questions/39519136/prevent-nginx-to-remove-the-port Link to NGINX forum: https://forum.nginx.org/read.php?11,269603 Currently I am using almost the default config; see SO for the config. Thank you in anticipation! - greetings from Germany -------------- next part -------------- An HTML attachment was scrubbed...
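Returning to the Libki question above: nginx places no restriction on upstream ports, so balancing port 3000 only requires naming that port in the upstream block. The backend addresses below are hypothetical, and ip_hash is one simple way to keep each client pinned to the same backend:

```nginx
upstream libki {
    ip_hash;                      # keep each client on the same backend
    server 10.12.1.25:3000;       # hypothetical backend addresses
    server 10.12.1.26:3000;
    server 10.12.1.27:3000;
    server 10.12.1.28:3000;
}

server {
    listen 3000;                  # nginx itself can listen on any port
    location / {
        proxy_pass http://libki;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```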
URL: From nginx-forum at forum.nginx.org Fri Sep 16 15:12:16 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Fri, 16 Sep 2016 11:12:16 -0400 Subject: listen proxy_protocol and rewrite redirect scheme Message-ID: <06a01ebbbc3372a5806042fb26f5461d.NginxMailingListEnglish@forum.nginx.org> Hi, I have this setup: the browser request (https on 443) is received by sshttp which sends it to stunnel:1443 which proxy it to nginx:1080. When nginx receives the request it has $scheme = "http"; so, for any rewrite with "permanent" or "redirect" the Location header uses "http" while I really need "https" scheme. Is there any way for forcing nginx to change $scheme according to my will? or at least to generate the Location header with no scheme or with my desired scheme? Thank you nginx configuration: server { listen 127.0.0.1:1080 proxy_protocol; port_in_redirect off; server_name_in_redirect off; ... } stunnel configuration: [tls] accept = ************:1443 connect = 127.0.0.1:1080 protocol = proxy [ssh] sni = tls:tti.go.ro ... [www on any] sni = tls:* connect = 127.0.0.1:1080 protocol = proxy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269623#msg-269623 From francis at daoine.org Fri Sep 16 17:34:14 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Sep 2016 18:34:14 +0100 Subject: Disable Port removing on rewrite In-Reply-To: <324ec7abec96c21d51fb5d65129385a0@_> References: <324ec7abec96c21d51fb5d65129385a0@_> Message-ID: <20160916173414.GI11677@daoine.org> On Fri, Sep 16, 2016 at 02:39:04PM +0000, Sch?trumpf, Niklas wrote: Hi there, > One NGINX Server listens on port 80, and our router routes port 8081 from outside to port 80 locally. > On any rewrite NGINX strips the port from the URL. This is the server because I get a connection to the server and then I got rewritten. > Is there any config entry to disable the port remove on rewrite? I already tried ?port_in_redirect off?, without success. 
I think that stock nginx does not have a way for this to happen automatically. I see three possibilities, with different downsides. * You could run nginx on port 8081. Then normal things would Just Work; but your internal requests would need to be sent to port 8081 instead of the default. * Since you already rewrite all requests that do not end in /; if you can confirm that your clients send the name:port in the Host header that they send, then you could change the rewrite destination to explicitly include that: rewrite ^(.*[^/])$ http://$http_host$1/ permanent; This would break any clients that do not send the Host: header that you expect -- possibly that matters; possibly it doesn't. It is not clear to me how your current config can work -- with the rewrite, no request will match your ".php$" location. If that is to change, and you have an enumerable list of "directories" where you want the add-a-slash redirect, you could create a bunch of explicit "location =" blocks that do "return 301" with $http_host. * I think that this one does not apply here, but: if there was exactly one name:port combination that you wanted included in all nginx-generated redirections, then you could configure nginx like: port_in_redirect off; server_name_in_redirect on; server_name my.domain.tld:8081; In your case, this does not fit the requirements; if you were to do this, you would be better off having nginx on port 8081 directly, I think. * Because that third option does not apply, the other third option is to write the code, or encourage someone else to write the code, that will allow nginx to do what you want. That is probably not a quick process. 
All the best, f -- Francis Daly francis at daoine.org From niklas at schuetrumpf.net Fri Sep 16 18:05:34 2016 From: niklas at schuetrumpf.net (=?utf-8?B?U2Now7x0cnVtcGYsIE5pa2xhcw==?=) Date: Fri, 16 Sep 2016 18:05:34 +0000 Subject: Disable Port removing on rewrite In-Reply-To: <20160916173414.GI11677@daoine.org> References: <20160916173414.GI11677@daoine.org> <324ec7abec96c21d51fb5d65129385a0@_> Message-ID: On 16 September 2016 19:34, "Francis Daly" wrote: > * Since you already rewrite all requests that do not end in /; if you can > confirm that your clients send the name:port in the Host header that > they send, then you could change the rewrite destination to explicitly > include that: > > rewrite ^(.*[^/])$ http://$http_host$1/ permanent; > > This would break any clients that do not send the Host: header that you > expect -- possibly that matters; possibly it doesn't. It is not clear > to me how your current config can work -- with the rewrite, no request > will match your ".php$" location. If that is to change, and you have an > enumerable list of "directories" where you want the add-a-slash redirect, > you could create a bunch of explicit "location =" blocks that do "return > 301" with $http_host. Thanks a lot! I used your rewrite tip! I modified the rule and added an if statement: if (-d $request_filename) { rewrite [^/]$ $scheme://$http_host$uri/ permanent; } I think nowadays almost every browser sends the Host header (I hope so). Thanks again!
---------- Niklas From francis at daoine.org Fri Sep 16 18:46:56 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Sep 2016 19:46:56 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <06a01ebbbc3372a5806042fb26f5461d.NginxMailingListEnglish@forum.nginx.org> References: <06a01ebbbc3372a5806042fb26f5461d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160916184656.GJ11677@daoine.org> On Fri, Sep 16, 2016 at 11:12:16AM -0400, adrhc wrote: Hi there, > the browser request (https on 443) is received by sshttp which sends it to > stunnel:1443 which proxy it to nginx:1080. > When nginx receives the request it has $scheme = "http"; so, for any rewrite > with "permanent" or "redirect" the Location header uses "http" while I > really need "https" scheme. > > Is there any way for forcing nginx to change $scheme according to my will? > or at least to generate the Location header with no scheme or with my > desired scheme? I think that stock nginx does not have a way to do this. For any "rewrite" that you create, you can explicitly include "https://" at the start -- but that will not help internally-generated things like the trailing-slash redirect for directories. If you want those, and your nginx is not doing its own ssl, I think you would need a code change to get https: in the Location headers. Not tested, but I suspect that removing four lines from src/http/ngx_http_header_filter_module.c so that "*b->last++ ='s';" is always called, might be enough for your newly-compiled nginx to always redirect to https. A proper fix would presumably involve a more general config option so that it is selectable. 
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Sep 16 18:50:55 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Sep 2016 19:50:55 +0100 Subject: Disable Port removing on rewrite In-Reply-To: References: <20160916173414.GI11677@daoine.org> <324ec7abec96c21d51fb5d65129385a0@_> Message-ID: <20160916185055.GK11677@daoine.org> On Fri, Sep 16, 2016 at 06:05:34PM +0000, Sch?trumpf, Niklas wrote: > 16. September 2016 19:34, "Francis Daly" schrieb: Hi there, > > rewrite ^(.*[^/])$ http://$http_host$1/ permanent; > > > > This would break any clients that do not send the Host: header that you > > expect -- possibly that matters; possibly it doesn't. > Thanks alot! > I used your rewrite tipp! > I modified the rule and added a if statement: > if (-d $request_filename) { > rewrite [^/]$ $scheme://$http_host$uri/ permanent; > } Yes, that looks like it should work; and it will do mostly the same as the internal trailing-slash redirect, which is good. > I think nowdays almost every browser sends the HTTP header (i hope so) They will pretty much all send a Host: header or they won't work very well on the public internet; I'm not sure if they all include the :port within the Host: header. But it does not matter what "all" do; only what the ones that you care about do. Hopefully that is something that you can test adequately :-) Good that you have a solution. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Sep 16 22:29:31 2016 From: nginx-forum at forum.nginx.org (vlad0) Date: Fri, 16 Sep 2016 18:29:31 -0400 Subject: Cache always in "UPDATING" In-Reply-To: References: Message-ID: Disabling "aio threads" totally fixed my problem, it had appeared after i had enabled it. I wasn't able to reproduce myself so i couldn't get a debug log when it triggers.. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269545,269632#msg-269632 From nginx-forum at forum.nginx.org Sat Sep 17 06:36:31 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 02:36:31 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160916184656.GJ11677@daoine.org> References: <20160916184656.GJ11677@daoine.org> Message-ID: <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> yep, that's exactly my problem: "... but that will not help internally-generated things like the trailing-slash redirect for directories." I'll check your solution though I'm very open for other too :D PS: I do compile my own custom nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269636#msg-269636 From nginx-forum at forum.nginx.org Sat Sep 17 07:11:20 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 03:11:20 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> References: <20160916184656.GJ11677@daoine.org> <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> Message-ID: Oh, and I only want this change to apply to servers with "listen ... proxy_protocol" but not otherwise ... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269640#msg-269640 From francis at daoine.org Sat Sep 17 12:10:31 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 17 Sep 2016 13:10:31 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> References: <20160916184656.GJ11677@daoine.org> <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160917121031.GL11677@daoine.org> On Sat, Sep 17, 2016 at 02:36:31AM -0400, adrhc wrote: Hi there, > yep, that's exactly my problem: > "... 
but that will not help internally-generated things like the > trailing-slash redirect for directories." > > I'll check your solution though I'm very open for other too :D If you care only about the internally-generated trailing-slash redirects, then you could try to add something like (lifted from a parallel thread) if (-d $request_filename) { rewrite [^/]$ https://$host$uri/ permanent; } into places where the trailing-slash redirect might happen. If there are any other http-not-https redirections that you see, possibly they could be investigated as they arise. At least, that would avoid you patching the source. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Sep 17 12:22:28 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 17 Sep 2016 13:22:28 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: References: <20160916184656.GJ11677@daoine.org> <3b64067c8d537f4d49901660503d4b14.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160917122228.GM11677@daoine.org> On Sat, Sep 17, 2016 at 03:11:20AM -0400, adrhc wrote: Hi there, > Oh, and I only want this change to apply to servers with "listen ... > proxy_protocol" but not otherwise ... That makes the initial code-change suggestion (where *all* adjusted Location: headers would be https) insufficient. If you decide that you want to provide the code to allow this feature, then it might still be a useful first step, to learn whether that one change is enough to have the desired output. After that, you can worry about how best you should set your configuration to enable it selectively. Note that http://nginx.org/r/listen suggests that proxy_protocol is a parameter to the listen directive, which suggests that you could have both listen 8000; listen 8001 proxy_protocol; in the same server{} block; so whatever configuration you choose may need to distinguish between "do https redirect here", and "do https redirect here only if proxy_protocol was used". 
(I have not used proxy_protocol, just read those docs.) That is not impossible, but is another wrinkle that would have to be designed correctly for if the patch were to be accepted into stock nginx, I suspect. Of course, if you are carrying your own patch, you don't have to care whether it is acceptable to anyone else. So -- if you know that your server{}s will either have proxy_protocol on all listen:s or on none, then you could patch things so that the https redirection is just configured per-server. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Sep 17 13:25:50 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 09:25:50 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160917121031.GL11677@daoine.org> References: <20160917121031.GL11677@daoine.org> Message-ID: <9ab466cb28469bf1f9e36633df8037a8.NginxMailingListEnglish@forum.nginx.org> Hi, thank you for the hints. Starting from you suggestion I modified src/http/ngx_http_header_filter_module.c like this: #if (NGX_HTTP_SSL) if (c->ssl || port == 443) { *b->last++ ='s'; } #endif and it works! But works hand in hand with this nginx configuration (in order to keep original request's port: 443 for me): port_in_redirect off; and it's important for the initial request to come with 443 port. For me the flow is: request:443 go to sshttp:444 then stunnel:1443 and in the end to nginx (listen 127.0.0.1:1080 proxy_protocol). This affects every server where the port is evaluated to 443 which is not perfect (in odd but possible situation 443 could be a non-ssl port or someone would want this for simply other ports too). A perfect solution I think would be one where nginx would allow me to overwrite somehow the "c->ssl" above with a nginx-custom-variable, let's say $https_override (on = force c->ssl to evaluate to true; I guess "c->ssl" takes it's value from $https that's why $https_override ...). 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269643#msg-269643 From nginx-forum at forum.nginx.org Sat Sep 17 15:24:58 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 11:24:58 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <9ab466cb28469bf1f9e36633df8037a8.NginxMailingListEnglish@forum.nginx.org> References: <20160917121031.GL11677@daoine.org> <9ab466cb28469bf1f9e36633df8037a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, I'm sorry but I mistakenly claimed to work the patch: #if (NGX_HTTP_SSL) if (c->ssl || port == 443) { *b->last++ ='s'; } #endif In order to work nginx needs this config: server { listen 127.0.0.1:443 proxy_protocol; port_in_redirect on; and stunnel: [tls to http] sni = tls:* connect = 127.0.0.1:443 protocol = proxy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269644#msg-269644 From nginx-forum at forum.nginx.org Sat Sep 17 16:05:05 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 12:05:05 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: References: <20160917121031.GL11677@daoine.org> <9ab466cb28469bf1f9e36633df8037a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Well, it works partially; sometimes (scarce cases) the redirect still uses http ... this happens even with: #if (NGX_HTTP_SSL) // if (c->ssl || port != 80) { *b->last++ ='s'; // } #endif Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269645#msg-269645 From nginx-forum at forum.nginx.org Sat Sep 17 17:51:19 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 13:51:19 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160917122228.GM11677@daoine.org> References: <20160917122228.GM11677@daoine.org> Message-ID: Hi, I'm sorry for the babble above but there are so many point of failure and the setup is so complex. Last problem was php (e.g. phpMyAdmin). 
Anyway, now it really works this way: src/http/ngx_http_header_filter_module.c: #if (NGX_HTTP_SSL) if (c->ssl || port == 443) { *b->last++ ='s'; } #endif nginx.conf: server { listen 127.0.0.1:443 proxy_protocol; port_in_redirect on; stunnel configuration: [tls] accept = 192.168.1.31:1443 connect = 127.0.0.1:1080 protocol = proxy [ssh] sni = tls:ssh.go.ro ... [tls to any http] sni = tls:* connect = 127.0.0.1:443 protocol = proxy fastcgi_params: fastcgi_param HTTPS "on"; fastcgi_param SERVER_PORT "443"; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269647#msg-269647 From nginx-forum at forum.nginx.org Sat Sep 17 19:41:34 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Sat, 17 Sep 2016 15:41:34 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160917122228.GM11677@daoine.org> References: <20160917122228.GM11677@daoine.org> Message-ID: <19618bafae6efd2d959a9e2048495522.NginxMailingListEnglish@forum.nginx.org> I'm sorry for the babble above but the sources of error are too many. In the previous post the problem was php (e.g. phpMyAdmin). The final working setup: src/http/ngx_http_header_filter_module.c: #if (NGX_HTTP_SSL) if (c->ssl || port == 443) { *b->last++ ='s'; } #endif In order to work nginx needs this config: server { listen 127.0.0.1:443 proxy_protocol; port_in_redirect on; stunnel.conf: [tls to http] sni = tls:* connect = 127.0.0.1:443 protocol = proxy fastcgi_params: # http://tyy.host-ed.me/pluxml/article4/port-443-for-https-ssh-and-ssh-over-ssl-and-more fastcgi_param HTTPS "on"; fastcgi_param SERVER_PORT "443"; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269646#msg-269646 From vbart at nginx.com Sun Sep 18 17:18:52 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Sun, 18 Sep 2016 20:18:52 +0300 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <0a55359841c6db26b134196dd0ee96bc.NginxMailingListEnglish@forum.nginx.org> References: <0a55359841c6db26b134196dd0ee96bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1811807.ET9GBBa5to@vbart-laptop> On Friday 16 September 2016 08:41:11 hkahlouche wrote: > >> AFAIK, 2 different requests are served separately, meaning you can have > >> some requests sent when some other is being responded to. > >> > >> If you talk about the same request, then it is only sent to the next > >> upstream server when there is an 'unsuccessful attempt' at communicating > >> with the current upstream server. What defines this is told by the > >> *_next_upstream directives of the pertinent modules (for example > >> proxy_next_upstream > >> > > >> ). > >> That means that, by nature, there is no response coming back when the > >> request is tried on the next server. > >> > I am talking about how two successive requests (from client side) are > handled on a same already established keepalive socket towards upstream > server: On that same socket and towards the same upstream server, is it > possible that the nginx upstream module starts sending the subsequent > request before the current one is completely done (by done I mean the > complete Content-Length is transferred to the client side)? > No, it's not possible. As already said twice, nginx doesn't support pipelining on the upstream side. wbr, Valentin V.
Bartenev From black.fledermaus at arcor.de Mon Sep 19 10:41:46 2016 From: black.fledermaus at arcor.de (basti) Date: Mon, 19 Sep 2016 12:41:46 +0200 Subject: Nginx Perl / Cgi Permission Problem Message-ID: Hello, I have a perl/cgi script that creates a dir and within that a subdir with permission 0750. The owner of the dirs is www-data. nginx can't delete the dirs because it runs as user nginx. Is there a way to set the user for perl/cgi to nginx? Only for this location?
best regards From nginx-forum at forum.nginx.org Mon Sep 19 13:08:57 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Mon, 19 Sep 2016 09:08:57 -0400 Subject: Transmission remote GUI proxy_protocol broken header Message-ID: <07aaecbc025e8e39f94c4a5ec9e97702.NginxMailingListEnglish@forum.nginx.org> Hi, while having this setup: nginx-1.11.3 Linux utg353l 4.4.0-36-generic #55-Ubuntu SMP Thu Aug 11 18:01:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux Transmission remote GUI:443 -> sshttp:443 -> stunnel:1443 -> nginx:127.0.0.1:443 (no ssl, with listen ... proxy_protocol, port_in_redirect on) nginx:127.0.0.1:443 -> here I'm getting in fact plain http but I have an nginx patch forcing Location header to use https when using 443 port so I need 443. I'm getting the error: 2016/09/19 16:02:51 [error] 12665#0: *7 broken header: ">:E_??Zp? ???(?Z??'??0?,?(?$?? ????kjih9876?????2?.?*?&???=5??/?+?'?#?? ????g@?>3210????EDCB?1?-?)?%??? sshttp:443 -> nginx:1443 (with ssl, port_in_redirect off) Any idea? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269662,269662#msg-269662 From nginx-forum at forum.nginx.org Mon Sep 19 13:53:38 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 19 Sep 2016 09:53:38 -0400 Subject: typical value(s) for stream/limit_conn Message-ID: Plenty of guidelines for http limit_conn but hardly any for stream, what would be a typical value in which cases? Has anyone done some log/connection analysis to determine what would be typical use? Atm. I'd say '5', but this is more a feeling than science. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269663,269663#msg-269663 From al-nginx at none.at Mon Sep 19 20:46:45 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 19 Sep 2016 22:46:45 +0200 Subject: Nginx Perl / Cgi Permission Problem In-Reply-To: References: Message-ID: <7f427b6289d68795de0fab915b299d28@none.at> Hi.
On 19-09-2016 12:41, basti wrote: > Hello, > I have a perl/cgi script that creates a dir and within that a subdir > with permission 0750. The owner of the dirs is www-data. > > nginx can't delete the dirs because this is run as user nginx. > is there a way to set the user for perl/cgi to nginx? only for this > location? In short: not in nginx. nginx normally does not execute cgi-scripts. It would be helpful if you posted the output of nginx -V and explained how and which perl script is executed. Best regards Aleks From steve at greengecko.co.nz Tue Sep 20 05:03:59 2016 From: steve at greengecko.co.nz (steve) Date: Tue, 20 Sep 2016 17:03:59 +1200 Subject: rewrites... Message-ID: <57E0C33F.7080109@greengecko.co.nz> Hi folks, I am running php as fastcgi, replacing a working apache/mod_php setup. I'm pretty close, but am not quite there. The last bit I've got to get working is... RewriteRule ^(.*)$ index.php [E=query:$1,L] Which I understand to mean: pass the request in a variable called query. My (not working) attempt is: location / { try_files $uri $uri/ @rewriteapp; } location @rewriteapp { fastcgi_param query $request_uri; rewrite ^(.*)$ /index.php last; } # Pass the php scripts to fastcgi server specified in upstream declaration. location ~ \.php(/|$) { # Unmodified fastcgi_params from nginx distribution. include fastcgi_params; # Necessary for php. fastcgi_split_path_info ^(.+\.php)(/.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; try_files $uri $uri/ /index.php$is_args$args; fastcgi_pass backend; } Can anyone point out where I'm going wrong please?
Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From steve at greengecko.co.nz Tue Sep 20 05:18:26 2016 From: steve at greengecko.co.nz (steve) Date: Tue, 20 Sep 2016 17:18:26 +1200 Subject: Nginx Perl / Cgi Permission Problem In-Reply-To: References: Message-ID: <57E0C6A2.5010501@greengecko.co.nz> Hi, On 09/19/2016 10:41 PM, basti wrote: > Hello, > I have a perl/cgi script that creates a dir and within that a subdir > with permission 0750. The owner of the dirs is www-data. > > nginx can't delete the dirs because this is run as user nginx. > is there a way to set the user for perl/cgi to nginx? only for this > location? > > best regards > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Not as such. However, as it sounds like you're running on linux, you can fool around a bit and sort of do it backwards... Set the perms on the parent directory to 2770, and the group ownership to nginx. What this does is to enforce the old BSD file ownership method, in that every file - be it flat or a directory - created in this directory will have the group ownership of nginx (its parent), and that this functionality will be transferred to any subdirectories that are created. That's half of the battle won. The second thing that needs to be done is to add group write permissions to these files. Basically, the UMASK of the environment that the perl script is running in needs to be set to 0002 from 0022. This does mean that all files generated by the script will have group write permissions, so it's not perfect, but it's a start. Hopefully group write permissions to www-data aren't too bad.
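The setgid trick described above can be sketched with a throwaway temp directory (no real nginx group or web root assumed; this just demonstrates the standard Linux setgid-directory inheritance the post relies on):

```shell
# Sketch of the setgid-group trick in a throwaway directory.
# Group names and paths here are placeholders, not a real nginx setup.
umask 0002                  # group-writable files, as described above
parent=$(mktemp -d)
chmod 2770 "$parent"        # rwx for owner+group, plus the setgid bit
mkdir "$parent/sub"         # inherits the parent's group and setgid bit
stat -c '%a' "$parent/sub"  # prints 2775: setgid carried over
rm -rf "$parent"
```

With the setgid bit on the parent, every file or directory the cgi script creates underneath it picks up the parent's group automatically, and the 0002 umask makes it group-writable.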
hth, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at forum.nginx.org Tue Sep 20 10:48:41 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Tue, 20 Sep 2016 06:48:41 -0400 Subject: Start nginx worker process with same user as master process In-Reply-To: <20160817181010.GO12280@daoine.org> References: <20160817181010.GO12280@daoine.org> Message-ID: Hi Francis, Thanks for your update. When nginx is installed (checking with the -V option), I see that the specified user is the "nginx" user. However my master and worker processes are run as a different user (non-root user). In this case I see that many of the directories in nginx are owned by the nginx user (probably because it was installed as nginx?). Is there a way to specify that I need this new user? nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/nginx/conf/nginx.conf:6 With this, it looks like I can't change the user directive in the nginx.conf file since it does not have any effect. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269006,269677#msg-269677 From nginx-forum at forum.nginx.org Tue Sep 20 15:40:55 2016 From: nginx-forum at forum.nginx.org (trivender) Date: Tue, 20 Sep 2016 11:40:55 -0400 Subject: proxy_pass Cannot GET In-Reply-To: References: Message-ID: <11e9b5af22cb6ce7e4385dbc7a7120cb.NginxMailingListEnglish@forum.nginx.org> Could someone please help me with the same issue?
http://stackoverflow.com/questions/39574238/mongo-express-get-request-issue-with-nginx-proxy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,244100,269687#msg-269687 From balaji.viswanathan at gmail.com Tue Sep 20 16:04:35 2016 From: balaji.viswanathan at gmail.com (Balaji Viswanathan) Date: Tue, 20 Sep 2016 21:34:35 +0530 Subject: Re-balancing Upstreams in TCP LoadBalancer Message-ID: Hello Nginx Users, I am running nginx as a TCP load balancer (stream module). I am trying to find a way to redistribute client TCP connections to upstream servers, specifically, rebalance the load on the upstream servers (on some event) without having to restart nginx. Clients use persistent TCP connections. The scenario is as follows: Application protocol - There is an application-level ack for each unit of work, so unclean connection termination is ok. Persistent TCP connections - Clients open persistent TCP connections; the focus is on near real-time delivery of small amounts of data. Connections stay open for days. Maintenance/Downtime - When one of the upstream servers is shut down for maintenance, all its client connections break, clients reconnect and switch to one of the remaining active upstream servers. When the upstream is brought back up post maintenance, the load isn't redistributed, i.e., existing connections (since they are persistent) remain with other servers. Only new connections can go to the new server. This is more pronounced in a 2-upstream-server setup... where all connections switch between servers... kind of like the thundering herd problem. I would like to have the ability to terminate some/all client connections explicitly and have them reconnect back. I understand that with nginx maintaining 2 connections for every client, there might not be a 'clean' time to close the connection, but since there is an application ack on top... an unclean termination is acceptable. I currently have to restart nginx to rebalance the upstreams which effectively is the same.
Restarting all upstream servers and synchronizing their startup is non-trivial. So is signalling all clients (1000s) to close and reconnect. I can achieve this partially by disabling keepalive on the nginx listen port (so_keepalive=off) and then having least_conn as the load-balancer method on my upstream. However, this is not desirable in steady state (see persistent TCP connections above), and even though connections get evenly distributed... the load might not be... as idle and busy clients will end up with different upstreams. NGINX Plus features like "on-the-fly configuration" (upstream_conf) allow one to change the upstream configuration, but it doesn't affect existing connections, even if a server is marked as down. "Draining of sessions" is only applicable to http requests and not to TCP connections. Did anyone else face such a problem? How did you resolve it? Any pointers will be much appreciated. thanks, balaji -- Balaji Viswanathan Bangalore India -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Sep 20 19:37:04 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2016 20:37:04 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <19618bafae6efd2d959a9e2048495522.NginxMailingListEnglish@forum.nginx.org> References: <20160917122228.GM11677@daoine.org> <19618bafae6efd2d959a9e2048495522.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160920193704.GO11677@daoine.org> On Sat, Sep 17, 2016 at 03:41:34PM -0400, adrhc wrote: Hi there, > The final working setup: > > src/http/ngx_http_header_filter_module.c: > #if (NGX_HTTP_SSL) > if (c->ssl || port == 443) { > *b->last++ ='s'; > } > #endif This will work in your circumstances -- you compile with ssl (although you don't appear to use it); and your proxy_protocol means that "port" is presented as 443. So you should be able to carry this patch for as long as you need it.
It won't work in general, because of the various circumstances and lack of configurability. But that's not a problem here :-) > In order to work nginx needs this config: > server { > listen 127.0.0.1:443 proxy_protocol; > port_in_redirect on; I'm not sure why the port_in_redirect should be needed; but you've tested it and it works as-is, so it can be left that way. > fastcgi_params: > fastcgi_param HTTPS "on"; > fastcgi_param SERVER_PORT "443"; "HTTPS" tells php to ensure that links are to the https url; I would have thought that SERVER_PORT would have been handled by the proxy_protocol thing. But again: this works for you, and that is what matters here. Good that you found a solution, and thanks for sharing it so that those who search the archive have something to refer to. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 20 19:42:52 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 20 Sep 2016 15:42:52 -0400 Subject: typical value(s) for stream/limit_conn In-Reply-To: References: Message-ID: <2888b02eadcfa09f7d647c8b76271220.NginxMailingListEnglish@forum.nginx.org> 5 wasn't really enough, 12 seems a better value. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269663,269701#msg-269701 From francis at daoine.org Tue Sep 20 19:49:56 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2016 20:49:56 +0100 Subject: Transmission remote GUI proxy_protocol broken header In-Reply-To: <07aaecbc025e8e39f94c4a5ec9e97702.NginxMailingListEnglish@forum.nginx.org> References: <07aaecbc025e8e39f94c4a5ec9e97702.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160920194956.GP11677@daoine.org> On Mon, Sep 19, 2016 at 09:08:57AM -0400, adrhc wrote: Hi there, > nginx-1.11.3 > Transmission remote GUI:443 -> sshttp:443 -> stunnel:1443 -> > nginx:127.0.0.1:443 (no ssl, with listen ... proxy_protocol, > port_in_redirect on) There are a lot of potential moving parts there.
> nginx:127.0.0.1:443 -> here I'm getting in fact plain http but I have an > nginx patch forcing Location header to use https when using 443 port so I > need 443. I *suspect* that that change will not affect this problem, but it is good to be sure. When you have a simple reproducible test case, could you swap in a stock nginx version to confirm that the problem remains there? But first... > I'm getting the error: > > 2016/09/19 16:02:51 [error] 12665#0: *7 broken header: ">:E_??Zp? > ???(?Z??'??0?,?(?$?? > ????kjih9876?????2?.?*?&???=5??/?+?'?#?? ????g@?>3210????EDCB?1?-?)?%??? ? > ?? > ?g > 127.0.0.1 > " while reading PROXY protocol, client: > 127.0.0.1, server: 127.0.0.1:443 If I have understood your architecture correctly, your nginx is http-only (no ssl). And that usually means "no binary stuff", unless you know you are asking for compression or something like that. So: where does this log come from? Can you see: what is the one request that causes this to happen? Can you remove as much as possible, and just make one https request to stunnel and see if the same problem happens? If so, you have eliminated Transmission and sshttp from the problem space. Make a request for something simple like /file.txt that you know will be handled cleanly by your nginx. Simplify it as much as possible, and there will be a much better chance that someone else will build something to reproduce the problem. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 20 19:58:50 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2016 20:58:50 +0100 Subject: Start nginx worker process with same user as master process In-Reply-To: References: <20160817181010.GO12280@daoine.org> Message-ID: <20160920195850.GQ11677@daoine.org> On Tue, Sep 20, 2016 at 06:48:41AM -0400, Sushma wrote: Hi there, There are a few different things that I think you may be conflating here. 
> When nginx is installed (checking with -V option), I see that the user > specified is "nginx" user. By that, I think you mean that the compile-time default for the "user" directive is "nginx"? So if you do not have an explicit "user" directive, "user nginx" will be assumed in the nginx.conf. > However my master and worker process are run as a different user (non root > user). That is the way you want it to be, yes? If you start nginx (master) as a non-root user, it will not change user before starting the worker processes. > In this case I see that many of the directories in nginx are owned by nginx > user. (probably bcos it was installed as nginx?). The file ownership is independent of the user running the process. The only thing that matters is that the user running the process is able to read and write the files that it needs to read and write. If you need to change things there, change them outside of the nginx process. > Is there a way to mention that I need this new user. How do you start the nginx process? Whatever that method is, do it as the user that you want to run everything as. > nginx: [warn] the "user" directive makes sense only if the > master process runs with super-user privileges, ignored in > /usr/local/nginx/conf/nginx.conf:6 > Wth this , it looks like I cant change the user directive in nginx.conf file > since it does not have any effect. Correct; a normal user is not able to switch to become a new user. If you want to run nginx master-and-worker as user abc, become user abc and then run nginx. If nginx running as user abc is not able to read or write files or directories that you want it to, change the ownership of or the permissions on those files-or-directories, ideally before you run nginx. Is there a specific thing that you want to do, that you are unable to? 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 20 20:27:40 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2016 21:27:40 +0100 Subject: typical value(s) for stream/limit_conn In-Reply-To: References: Message-ID: <20160920202740.GR11677@daoine.org> On Mon, Sep 19, 2016 at 09:53:38AM -0400, itpp2012 wrote: Hi there, > Plenty of guidelines for http limit_conn but hardly any for stream, what > would be a typical value in which cases? > Has anyone done some log/connection analysis to determine what would be > typical use? "stream" is "arbitrary tcp connections". There is no "typical", I think. If you are using "stream" to handle things that are typically one long-lasting tcp connection, such as ssh-for-terminal, then you'll probably be ok with a small number (unless you have multiple clients appearing as the same "key" (often IP address)). If you are using "stream" to handle things that are typically many overlapping short-lasting tcp connections (simple cgi mysql clients, perhaps), then you'll probably want a bigger number. But a number that is right in your environment for port 389 may be completely wrong for port 37, for example. If you are trying to limit based on avoid-abuse, you will need to assess what is "normal" in your case, and define something else as "too much". If you are trying to limit based on avoid-overload, you will need to assess what your backends can handle, and set the limit near that. (In the latter case, you would presumably not limit based on $binary_remote_addr, but on something static to limit the total number of connections, I guess.) > Atm. I'd say '5', but this is more a feeling than science. 5 could work. 5 per $remote_port might be too many. This is very much "it depends".
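For reference, the stream-side directives mirror the http ones; a minimal sketch of a per-client-IP limit (zone size, port, and backend address are placeholders, and the "5" is exactly the "it depends" number discussed above):

```nginx
stream {
    # one shared-memory zone, keyed by client address
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    upstream db_backend {
        server 192.0.2.10:3306;   # placeholder backend
    }

    server {
        listen 3306;
        limit_conn perip 5;       # tune per protocol and workload
        proxy_pass db_backend;
    }
}
```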
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 20 20:29:14 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Sep 2016 21:29:14 +0100 Subject: typical value(s) for stream/limit_conn In-Reply-To: <20160920202740.GR11677@daoine.org> References: <20160920202740.GR11677@daoine.org> Message-ID: <20160920202914.GS11677@daoine.org> On Tue, Sep 20, 2016 at 09:27:40PM +0100, Francis Daly wrote: > On Mon, Sep 19, 2016 at 09:53:38AM -0400, itpp2012 wrote: Hi there, > "stream" is "arbitrary tcp connections". There is no "typical", I think. "stream" is also "arbitrary udp datagrams" too, of course. Which probably has its own set of wrinkles. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 20 20:51:01 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 20 Sep 2016 16:51:01 -0400 Subject: typical value(s) for stream/limit_conn In-Reply-To: <20160920202914.GS11677@daoine.org> References: <20160920202914.GS11677@daoine.org> Message-ID: <90ab981d1606eace5d18c102a57f5726.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Tue, Sep 20, 2016 at 09:27:40PM +0100, Francis Daly wrote: > > > "stream" is "arbitrary tcp connections". There is no "typical", I > think. > > "stream" is also "arbitrary udp datagrams" too, of course. Which > probably > has its own set of wrinkles. I agree on both, a typical smtp tcp proxy needs 8, rdp would be fine with 3. The point is to get some guidance which value would be 'typical' for use of a tcp proxy for xxx to complement the tcp documentation (when it comes to using a tcp proxy as ssl termination point). 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269663,269707#msg-269707 From nginx-forum at forum.nginx.org Wed Sep 21 05:51:18 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Wed, 21 Sep 2016 01:51:18 -0400 Subject: Start nginx worker process with same user as master process In-Reply-To: <20160920195850.GQ11677@daoine.org> References: <20160920195850.GQ11677@daoine.org> Message-ID: <6defa4ec7f35acbc7bf1642ba3cf89e8.NginxMailingListEnglish@forum.nginx.org> Thanks for the details. I have explicitly changed permissions for directories as required. But the problem I am facing here is that nginx reload fails due to permission denied for the proxy_temp folder. I had explicitly changed permissions for this folder so that it could be accessed by user abc (the user with which nginx is running). However, when buffering happens, I see that the folder permissions are changed to be owned by the nginx user. Hence my reload fails due to permissions issues. Is there something I am missing here? Does nginx remember the user with which it was installed and is that why these folder permissions are changing? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269006,269711#msg-269711 From nginx-forum at forum.nginx.org Wed Sep 21 07:25:04 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Wed, 21 Sep 2016 03:25:04 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160920193704.GO11677@daoine.org> References: <20160920193704.GO11677@daoine.org> Message-ID: <706f2a27e2b9116a62c41c26d2f0e3a8.NginxMailingListEnglish@forum.nginx.org> Indeed the solution might look strange but it works (test it with e.g. https or http ://adrhc.go.ro/ffp). It would be nicer if there were a variable, say $override_ssl, to force nginx to consider that it is handling an SSL request, with all the consequences. Again I thank you for your support.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269714#msg-269714 From nginx-forum at forum.nginx.org Wed Sep 21 08:05:22 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Wed, 21 Sep 2016 04:05:22 -0400 Subject: Transmission remote GUI proxy_protocol broken header In-Reply-To: <20160920194956.GP11677@daoine.org> References: <20160920194956.GP11677@daoine.org> Message-ID: <9e9dbada34b7a9aca0988efd84a178ce.NginxMailingListEnglish@forum.nginx.org> Hi, the log comes from the nginx error log. I simply start the Transmission remote GUI and I get only this in the error log: 2016/09/21 10:32:13 [error] 3909#0: *327 broken header: "> :  X,u \kc ; J=?XfoVr_< 0 , ( $  k j i h 9 8 7 6 2 . * &   = 5 / + ' #  g @ ? > 3 2 1 0 E D C B 1 - ) %   < / A  g  127.0.0.1  #  " while reading PROXY protocol, client: 127.0.0.1, server: 127.0.0.1:443 As you can see there's no url in the log but I guess it asks first for: POST https://adrhc.go.ro/transmission/rpc with Authorization and X-Transmission-Session-Id headers. I tried that and I get: {"arguments":{},"result":"no method name"} so it seems to work. After this request I guess it asks for e.g.: GET /announce?info_hash=...&peer_id=...&port=...&uploaded=0&downloaded=0&left=809472758&numwant=80&key=...&compact=1&supportcrypto=1&requirecrypto=1&event=started HTTP/1.1", host: "tracker.publichd.eu" and maybe this is where it fails. About the "simple reproducible test case": the patch from my previous post where you helped me https://forum.nginx.org/read.php?2,269623 only affects the Location header in redirects (when the port is exactly 443) so I'm sure it has no impact; if it had any impact that would be a 404 http code, because the redirect would go to the http:80 alternative of my web site and there I have no /transmission location; or the redirect would go to http:443 with no ssl so again 404.
On the other hand, in order to know where to place the request, stunnel is doing its stuff based on SNI - though I doubt it's breaking something. By the way, while transmission still fails, with the same setup the web client https://adrhc.go.ro/transmission/ works fine. So maybe transmission is behaving/requesting somewhat differently compared to a browser - I opened a ticket on transmission's site too. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269662,269717#msg-269717 From nginx-forum at forum.nginx.org Wed Sep 21 09:16:45 2016 From: nginx-forum at forum.nginx.org (bhagt) Date: Wed, 21 Sep 2016 05:16:45 -0400 Subject: ssl handshake fail when proxy between two tomcat with mutual authentication In-Reply-To: References: <9f165963da2af2f5fad4a68f555cef2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3978a550f6a8356991a9e7dd890b985b.NginxMailingListEnglish@forum.nginx.org> Hi all, I have configured nginx to do mutual authentication to a loadbalancer (ssl-offloading) which sends the http traffic to a webserver with virtual hosts. I keep getting the following error: SSL_do_handshake() failed (SSL: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream If I run nginx in debug mode I only see a small ssl client-hello. But if I use openssl: openssl s_client -state -debug -showcerts -verify 0 -connect :443 I can see the handshake. Any help/lead would be appreciated. Regards, Bhagt Posted at Nginx Forum: https://forum.nginx.org/read.php?2,241171,269719#msg-269719 From mark.mcdonnell at buzzfeed.com Wed Sep 21 10:19:43 2016 From: mark.mcdonnell at buzzfeed.com (Mark McDonnell) Date: Wed, 21 Sep 2016 11:19:43 +0100 Subject: What is "seconds with milliseconds resolution" Message-ID: Hello, I'm not sure I really understand the `msec` embedded variable.
I'm getting the value back as `1474452771.178` It's described as "seconds with milliseconds resolution", but I'm not sure what that really means (maths isn't a strong skill for me). How do I convert the number into seconds? So I can get a better idea of the relation of this value. Thanks M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sanvila at unex.es Wed Sep 21 10:39:40 2016 From: sanvila at unex.es (Santiago Vila) Date: Wed, 21 Sep 2016 11:39:40 +0100 Subject: What is "seconds with milliseconds resolution" In-Reply-To: References: Message-ID: <20160921103940.GA5080@dell> On Wed, Sep 21, 2016 at 11:19:43AM +0100, Mark McDonnell wrote: > I'm not sure I really understand the `msec` embedded variable. > > I'm getting the value back as `1474452771.178` That would be the number of seconds since the "epoch" (1970-01-01 00:00 UTC), similar to "date +%s" but more accurate. > It's described as "seconds with milliseconds resolution", but I'm not sure > what that really means (maths isn't a strong skill for me). It just means the number is a number of seconds (i.e. the second is the unit of measure), and you can expect the number to have three decimal places after the point. > How do I convert the number into seconds? The number is already in seconds :-) From mark.mcdonnell at buzzfeed.com Wed Sep 21 10:57:09 2016 From: mark.mcdonnell at buzzfeed.com (Mark McDonnell) Date: Wed, 21 Sep 2016 11:57:09 +0100 Subject: What is "seconds with milliseconds resolution" In-Reply-To: <20160921103940.GA5080@dell> References: <20160921103940.GA5080@dell> Message-ID: Thanks Santiago, that actually makes perfect sense. Think I just needed the words read back to me in a different order or something lol ?\_(?)_/? On Wed, Sep 21, 2016 at 11:39 AM, Santiago Vila wrote: > On Wed, Sep 21, 2016 at 11:19:43AM +0100, Mark McDonnell wrote: > > > I'm not sure I really understand the `msec` embedded variable. 
> > > > I'm getting the value back as `1474452771.178` > > That would be the number of seconds since the "epoch" (1970-01-01 00:00 > UTC), > similar to "date +%s" but more accurate. > > > It's described as "seconds with milliseconds resolution", but I'm not > sure > > what that really means (maths isn't a strong skill for me). > > It just means the number is a number of seconds (i.e. the second is > the unit of measure), and you can expect the number to have three > decimal places after the point. > > > How do I convert the number into seconds? > > The number is already in seconds :-) > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Mark McDonnell | BuzzFeed | Senior Software Engineer | @integralist https://keybase.io/integralist 40 Argyll Street, 2nd Floor, London, W1F 7EB -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Wed Sep 21 15:34:11 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 21 Sep 2016 08:34:11 -0700 Subject: nginx reverse proxy causing TCP queuing spikes Message-ID: I've been struggling with http response time slowdowns and corresponding spikes in my TCP Queuing graph in munin. I'm using nginx as a reverse proxy to apache which then hands off to my backend, and I think the proxy_read_timeout line in my nginx config is at least contributing to the issue. Here is all of my proxy config: proxy_read_timeout 60m; proxy_pass http://127.0.0.1:8080; I think this means I'm leaving connections open for 60 minutes after the last server response which sounds like a bad thing. However, some of my admin pages need to run for a long time while they wait for the server-side stuff to execute. I only use the proxy_read_timeout directive on my admin locations and I'm experiencing the TCP spikes and http slowdowns during the exact hours that the admin stuff is in use. 
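The scoping Grant describes can be sketched like this (location names and the backend address are assumptions, not taken from the post). Note that proxy_read_timeout bounds the gap between two successive reads from the backend rather than holding the connection open for 60 minutes after the last response, so the long value only matters while a slow response is actually in flight:

```nginx
# Sketch only: keep a short read timeout by default and allow the long one
# just where it is needed. /admin/ and port 8080 are assumed names.
location / {
    proxy_read_timeout 60s;
    proxy_pass http://127.0.0.1:8080;
}

location /admin/ {
    proxy_read_timeout 60m;   # long-running server-side admin jobs
    proxy_pass http://127.0.0.1:8080;
}
```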
- Grant From francis at daoine.org Wed Sep 21 16:32:06 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Sep 2016 17:32:06 +0100 Subject: Start nginx worker process with same user as master process In-Reply-To: <6defa4ec7f35acbc7bf1642ba3cf89e8.NginxMailingListEnglish@forum.nginx.org> References: <20160920195850.GQ11677@daoine.org> <6defa4ec7f35acbc7bf1642ba3cf89e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160921163206.GU11677@daoine.org> On Wed, Sep 21, 2016 at 01:51:18AM -0400, Sushma wrote: Hi there, > I have explicitly changed permissions for directories as required. > But the problem I am facing here is nginx reload fails due to permission > denied for proxy_temp folder. > I had explicitly changed permissions for this folder so that it could be > accesssed by user abc (user with which nginx is running). > However, when buffering happens, I see that the folder permissions are > changed to be owned by nginx user. Hence my reload fails due to permissions > issues. What can I do to recreate this issue? What small configuration file can be used to show this problem? Copy-paste, do not re-type, because that can introduce changes. For this proxy_temp folder, what is the "ls -ld" or "ls -ldZ" output for every component of it? That is: ls -ldZ / ls -ldZ /usr ls -ldZ /usr/local and so forth. > Is there something I am missing here? Does nginx remember the user with > which it was installed and is that why these folder permissions are > changing? It shouldn't. Have you any other processes running that check or change folder permissions? Or does a containing folder have a setting which says that everything new inside it must have certain ownership?
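The per-component listing asked for above can be generated in one short loop - a sketch only; the starting proxy_temp path is an assumed example (substitute the real one, and use `ls -ldZ` where SELinux contexts matter):

```shell
# Walk from the proxy_temp folder up to / and list ownership/permissions
# of every path component. The starting path is illustrative.
p=/var/lib/nginx/proxy_temp
while [ -n "$p" ]; do
  ls -ld "$p" || true        # keep walking even if a component is missing
  [ "$p" = "/" ] && break
  p=$(dirname "$p")
done
```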
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 21 16:43:01 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Sep 2016 17:43:01 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <706f2a27e2b9116a62c41c26d2f0e3a8.NginxMailingListEnglish@forum.nginx.org> References: <20160920193704.GO11677@daoine.org> <706f2a27e2b9116a62c41c26d2f0e3a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160921164301.GV11677@daoine.org> On Wed, Sep 21, 2016 at 03:25:04AM -0400, adrhc wrote: Hi there, > Indeed the solution might look strange but it works (test it with e.g. https > or http ://adrhc.go.ro/ffp). It is good that it works. The http redirect there does not include the port; the https redirect does include the port, and it is the default port for https. I'm just a bit surprised that "port_in_redirect off" does not also work. But that's ok -- I'm often surprised. > Would be nicer if would exists a variable like let's say $override_ssl which > to force nginx consider it run a ssl request with all the consequences. That variable will probably only exist after someone shows a need for it, and after someone does the work to write the code. I think that your use case is reasonable -- hide nginx-doing-http behind an external ssl terminator -- but I don't know what is the set of conditions under which you would want this ssl-rewrite to happen, and how you would go about configuring that. (You want it sort-of per-server, but not really, since you only want it if proxy_protocol is in use and indicates that the initial request was https.) It looks like nobody else has had that particular use case, and was willing to put the effort in to make it an nginx configurable. > Again I thank you for your support. You're welcome. The patch you have, you can carry for as long as you need, so it not being added to stock nginx should not block you at all. 
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 21 16:54:53 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Sep 2016 17:54:53 +0100 Subject: Transmission remote GUI proxy_protocol broken header In-Reply-To: <9e9dbada34b7a9aca0988efd84a178ce.NginxMailingListEnglish@forum.nginx.org> References: <20160920194956.GP11677@daoine.org> <9e9dbada34b7a9aca0988efd84a178ce.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160921165453.GW11677@daoine.org> On Wed, Sep 21, 2016 at 04:05:22AM -0400, adrhc wrote: > Hi, the log come from nginx error log. I simply start the Transmission > remote GUI and I get only this in error log: This looks to me that the thing that is writing to nginx, is not writing what nginx expects to read. What is the thing writing to nginx? (stunnel, I think) What should it be writing? (plain http after a proxy_protocol line, I think. How is it configured?) What version of proxy_protocol is stunnel writing? For the rest of the requests -- the log files should show, in order, what requests are made and what responses are received. If there's a break, it is probably just before the first request that is not in the logs. > By the way, wile still transmission fails, with the same setup the web > client https://adrhc.go.ro/transmission/ works fine. So maybe transmission > is behaving/requesting somehow different comparing to a browser - I opened a > token on transmission's site too. Is "transmission" something other than a https client? If it is trying to speak something other than http wrapped in tls, it is unlikely that nginx will be able to process the requests. f -- Francis Daly francis at daoine.org From timm_nt at yahoo.ca Wed Sep 21 18:28:59 2016 From: timm_nt at yahoo.ca (Tim) Date: Wed, 21 Sep 2016 14:28:59 -0400 Subject: Will nginx be relinked to pick up openssl-1.0.2i? 
Message-ID: <3A6AF4E6-57A7-40F4-A199-72B359031A6D@yahoo.ca> Hi all, This may not be the right list but do you know if the Windows nginx binaries will be relinked to pick up the new openssl-1.0.2 which will be released tomorrow (Sept 22)? Tim From nginx-forum at forum.nginx.org Wed Sep 21 19:30:31 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 21 Sep 2016 15:30:31 -0400 Subject: access_log format $remote_user anonymous question Message-ID: So in my access logs all my other logs the $remote_user is empty. But for only this one single IP that keeps making requests the $remote_user has a value. CF-Real-IP: 176.57.129.88 - CF-Server: 10.108.22.151 - anonymous [21/Sep/2016:18:54:52 +0100] "GET /media/files/29/96/2b/701f56b345ce531192645ddb532a8fd7.mp4 HTTP/1.1" Status:503 206 "http://www.networkflare.com/" "Mozilla/5.0 (;FW 2.0.7070.p; Windows NT 6.1; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; Tablet PC 2.0; rv:11.0) like Gecko" Why is the $remote_user value on this person anonymous where everyone else it is empty ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269734,269734#msg-269734 From steve at greengecko.co.nz Wed Sep 21 20:08:37 2016 From: steve at greengecko.co.nz (steve) Date: Thu, 22 Sep 2016 08:08:37 +1200 Subject: Start nginx worker process with same user as master process In-Reply-To: <6defa4ec7f35acbc7bf1642ba3cf89e8.NginxMailingListEnglish@forum.nginx.org> References: <20160920195850.GQ11677@daoine.org> <6defa4ec7f35acbc7bf1642ba3cf89e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <57E2E8C5.1070905@greengecko.co.nz> Hi, On 09/21/2016 05:51 PM, Sushma wrote: > Thanks for the details. > I have explicitly changed permissions for directories as required. > But the problem I am facing here is nginx reload fails due to permission > denied for proxy_temp folder. 
> I had explicitly changed permissions for this folder so that it could be > accesssed by user abc (user with which nginx is running). > However, when buffering happens, I see that the folder permissions are > changed to be owned by nginx user. Hence my reload fails due to permissions > issues. > Is there something I am missing here? Does nginx remember the user with > which it was installed and is that why these folder permissions are > changing? > You need to restart nginx, not reload it. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at forum.nginx.org Wed Sep 21 21:17:22 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 21 Sep 2016 17:17:22 -0400 Subject: access_log format $remote_user anonymous question In-Reply-To: References: Message-ID: http://nginx.org/en/docs/http/ngx_http_core_module.html#variables $remote_user user name supplied with the Basic authentication Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269734,269738#msg-269738 From nginx-forum at forum.nginx.org Wed Sep 21 21:28:26 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 21 Sep 2016 17:28:26 -0400 Subject: access_log format $remote_user anonymous question In-Reply-To: References: Message-ID: <32eee3584912464981f36e05632ef479.NginxMailingListEnglish@forum.nginx.org> Thanks for the information so based of what that resource says and from what I understand surely that field should only say "anonymous" or "username" if on those files / folders in my Nginx config I use "auth_basic" ? http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html The fact they are inputting that header unlike everyone else just alerts me. Because I don't use auth_basic anywhere would anything bad happen if I did the following. 
if($remote_user != "^$") { #Block requests where the user is not empty / missing return 444; } Their IP is also listed on stopforumspam's database, which raises suspicion further https://www.stopforumspam.com/ipcheck/176.57.129.88 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269734,269740#msg-269740 From nginx-forum at forum.nginx.org Wed Sep 21 21:49:30 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 21 Sep 2016 17:49:30 -0400 Subject: access_log format $remote_user anonymous question In-Reply-To: <32eee3584912464981f36e05632ef479.NginxMailingListEnglish@forum.nginx.org> References: <32eee3584912464981f36e05632ef479.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1887fc1afb2717e6ca26fbecbd63f786.NginxMailingListEnglish@forum.nginx.org> It's just an attempt to gain access, ignore it, we get thousands a day. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269734,269741#msg-269741 From bycn82 at gmail.com Thu Sep 22 04:11:29 2016 From: bycn82 at gmail.com (Bill Yuan) Date: Thu, 22 Sep 2016 12:11:29 +0800 Subject: Question about the url rewrite before pass Message-ID: Hello, I am looking for a proxy which can "bounce" the request, which is not a classic proxy. I want it to work this way: e.g. a proxy is running at 192.168.1.1 and when I want to open www.yahoo.com, I just need to call http://192.168.1.1/www.yahoo.com. The proxy can pick up the host "www.yahoo.com" from the URI and retrieve the info for me, so it needs to get the new $host from $location, and remove the $host from $location before proxy passing it. Is it doable via nginx? Regards Bill -------------- next part -------------- An HTML attachment was scrubbed...
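Something along these lines can be sketched in nginx itself with a regex location that captures the host out of the URI. This is untested, the resolver address is only an example, and a config like this is an open proxy unless access to it is restricted:

```nginx
# Sketch of a "bounce" proxy: /www.yahoo.com/some/path is proxied to
# http://www.yahoo.com/some/path. A resolver is required because
# proxy_pass uses variables here; 8.8.8.8 is just an example server.
# Restrict access (allow/deny, auth) before exposing anything like this.
location ~ ^/(?<target_host>[^/]+)(?<target_uri>/.*)$ {
    resolver 8.8.8.8;
    proxy_set_header Host $target_host;
    proxy_pass http://$target_host$target_uri;
}
```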
URL: From francis at daoine.org Thu Sep 22 07:25:24 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 22 Sep 2016 08:25:24 +0100 Subject: access_log format $remote_user anonymous question In-Reply-To: <32eee3584912464981f36e05632ef479.NginxMailingListEnglish@forum.nginx.org> References: <32eee3584912464981f36e05632ef479.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160922072524.GX11677@daoine.org> On Wed, Sep 21, 2016 at 05:28:26PM -0400, c0nw0nk wrote: Hi there, > Thanks for the information so based of what that resource says and from what > I understand surely that field should only say "anonymous" or "username" if > on those files / folders in my Nginx config I use "auth_basic" ? No. That variable has a value if the request includes the Authorization header that indicates Basic authentication. It has a value whether or not the password provided is correct. If you don't use auth_basic, or have not otherwise confirmed that the provided password is valid and matches the username provided, then you have no reason to believe that the provided name is "real". > Because I don't use auth_basic anywhere would anything bad happen if I did > the following. > > if($remote_user != "^$") { #Block requests where the user is not empty / > missing > return 444; > } "if" uses "=" for string match, and "~" for regex match. So your idea is sound, but the implementation is wrong. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 22 09:54:35 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Thu, 22 Sep 2016 05:54:35 -0400 Subject: Transmission remote GUI proxy_protocol broken header In-Reply-To: <20160921165453.GW11677@daoine.org> References: <20160921165453.GW11677@daoine.org> Message-ID: <058448ba63653d78e148a8aa669ed24d.NginxMailingListEnglish@forum.nginx.org> Hi, here's some clarifications: What is the thing writing to nginx? 
(stunnel, I think) stunnel according to the setup: Transmission remote GUI:443 -> sshttp:443 -> stunnel:1443 -> nginx:127.0.0.1:443 (no ssl, with listen ... proxy_protocol, port_in_redirect on) How is it configured? [tls] accept = 192.168.1.31:1443 connect = 127.0.0.1:1081 protocol = proxy [ssh] sni = tls:tti.go.ro connect = 127.0.0.1:22 renegotiation = no debug = 5 cert = /home/adr/apps/etc/nginx/certs/adrhc.go.ro-server-pub.pem key = /home/adr/apps/etc/nginx/certs/adrhc.go.ro-server-priv-no-pwd.pem [tls to any http] sni = tls:* # using nginx proxy_protocol (is http though using 443!): connect = 127.0.0.1:443 protocol = proxy What version of proxy_protocol is stunnel writing? it's the one from nginx 1.11.3 ... Is "transmission" something other than a https client? - it's this: transmission-daemon, 2.84-3ubuntu3, amd64, lightweight BitTorrent client (daemon) with this configuration in nginx: # http://127.0.0.1:9091/transmission/web/ location /transmission/ { proxy_pass http://127.0.0.1:9091/transmission/; proxy_redirect http://127.0.0.1:9091/ /; proxy_cookie_domain 127.0.0.1:9091 adrhc.go.ro; proxy_set_header Host 127.0.0.1:9091; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10M; proxy_connect_timeout 120; proxy_read_timeout 300; } If it is trying to speak something other than http wrapped in tls, it is unlikely that nginx will be able to process the requests. I gues it tries not because it's working fine with https://adrhc.go.ro/transmission/ but when stunnel is not involved e.g.: Transmission remote GUI:443 -> sshttp:443 -> nginx:127.0.0.1:1443 (with ssl, without listen ... 
proxy_protocol, port_in_redirect off) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269662,269744#msg-269744 From nginx-forum at forum.nginx.org Thu Sep 22 09:56:47 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Thu, 22 Sep 2016 05:56:47 -0400 Subject: Start nginx worker process with same user as master process In-Reply-To: <20160921163206.GU11677@daoine.org> References: <20160921163206.GU11677@daoine.org> Message-ID: <32ad2d9f59cbb957b8a99673135c2f5e.NginxMailingListEnglish@forum.nginx.org> Thanks a lot Francis. Apparently nginx was once started as root. So automatically the ownership of the temp folders got changed to the nginx user. This explains the sudden permission change even though I had set it explicitly. Thanks for your help. Cheers, Sushma Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269006,269745#msg-269745 From nginx-forum at forum.nginx.org Thu Sep 22 10:35:32 2016 From: nginx-forum at forum.nginx.org (gromiak) Date: Thu, 22 Sep 2016 06:35:32 -0400 Subject: proxy cache + pseudo-streaming for mp4/flv In-Reply-To: References: Message-ID: <9a7f1c7f0805e1dcdf43f74490a4ab45.NginxMailingListEnglish@forum.nginx.org> Hi, the link to the patch is not working, could you please provide a new one? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266660,269746#msg-269746 From black.fledermaus at arcor.de Thu Sep 22 11:13:15 2016 From: black.fledermaus at arcor.de (basti) Date: Thu, 22 Sep 2016 13:13:15 +0200 Subject: always run same script in location Message-ID: <9cbdb6e1-55ef-ec32-f36d-804982f5e9e7@arcor.de> Hello, I have a script where I can upload files. The URI is like https://example.com/foo/bar.pl the location looks like location ~ ^/foo/(.*.\.pl|cgi)$ { ... } then an upload URL is generated https://example.com/foo/u/f28c104/df3d-45ce/example.txt the location for the uploaded files looks like location ~ ^/foo/u/(.+?)(/.*)$ { fastcgi_param SCRIPT_FILENAME /www/example.com/foo/dl.pl; ...
} All is matching as expected: I need to download the file via the dl.pl script, but it looks like the script is not called at this location. I only get the "default download" menu of the browser. Best regards ps: in apache there is a SetHandler and an Action for doing that. How can I do that in nginx? From nginx-forum at forum.nginx.org Thu Sep 22 11:57:17 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Thu, 22 Sep 2016 07:57:17 -0400 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: <20160921164301.GV11677@daoine.org> References: <20160921164301.GV11677@daoine.org> Message-ID: I'm just a bit surprised that "port_in_redirect off" does not also work. But that's ok -- I'm often surprised. There's an "if" in src/http/ngx_http_header_filter_module.c which changes the port's value from 443 to 0 when on ssl + port initially 443, so https://adrhc.go.ro/ffp_0.7_armv5 would redirect to http when port_in_redirect is off. "... but I don't know what is the set of conditions under which you would want this ssl-rewrite to happen, and how you would go about configuring that." I'm not sure I understand what you mean (my bad english); the entire setup is one allowing me to access my home server through the corporate firewall while not breaking what I already have (my web sites): browser (ssl) -> sshttp:443 -> stunnel:1443 -> nginx:443:listen proxy_protocol:no ssl ssh client -> sshttp:443 -> ssh:22 -> ssh traffic detectable by firewall (I don't want that) ssh client -> stunnel in client mode:local-custom-port -> sshttp:443 -> stunnel:1443 -> ssh:22 -> firewall sees only ssl traffic (better) See https://adrhc.go.ro/wordpress/ssh-http-and-https-multiplexing/ for instructions on full setup. "It looks like nobody else has had that particular use case ..."
This seems odd for me; I'm sure I'm not the only guy starving for open ports to internet (only 80 and 443 allowed) :D Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269623,269748#msg-269748 From nginx-forum at forum.nginx.org Thu Sep 22 12:00:43 2016 From: nginx-forum at forum.nginx.org (mastercan) Date: Thu, 22 Sep 2016 08:00:43 -0400 Subject: Are there plans for Nginx supporting HTTP/2 server push? Message-ID: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> Is there something like a release timeline for HTTP/2 server push feature in Nginx? It would help make https connections faster and get rid of one TCP roundtrip. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269749,269749#msg-269749 From black.fledermaus at arcor.de Thu Sep 22 12:15:39 2016 From: black.fledermaus at arcor.de (basti) Date: Thu, 22 Sep 2016 14:15:39 +0200 Subject: always run same script in location In-Reply-To: <9cbdb6e1-55ef-ec32-f36d-804982f5e9e7@arcor.de> References: <9cbdb6e1-55ef-ec32-f36d-804982f5e9e7@arcor.de> Message-ID: <3ac88c0d-1a17-3382-d60d-ad31dbeec286@arcor.de> I have files by myself. the part of my conf looks like location ~ ^/foo/(.*.\.pl|cgi)$ { ... } location ~ ^/foo/d/(.+?)(/.*)$ { try_files foo /foo/dl.pl; } foo is a non existent file, so always dl.pl is executed On 22.09.2016 13:13, basti wrote: > Hello, > > i have a script where i can upload files. the uri is like > > https://example.com/foo/bar.pl > > the location looks like > > location ~ ^/foo/(.*.\.pl|cgi)$ { > ... > } > > then a upload url is generatred > > https://example.com/foo/u/f28c104/df3d-45ce/example.txt > > the location for the uploaded files looks like > > location ~ ^/foo/u/(.+?)(/.*)$ { > fastcgi_param SCRIPT_FILENAME /www/example.com/foo/dl.pl; > ... > } > > all is matching expact: > I need to download the file via dl.pl script. It looks like that the > script is not called at this location. 
> I only the the "default download" menu of the browser > > Best regards > > ps: > in apache there is a SetHandler and an Action for doing that. how can i > do in ngx? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From manole at amazon.com Thu Sep 22 13:12:21 2016 From: manole at amazon.com (Manole, Sorin) Date: Thu, 22 Sep 2016 13:12:21 +0000 Subject: nginx default unix domain socket permissions and umask Message-ID: <1FE37B17-52E2-468E-988D-ADE3065343E7@amazon.com> Hello, It seems that when nginx creates unix domain sockets as a result of the listen directive it assigns rw permissions for all users. This is probably because the bind() call which creates the file follows the process umask. Nginx sets the umask to 0 which is the most relaxed setting. Is there a way to control the permissions assigned at creation to unix domain sockets created by nginx? Is there a deep reason to always set the umask to 0? Would it be better to let the user decide the umask and inherit it from the process starting nginx? Thanks. Amazon Development Center (Romania) S.R.L. registered office: 3E Palat Street, floor 2, Iasi, Iasi County, Iasi 700032, Romania. Registered in Romania. Registration number J22/2621/2005. From sven.falempin at gmail.com Thu Sep 22 15:04:15 2016 From: sven.falempin at gmail.com (sven falempin) Date: Thu, 22 Sep 2016 11:04:15 -0400 Subject: Tar gz shenanigans in a location Message-ID: Nginx readers, I have a WebDAV-like server that serves files, and I access it through nginx; it is actually a subversion repo, so the files (and directories) are listed in an ugly html page, not recursively. /directory/files1 /directory/files2 [..] I am fishing for ideas to do something like location /directory.tar.gz { return tar_ball /directory/*; } which would do tar czvf - /directory/* Of course this assumes I would somehow get the file list first ...
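nginx has no built-in way to run `tar` from a location, so the usual route is to hand the request to a tiny script (CGI behind fcgiwrap or similar) that streams the archive. The core of such a handler is just the pipeline named above; the demo below builds a throwaway directory so it is self-contained, and all paths in it are made up:

```shell
# Demo of the command a /directory.tar.gz handler would run: stream a
# gzipped tarball of one directory. All paths here are illustrative.
set -e
demo=$(mktemp -d)
mkdir "$demo/directory"
echo "files1" > "$demo/directory/files1"
echo "files2" > "$demo/directory/files2"
# In a CGI handler this output would go to stdout after the headers:
tar czf - -C "$demo" directory > "$demo/directory.tar.gz"
tar tzf "$demo/directory.tar.gz"
```

In a real handler the directory name would come from the request URI, and the script would print a `Content-Type: application/gzip` header plus a blank line before the `tar` output.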
Please share clever ideas -- --------------------------------------------------------------------------------------------------------------------- () ascii ribbon campaign - against html e-mail /\ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 22 16:01:37 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 Sep 2016 19:01:37 +0300 Subject: Will nginx be relinked to pick up openssl-1.0.2i? In-Reply-To: <3A6AF4E6-57A7-40F4-A199-72B359031A6D@yahoo.ca> References: <3A6AF4E6-57A7-40F4-A199-72B359031A6D@yahoo.ca> Message-ID: <20160922160137.GU73038@mdounin.ru> Hello! On Wed, Sep 21, 2016 at 02:28:59PM -0400, Tim wrote: > This may not be the right list but do you know if the Windows > nginx binaries will be relinked to pick up the new openssl-1.0.2 > which will be released tomorrow (Sept 22)? As far as I can see, the only issue marked as "high" doesn't affect nginx as it doesn't allow renegotiation. And most of the other issues doesn't apply as well, or they are mostly theoretical. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Thu Sep 22 18:45:22 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 22 Sep 2016 19:45:22 +0100 Subject: Transmission remote GUI proxy_protocol broken header In-Reply-To: <058448ba63653d78e148a8aa669ed24d.NginxMailingListEnglish@forum.nginx.org> References: <20160921165453.GW11677@daoine.org> <058448ba63653d78e148a8aa669ed24d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160922184522.GY11677@daoine.org> On Thu, Sep 22, 2016 at 05:54:35AM -0400, adrhc wrote: Hi there, > What is the thing writing to nginx? (stunnel, I think) > stunnel according to the setup: I strongly suspect that your stunnel is not doing what you want it to do. If you "tcpdump" the traffic out of stunnel; or if you replace nginx with a "netcat" listener so you can see the bytes that are transferred; I think you will see something other than plain http coming out of it. 
> How is it configured? > [tls to any http] > sni = tls:* > # using nginx proxy_protocol (is http though using 443!): > connect = 127.0.0.1:443 > protocol = proxy https://www.stunnel.org/static/stunnel.html, in the "sni=" section, says """The connect option of the slave service is ignored when the protocol option is specified, as protocol connects to the remote host before TLS handshake.""" I suspect that that is related to what stunnel is doing. Have you any way of verifying that stunnel can do what you want, and does do what you want with this configuration? > What version of proxy_protocol is stunnel writing? > it's the one from nginx 1.11.3 ... nginx is listening (I think) for proxy-protocol version 1. If stunnel is writing version 2, things will go wrong. > If it is trying to speak something other than http wrapped in tls, > it is unlikely that nginx will be able to process the requests. > I gues it tries not because it's working fine with > https://adrhc.go.ro/transmission/ but when stunnel is not involved e.g.: > Transmission remote GUI:443 -> sshttp:443 -> nginx:127.0.0.1:1443 (with ssl, > without listen ... proxy_protocol, port_in_redirect off) Ok, so from that you can read that nginx access log to see what the first request that "transmission" makes is. Then you can see whether that gets to your no-ssl nginx on port 443. I think you have shown that it does not. If you are interested in testing, it might be worth seeing what happens if you put stunnel in front of nginx-ssl-proxy-protocol, or in front of nginx-ssl, or in front of nginx without proxy-protocol. Depending on the bytes that make it to nginx and how they are interpreted, that might point at whether the problem is with stunnel writing, or with nginx reading, in the original case that you care about.
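When comparing captured bytes, it helps to know that a valid PROXY protocol version 1 preamble is a single human-readable ASCII line terminated by CRLF, sent before the HTTP request proper (the addresses and ports below are made up for illustration; version 2 is a binary format and would look nothing like this in a dump):

```shell
# What nginx's "listen ... proxy_protocol" expects first on the connection
# for protocol version 1; the addresses/ports are illustrative only.
printf 'PROXY TCP4 192.0.2.1 192.0.2.10 56324 443\r\n' | od -c | head -n 3
```

Anything binary-looking at the start of the dump, as in the "broken header" log earlier in the thread, suggests the peer is sending raw TLS records or proxy-protocol v2 instead.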
Good luck with it, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Thu Sep 22 19:27:46 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 22 Sep 2016 12:27:46 -0700 Subject: (Semi-OT) Clickjacking countermeasure Message-ID: <20160922192746.5517396.81207.11020@lazygranch.com> I ran one of these website inspection services on my website and it was deemed to be subject to Clickjacking. This might be a false positive since I don't use frames, but the info on this link was enough to make the error go away. I chose "DENY" since I don't use frames.? https://geekflare.com/add-x-frame-options-nginx/ ? The inspection was from tinfoilsecurity.com. If you are blocking AWS (and you should be from Web ports ), you will have to make an exception for their IP. From nginx-forum at forum.nginx.org Thu Sep 22 20:34:42 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 22 Sep 2016 16:34:42 -0400 Subject: (Semi-OT) Clickjacking countermeasure In-Reply-To: <20160922192746.5517396.81207.11020@lazygranch.com> References: <20160922192746.5517396.81207.11020@lazygranch.com> Message-ID: <3ea173a2e8516f0efea2be795e5bf0a2.NginxMailingListEnglish@forum.nginx.org> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet Inside your tags. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269763,269773#msg-269773 From mdounin at mdounin.ru Thu Sep 22 20:35:16 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 Sep 2016 23:35:16 +0300 Subject: nginx default unix domain socket permissions and umask In-Reply-To: <1FE37B17-52E2-468E-988D-ADE3065343E7@amazon.com> References: <1FE37B17-52E2-468E-988D-ADE3065343E7@amazon.com> Message-ID: <20160922203516.GY73038@mdounin.ru> Hello! On Thu, Sep 22, 2016 at 01:12:21PM +0000, Manole, Sorin wrote: > Hello, > > It seems that when nginx creates unix domain sockets as a result > of the listen directive it assigns rw permissions for all users. 
> This is probably because the bind() call which creates the file > follows the process umask. Nginx sets the umask to 0 which is > the most relaxed setting. > > Is there a way to control the permissions assigned at creation > to unix domain sockets created by nginx? I don't think so. If you want to limit access to unix sockets created by nginx, most trivial solution would be to create them in a directory with appropriate permissions. > Is there a deep reason to always set the umask to 0? Would it be > better to let the user decide the umask and inherit it from the > process starting nginx? The umask is set to 0 for nginx to be able to control permissions when explicitly configured (for example when saving files using proxy_store, http://nginx.org/r/proxy_store_access). -- Maxim Dounin http://nginx.org/ From lists at lazygranch.com Thu Sep 22 20:48:32 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 22 Sep 2016 13:48:32 -0700 Subject: (Semi-OT) Clickjacking countermeasure In-Reply-To: <3ea173a2e8516f0efea2be795e5bf0a2.NginxMailingListEnglish@forum.nginx.org> References: <20160922192746.5517396.81207.11020@lazygranch.com> <3ea173a2e8516f0efea2be795e5bf0a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160922204832.5517396.68987.11027@lazygranch.com> I saw that, but I took the path of least resistance. The method I mentioned was sufficient ?to pass the tinfoilsecurity.com test. To tinfoils's credit, they provided three references on Clickjacking, one of which is the website you suggested. ? Original Message ? From: c0nw0nk Sent: Thursday, September 22, 2016 1:34 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: (Semi-OT) Clickjacking countermeasure https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet Inside your tags. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269763,269773#msg-269773 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Sep 22 20:57:28 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 22 Sep 2016 16:57:28 -0400 Subject: (Semi-OT) Clickjacking countermeasure In-Reply-To: <20160922204832.5517396.68987.11027@lazygranch.com> References: <20160922204832.5517396.68987.11027@lazygranch.com> Message-ID: <3bc21c9c25e8dcbbf8a4e9aa5df71627.NginxMailingListEnglish@forum.nginx.org> If you read the OWASP page, it also mentions header stripping and proxies that will remove the X-Frame-Options headers. There is no real way to stop proxies framing your site, but X-Frame-Options combined with that JavaScript is a good way to start and will stop the majority. Breaking their proxies is also what I like to do. For example, I combine it with not allowing people to browse with JavaScript disabled. (This is good for adverts too, since ads use JavaScript, so why would you let people browse with JavaScript disabled?) There are some proxies that will still get through; for example, this one shows persistence, but block their IPs and problem solved: https://www.hidemyass.com/proxy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269763,269776#msg-269776 From lists at lazygranch.com Thu Sep 22 21:05:48 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 22 Sep 2016 14:05:48 -0700 Subject: (Semi-OT) Clickjacking countermeasure In-Reply-To: <3bc21c9c25e8dcbbf8a4e9aa5df71627.NginxMailingListEnglish@forum.nginx.org> References: <20160922204832.5517396.68987.11027@lazygranch.com> <3bc21c9c25e8dcbbf8a4e9aa5df71627.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160922210548.5517396.76900.11030@lazygranch.com> I serve no ads. I even pulled my piwik so that my sites can be surfed with no scripts.
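[Editor's note: for readers landing on this thread, the countermeasure discussed above comes down to a single directive. A minimal sketch, with "DENY" as the original poster chose (use SAMEORIGIN instead if you frame your own pages):]

```nginx
# Clickjacking countermeasure: forbid other sites from framing your pages.
# DENY = never allow framing; SAMEORIGIN = allow only same-origin frames.
add_header X-Frame-Options "DENY" always;
```

The `always` parameter (available since nginx 1.7.5) makes the header appear on error responses too, not only on 2xx/3xx responses.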
Can you clickjack an encrypted page? How would the browser handle two certs? ----- Original Message ----- From: c0nw0nk Sent: Thursday, September 22, 2016 1:57 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: (Semi-OT) Clickjacking countermeasure If you read the OWASP page it will also mention about header stripping etc and proxies that will remove the X-Frames headers there is no real way to stop proxies framing your site but the X-Frame-Options combined with that JavaScript is a good way to start it will stop the majority. Also break their proxies is what I like to do. For example I combine it with not allowing people to browse with JavaScript disabled. (this is good for adverts too since ads use JavaScript so why would you let people browse with JavaScript disabled ?) There are some proxies that will still get through for example this one shows persistence but block their IP's and problem solved https://www.hidemyass.com/proxy Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269763,269776#msg-269776 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Sep 23 09:20:56 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Fri, 23 Sep 2016 05:20:56 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) Message-ID: Hello, [root at web1 ~]# nginx -v nginx version: nginx/1.11.4 Now, after 13 days, we suddenly observe the following in the nginx logs, appearing at unpredictable times, causing nginx to reload and slowing down the server: posix_memalign(16, 16384) failed (12: Cannot allocate memory) This happens after our upgrade to the latest nginx version through nDeploy. I called in an nginx sysadmin, the nDeploy sysadmin too, and finally cloudlinux support, which did an incredible job investigating the issue over 7 days by enabling multiple kernel debug tools to find out what is going on.
All nginx/Linux settings have been tweaked/verified. The issue can't be solved, and about 5 guys have broken their heads on it without being able to solve it. We know all the basics, even advanced material, and experts were brought in. Cloudlinux support says this is the cause, and that you need an nginx expert to find out why nginx behaves like this: From the information we collected it appears that nginx is really changing its ulimits:

# grep nginx /home/abackupnomem3.log | tail
nginx-792752 [009] 5438179.898678: setrlimit: (sys_setrlimit+0x63/0x70

The conclusion is that nginx manages those rlimits. This is not a solution, but a pointer for you on where to dig more. This was added: ulimit -q unlimited in /etc/init.d/nginx:

start() {
    echo -n $"Starting $prog: "
    ulimit -n 64000
    ulimit -q unlimited
    daemon --pidfile=${pidfile} ${nginx} -c ${conffile}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch ${lockfile}
    return $RETVAL
}

Anyone have a clue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269787#msg-269787 From nginx-forum at forum.nginx.org Fri Sep 23 10:42:18 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Fri, 23 Sep 2016 06:42:18 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: References: Message-ID: <136082294a1e73a4d95b51e411ec0573.NginxMailingListEnglish@forum.nginx.org> I add my config at server level:

#Core Functionality
user nobody;
worker_processes auto;
worker_rlimit_nofile 50000;
thread_pool iopool threads=32 max_queue=65536;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error_log;
#error_log /home/abackup/debug.log debug;

#Load Dynamic Modules
#include /etc/nginx/conf.d/dynamic_modules_custom.conf;

events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
    accept_mutex off;
}

#Settings For other core modules like for example the stream module
include /etc/nginx/conf.d/main_custom_include.conf;
#Settings for the http core module
include /etc/nginx/conf.d/http_settings.conf;

***************************

http {
    sendfile on;
    sendfile_max_chunk 512k;
    aio threads=iopool;
    directio 50m; #Serve Large files like media files using directio
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 60;
    keepalive_disable msie6 safari;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 128m;
    client_body_buffer_size 256k;
    map_hash_bucket_size 128;
    map_hash_max_size 2048;

    #Tweak timeout settings below in case of a DOS attack
    client_header_timeout 1m;
    client_body_timeout 1m;
    reset_timedout_connection on;
    connection_pool_size 512;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    request_pool_size 8k;
    output_buffers 4 32k;
    postpone_output 1460;

    #FastCGI
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    # the below options depend on theoretical maximum of your PHP script run-time
    fastcgi_read_timeout 300;
    fastcgi_send_timeout 300;

    server_names_hash_max_size 256000;
    server_names_hash_bucket_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

    # Open File Cache
    open_file_cache max=8192 inactive=5m;
    open_file_cache_valid 5m;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Logging Settings
    open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;
    #Mapping $msec to $sec so that we don't break cPanel bandwidth calculator
    map $msec $sec {
        ~^(?P<secres>.+)\. $secres;
    }
    log_format bytes_log "$sec $bytes_sent .";
    log_not_found off;
    access_log off;

    # Micro-caching nginx
    proxy_cache_path /var/cache/nginx/microcaching keys_zone=micro:20m levels=1:2 inactive=900s max_size=2000m;
    proxy_cache micro;
    proxy_cache_lock on;
    proxy_cache_valid 200 1s;
    proxy_cache_use_stale updating;
    proxy_cache_bypass $cookie_nocache $arg_nocache;

    # GeoIP
    # Uncomment to enable
    #geoip_country /usr/share/GeoIP/GeoLiteCountry.dat;
    #geoip_city /usr/share/GeoIP/GeoLiteCity.dat;

    #Limit Request Zone conf
    include /etc/nginx/conf.d/limit_request_custom.conf;
    #
    #CloudFare RealIP conf
    include /etc/nginx/conf.d/cloudfare_realip.conf;
    #
    #FastCGI and PROXY cache config
    include /etc/nginx/conf.d/nginx_cache.conf;
    #
    #Phusion Passenger Setting
    include /etc/nginx/conf.d/passenger.conf;
    #
    #Custom Include File where you can include any custom settings
    include /etc/nginx/conf.d/custom_include.conf;
    #
    # Virtual Host Configs
    include /etc/nginx/conf.d/default_server.conf;
    include /etc/nginx/sites-enabled/*.conf;
}

**********************************
vhost level
**********************************

location / {
    limit_req zone=FLOODPROTECT burst=100;
    limit_conn PERIP 125;
    limit_conn PERSERVER 500;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 32k;
    proxy_buffers 16 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_connect_timeout 300s;
    proxy_pass http://PROXYLOCATION;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    proxy_set_header Proxy "";
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269788#msg-269788 From nginx-forum at forum.nginx.org Fri Sep 23 11:36:04 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Fri, 23 Sep 2016 07:36:04 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To:
<136082294a1e73a4d95b51e411ec0573.NginxMailingListEnglish@forum.nginx.org> References: <136082294a1e73a4d95b51e411ec0573.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3a3c01cbe9ac4a004369e1215a243813.NginxMailingListEnglish@forum.nginx.org> sysctl tweaked at maximum already:

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Tweak for nginx workers/connections added 16/09/2016 for issue investigation on posix error in nginx logs
net.core.somaxconn = 512
net.core.netdev_max_backlog = 512
net.ipv4.tcp_max_syn_backlog = 20480

# Tweaks added 16/09/2016 for issue investigation on posix error in nginx logs
net.netfilter.nf_conntrack_max = 196608
net.nf_conntrack_max = 196608

# Tweaks added 19/09/2016 cloudlinux
vm.max_map_count=655300

# Tweaks added 20/09/2016
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Decrease the time default value for tcp_keepalive_time connection
net.ipv4.tcp_keepalive_time = 1800

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Enable TCP SYN Cookie Protection
net.ipv4.tcp_syncookies = 1

# Increase the tcp-time-wait buckets pool size
net.ipv4.tcp_max_tw_buckets = 1440000

# Turn off the tcp_sack
net.ipv4.tcp_sack = 0

# Turn off the tcp_timestamps
net.ipv4.tcp_timestamps = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# Disable IPv6 autoconf
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.eth0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0

# Various
vm.swappiness = 1
vm.disable_fs_reclaim=1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

#Disable CloudLinux ptrace
kernel.user_ptrace = 0

# Symlinks
fs.enforce_symlinksifowner = 1
fs.symlinkown_gid = 99

# CageFS
fs.proc_super_gid = 485
fs.proc_can_see_other_uid=0
fs.suid_dumpable=1

# SecureLinks Link Traversal Protection Allowed Group Id
fs.protected_symlinks_allow_gid = 487
fs.fs.protected_hardlinks_allow_gid = 487
fs.file-max = 1048576
fs.protected_symlinks_create = 0
fs.protected_hardlinks_create = 0

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269792#msg-269792 From nginx-forum at forum.nginx.org Fri Sep 23 11:39:02 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Fri, 23 Sep 2016 07:39:02 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <3a3c01cbe9ac4a004369e1215a243813.NginxMailingListEnglish@forum.nginx.org> References: <136082294a1e73a4d95b51e411ec0573.NginxMailingListEnglish@forum.nginx.org> <3a3c01cbe9ac4a004369e1215a243813.NginxMailingListEnglish@forum.nginx.org> Message-ID:

nginx -V
nginx version: nginx/1.11.4
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.2h 3 May 2016 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-openssl=./openssl-1.0.2h --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-ipv6 --with-http_v2_module --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-release-1.11.33.3-beta --add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.0/gems/passenger-5.0.30/src/nginx_module --add-module=ngx_cache_purge-2.3 --add-module=ngx_brotli --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269793#msg-269793 From mdounin at mdounin.ru Fri Sep 23 16:07:28 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Sep 2016 19:07:28 +0300 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: References: Message-ID: <20160923160728.GB73038@mdounin.ru> Hello! 
On Fri, Sep 23, 2016 at 05:20:56AM -0400, JohnCarne wrote: > Hello, > > [root at web1 ~]# nginx -v > nginx version: nginx/1.11.4 > > We are now after 13 days we observer suddenly in nginx logs this in an > intempestive manner, and causing nginx to reload, causing slow down on > server : posix_memalign(16, 16384) failed (12: Cannot allocate memory) > > This happens after our upgrade to last nginx version through nDeploy. > > I called in nginx sysadmin, Ndeploy sysadmin too, and finally cloudlinux > support which made an incredible job investigating the issue over 7 days by > enabling multiple kernel debug tools to find out what is going on. > > All nginx/linux settings has been tweaked/verified. Issue can't be solved, > and about 5 guys has broken their head on the issue, without being able to > solve. We know all the basic, even advanced, and experts were in. Just a basic hint, in case you haven't tried it yet: re-compile nginx without any 3rd party modules, and check if it helps. > Cloudlinux support says this is the cause, and you need nginx expert to find > out why nginx beheave likes this : > > From the information we collected it appears that nginx is really changing > his ulimits: > # grep nginx /home/abackupnomem3.log | tail > nginx-792752 [009] 5438179.898678: setrlimit: (sys_setrlimit+0x63/0x70 > > Conclusion is that nginx manage those rlimits. This is not a solution, but a > way for you where to dig more. The setrlimit() call is used by nginx to manage some limits it knows about and configured to manage. In particular, it is used for the worker_rlimit_nofile 50000; directive as seen in your config, and for the worker_rlimit_core directive. Details about the directives can be found here: http://nginx.org/r/worker_rlimit_core http://nginx.org/r/worker_rlimit_nofile They set RLIMIT_CORE and RLIMIT_NOFILE limits, nothing more, and have nothing to do with the memory allocation errors you see. 
-- Maxim Dounin http://nginx.org/ From philip.walenta at gmail.com Fri Sep 23 16:38:44 2016 From: philip.walenta at gmail.com (Philip Walenta) Date: Fri, 23 Sep 2016 11:38:44 -0500 Subject: upstream and proxy_pass behavior Message-ID: I'm trying to reduce the number of location blocks I need for an application by trying to route using a query parameter. My URL looks like this: https://www.example.com/api/v1/proxy/?tid=9999 If I do this in my location block:

location ~* /api/v1/proxy {
    proxy_pass http://origin.$arg_tid:10001;
}

and have an upstream of:

upstream origin.9999 {
    server 1.2.3.4;
}

I see an error of: *51 no resolver defined to resolve origin.9999, client: 5.6.7.8, server: www.example.com, request: "GET /api/v1/proxy/?tid=9999 HTTP/1.1", host: "www.example.com" - as if it only considers a DNS lookup, even though there is an upstream server block configured. However, if I change my location block (removing the port):

location ~* /api/v1/proxy {
    proxy_pass http://origin.$arg_tid;
}

and have an upstream of:

upstream origin.9999 {
    server 1.2.3.4:10001;
}

It works perfectly. Is there a reason the first example isn't working? It's very valuable to me to be able to pass the port to the upstream server as shown in the second example as it reduces the number of upstream blocks I need by a factor of 10 or more. Thanks in advance for any help. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Sep 23 18:03:54 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Fri, 23 Sep 2016 14:03:54 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: <5243ed5196be7798c789a53998786270.NginxMailingListEnglish@forum.nginx.org> Thank you for your feedback. We have now reverted to 1.11.3 without the brotli and geoip modules (this version had caused no issue). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269801#msg-269801 From mdounin at mdounin.ru Fri Sep 23 18:59:33 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Sep 2016 21:59:33 +0300 Subject: upstream and proxy_pass behavior In-Reply-To: References: Message-ID: <20160923185932.GF73038@mdounin.ru> Hello! On Fri, Sep 23, 2016 at 11:38:44AM -0500, Philip Walenta wrote: [...] > I see an error of: > > *51 no resolver defined to resolve origin.9999, client: 5.6.7.8, server: > www.example.com, request: "GET /api/v1/proxy/?tid=9999 HTTP/1.1", host: " > www.example.com" - as if it only considers a DNS lookup, even though there > is an upstream server block configured. [...] > Is there a reason the first example isn't working? > > It's very valuable to me to be able to pass the port to the upstream server > as shown in the second example as it reduces the number of upstream blocks > I need by a factor of 10 or more. When using upstream groups as defined using the upstream{} directive, ports are specified on a per-server basis in the "server" directives of the upstream group. If there is no port explicitly specified, it just means that port 80 will be used by default. If you'll try something like this, without variables:

upstream u {
    server 127.0.0.1;
}

proxy_pass http://u:8080;

then nginx will complain right during configuration parsing:

nginx: [emerg] upstream "u" may not have port 8080 in ...
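[Editor's note: for readers skimming the thread, the working arrangement described earlier can be summarized as: when proxy_pass contains a variable, put the port on the upstream's server line rather than after the group name. A sketch using the thread's own example names:]

```nginx
# Working pattern from this thread: the port belongs on the server line,
# not after the upstream group name in proxy_pass.
upstream origin.9999 {
    server 1.2.3.4:10001;
}

location ~* /api/v1/proxy {
    proxy_pass http://origin.$arg_tid;
}
```

This keeps one upstream block per backend while letting each block carry its own port.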
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Sep 24 05:51:40 2016 From: nginx-forum at forum.nginx.org (Nomad Worker) Date: Sat, 24 Sep 2016 01:51:40 -0400 Subject: Is ngx_http_v2_module ready for business application? Message-ID: <3cbe548e60e26827bdb25ab2fb6893cb.NginxMailingListEnglish@forum.nginx.org> 'The module is experimental, caveat emptor applies' is written in the docs of ngx_http_v2_module. However, nginx already has a stable version (1.10.1) which contains the v2 module. If we use this latest stable version's http/2 module officially, what are the risks? And I know that NGINX Plus fully supports http/2. If we use NGINX Plus instead of version 1.10.1, what are the benefits? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269806,269806#msg-269806 From anoopalias01 at gmail.com Sat Sep 24 08:58:27 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 24 Sep 2016 14:28:27 +0530 Subject: performance hit in using too many if's Message-ID: Hi, I was following some suggestions on blocking user agents, SQL injections etc. as in the following URL https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc Just wanted to know what is the performance hit when using so many of these if's (in light of the if-is-evil policy), especially if the server has a lot of virtual hosts and the rules are matched for each of them. Is it like: If the server is capable (beefy) it should be able to handle these URLs?
or There is a huge performance penalty. Significantly more than apache+mod_security, as an example. or There is a performance penalty but not as much as other security tools or WAFs like naxsi or mod_security. Thanks in advance, -- Anoop P Alias From lists at lazygranch.com Sat Sep 24 09:15:12 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 24 Sep 2016 02:15:12 -0700 Subject: performance hit in using too many if's In-Reply-To: References: Message-ID: <20160924091512.5468245.36485.11100@lazygranch.com> I suspect the map module can do that more efficiently. There is an example of how to use the map module in this post: http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html The code is certainly cleaner using map. I use three maps, specifically for bad user agent, bad request, and bad referrer. ----- Original Message ----- From: Anoop Alias Sent: Saturday, September 24, 2016 1:58 AM To: Nginx Reply To: nginx at nginx.org Subject: performance hit in using too many if's Hi, I was following some suggestions on blocking user agents, SQL injections etc. as in the following URL https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc Just wanted to know what is the performance hit when using so many of these if's (in light of the if-is-evil policy). Especially if the server is having a lot of virtual hosts and the rules are matched for each of them. Is it like: If the server is capable (beefy) it should be able to handle these URL ?
or There is a huge performance penalty. Significantly more than apache+mod_security as an example or There is a performance penalty but not as much as other security tools or WAF's like naxsi or mod_security Thanks in advance, -- Anoop P Alias _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From anoopalias01 at gmail.com Sat Sep 24 09:38:52 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 24 Sep 2016 15:08:52 +0530 Subject: performance hit in using too many if's In-Reply-To: <20160924091512.5468245.36485.11100@lazygranch.com> References: <20160924091512.5468245.36485.11100@lazygranch.com> Message-ID: I understand that the map may look cleaner in the config, as each vhost doesn't need the if matchings, but the variable evaluation, and therefore the pattern matching against all possible values, is still happening when the mapped variable is encountered, and therefore there is still a huge performance penalty? I am mainly asking this as the above type of security configs is mostly not seen on nginx official blogs/documentation etc. Just wanted to know if people who know the internals have purposefully omitted these settings even though they serve the purpose of security. On Sat, Sep 24, 2016 at 2:45 PM, wrote: > I suspect the map module can do that more efficiently. There is an example of how to use the map module in this post: > > http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html > > The code is certainly cleaner using map. I use three maps, specifically for bad user agent, bad request, and bad referrer.
> > > Original Message > From: Anoop Alias > Sent: Saturday, September 24, 2016 1:58 AM > To: Nginx > Reply To: nginx at nginx.org > Subject: performance hit in using too many if's > > Hi, > > I was following some suggestions on blocking user agents, sql > injections etc as in the following URL > > https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc > > Just wanted to know what is the performance hit when using so many of > these if's ( in light of the if-is-evil policy ). Especially if the > server is having a lot of virtual hosts and the rules are matched for > each of them. > > Is it like: > > If the server is capable (beefy) it should be able to handle these URL ? > > or > > There is a huge performance penalty. Significantly more than > apache+mod_security as an example > > or > > There is a performance penalty but not as much as other security tools > or WAF's like naxsi or mod_security > > > Thanks in advance, > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Anoop P Alias From lists at lazygranch.com Sat Sep 24 10:02:15 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 24 Sep 2016 03:02:15 -0700 Subject: performance hit in using too many if's In-Reply-To: References: <20160924091512.5468245.36485.11100@lazygranch.com> Message-ID: <20160924100215.5468245.47080.11103@lazygranch.com> Possibly map uses a hashing scheme to do the matches, so it could be more efficient than a series of ifs. That is something the programmers would know. Every situation is different. I don't find the maps I use to be detrimental, especially if you are preventing further operations by the nginx.
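[Editor's note: the map-based blocking described in this exchange can be sketched like this — the patterns below are hypothetical placeholders for illustration, not the poster's actual lists:]

```nginx
# One hash/regex lookup per request instead of a chain of if's.
# Patterns here are hypothetical examples.
map $http_user_agent $bad_bot {
    default        0;
    ~*acunetix     1;
    ~*masscan      1;
}

server {
    listen 80;
    if ($bad_bot) {
        return 444;   # close the connection without sending a response
    }
}
```

The map block belongs in the http context, so every vhost can test the same $bad_bot variable; the value is computed at most once per request, when the variable is first used.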
I can tell you I trimmed about a third of my network traffic by aggressively blocking scrapers and other bots. There are real savings to be had. Returning a 404 to a bad referrer can improve your page rank as well as reduce network traffic. For instance, I 404 any referral from stumbleupon.com because I never found one to be relevant when I looked at the link. I have other referrals from knuckle head websites that I'd rather not be associated with, and a few that turned out to disseminate malware. One referral went to a terrorist website. Why they picked me, I don't know. The link was to nothing relevant. Just do the code and watch the system load. I think you will find your concerns are not a problem. If map bugs you, you probably wouldn't like my ipfw blocking of VPS, colos, hosting companies, etc. that have attempted to hack my website. I'm up to 14k CIDRs, but here again, you have to assume a table in IPFW is intelligently searched. The server today that you block for attempting to hack WordPress is likely to be used when the next zero day comes out. If the IP doesn't have eyeballs and it isn't the few bots you like (Google, etc.), block them. ----- Original Message ----- From: Anoop Alias Sent: Saturday, September 24, 2016 2:39 AM To: Nginx Reply To: nginx at nginx.org Subject: Re: performance hit in using too many if's I understand that the map may look cleaner on the config as each vhost don't need the if matchings ..but the variable evaluation and therefore the pattern matching for all possible values is still happening when the mapped variable is encountered?
On Sat, Sep 24, 2016 at 2:45 PM, wrote: > ?I suspect the map module can do that more efficiently. There is an example of how to use the map module in this post: > > http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html > > The code is certainly cleaner using map. I use three maps, specifically for bad user agent, bad request, and bad referrer. > > > > Original Message > From: Anoop Alias > Sent: Saturday, September 24, 2016 1:58 AM > To: Nginx > Reply To: nginx at nginx.org > Subject: performance hit in using too many if's > > Hi, > > I was following some suggestions on blocking user agents,sql > injections etc as in the following URL > > https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc > > Just wanted to know what is the performance hit when using so many of > these if's ( in light of the if-is-evil policy ). Especially if the > server is having a lot of virtual hosts and the rules are matched for > each of them. > > Is it like: > > If the server is capable (beefy) it should be able to handle these URL ? > > or > > There is a huge performance penalty .Significantly more than > apache+mod_security as an example > > or > > The is a performance penalty but not as much as other security tools > or WAF's like naxsi or mod_security > > > Thanks in advance, > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Anoop P Alias _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Sat Sep 24 11:35:08 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sat, 24 Sep 2016 16:35:08 +0500 Subject: 302 Redirect only if node is UP !! 
Message-ID: Hi, Is there a way we can set NGINX to redirect only if the caching node is UP, and otherwise serve from the origin server? Here are more details about the scenario. We've two servers (Origin & Cache) and here is the request flow:

- client (1.1.1.1) requests a file from the origin server
- Origin checks if the IP is coming from a specific ISP (in our case YES, 1.1.1.1 belongs to an ISP which we want to redirect to the caching node for better latency)
- So Origin 302-redirects the request to the caching node.
- client is served by the caching node.

Now, in case our ONLY single caching node goes down and nginx keeps on redirecting clients to the caching node, that would be really bad. So we're wondering if we could put a condition in NGINX to redirect only if the caching node is UP; otherwise the redirect should be skipped and the request served by Origin. We know there's also another option we could explore, i.e. upstream{}, but it'll proxy requests via Origin, which we don't want; the only requirement is to 302 the client. Need advice from the experts here. Thanks in advance!! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Sat Sep 24 11:40:38 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 24 Sep 2016 04:40:38 -0700 Subject: performance hit in using too many if's In-Reply-To: <20160924100215.5468245.47080.11103@lazygranch.com> References: <20160924091512.5468245.36485.11100@lazygranch.com> <20160924100215.5468245.47080.11103@lazygranch.com> Message-ID: Pardon me, but this thread smells terribly of bikeshedding. Comparing ifs vs maps is useless when what you're trying to accomplish should never be done through an HTTP server config. It's security theater, and no, the low-hanging fruit argument does not apply here. Use a proper WAF like libmodsec or naxsi and call it a day.
> On Sep 24, 2016, at 03:02, lists at lazygranch.com wrote: > > Possibly map uses a hashing scheme to do the matches, so it could be more efficient than a series of ifs. That is something the programmers would know. > > Every situation is different. I don't find the maps I use to be detrimental, especially if you are preventing further operations by the nginx. I can tell you a trimmed about a third of my network traffic by aggressively blocking scrapers and other bots. There are real savings to be had. > > Returning a 404 to a bad referrer can improve your page rank as well as reduce network traffic. For instance, I 404 any referral from stumbleupon.com because I never found one to be relevant when I looked at the link. I have other referrals from knuckle head websites that I rather not be associated with, and a few that turned out to disseminate malware. One referral went to a terrorist website. Why they picked me, I don't know. The link was to nothing relevant. > > Just do the code and watch the system load. I think you will find you concerns are not a problem. > > If map bugs you, you probably wouldn't like my ipfw blocking of VPS, colos, hosting companies, etc. that have attempted to hack my website. I'm up to 14k CIDRs, but here again, you have to assume a table in IPFW is intelligently searched. The server today that you block for attempting to hack WordPress is likely to be used when the next zero day comes out. If the IP doesn't have eyeballs and it isn't the few bots you like (Google, etc.), block them. > Original Message > From: Anoop Alias > Sent: Saturday, September 24, 2016 2:39 AM > To: Nginx > Reply To: nginx at nginx.org > Subject: Re: performance hit in using too many if's > > I understand that the map may look cleaner on the config as each vhost > don't need the if matchings ..but the variable evaluation and > therefore the pattern matching for all possible values is still > happening when the mapped variable in encountered? 
and therefore there > is still a huge performance penalty ? > > I am mainly asking this..as the above type of security configs are > mostly not seen on nginx official blogs /documentation etc . > Just wanted to know if people who know the internals have purposefully > omitted these setting even though they are serving the purpose of > security. > > > >> On Sat, Sep 24, 2016 at 2:45 PM, wrote: >> ?I suspect the map module can do that more efficiently. There is an example of how to use the map module in this post: >> >> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html >> >> The code is certainly cleaner using map. I use three maps, specifically for bad user agent, bad request, and bad referrer. >> >> >> >> Original Message >> From: Anoop Alias >> Sent: Saturday, September 24, 2016 1:58 AM >> To: Nginx >> Reply To: nginx at nginx.org >> Subject: performance hit in using too many if's >> >> Hi, >> >> I was following some suggestions on blocking user agents,sql >> injections etc as in the following URL >> >> https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc >> >> Just wanted to know what is the performance hit when using so many of >> these if's ( in light of the if-is-evil policy ). Especially if the >> server is having a lot of virtual hosts and the rules are matched for >> each of them. >> >> Is it like: >> >> If the server is capable (beefy) it should be able to handle these URL ? 
>> >> or >> >> There is a huge performance penalty .Significantly more than >> apache+mod_security as an example >> >> or >> >> The is a performance penalty but not as much as other security tools >> or WAF's like naxsi or mod_security >> >> >> Thanks in advance, >> >> -- >> Anoop P Alias >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Sat Sep 24 14:08:41 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 24 Sep 2016 07:08:41 -0700 Subject: performance hit in using too many if's In-Reply-To: References: <20160924091512.5468245.36485.11100@lazygranch.com> <20160924100215.5468245.47080.11103@lazygranch.com> Message-ID: <20160924140841.5468245.63576.11114@lazygranch.com> I had too many false positives with Naxsi and debugging is difficult. In any event, using Naxsi doesn't eliminate the need to block bad referrals, so you still need the map module. ? I have passed tinfoilsecurity.com flogging, as well as one of the transversal testers. So this is more than just security theater.? I flag all the hackers with a 444, then use scripts to display the 444 log entries in full line and also just a list of IPs. If I see a ridiculous number of attacks from one IP, it gets blocked even if an ISP. Otherwise I just examine the list of IPs and if it lacks eyeballs, I block the entire IP space of the entity. 
This eliminates having to look at the same infected servers or bulletproof hosting every time I check the logs. ? Original Message ? From: Robert Paprocki Sent: Saturday, September 24, 2016 4:41 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: performance hit in using too many if's Pardon me, but this thread smells terribly of bikeshedding. Comparing ifs vs maps is useless when what you're trying to accomplish should never be done through an HTTP server config. It's security theater, and no, the low-hanging fruit argument does not apply here. Use a proper waf like libmodsec or naxsi and call it a day. > On Sep 24, 2016, at 03:02, lists at lazygranch.com wrote: > > Possibly map uses a hashing scheme to do the matches, so it could be more efficient than a series of ifs. That is something the programmers would know. > > Every situation is different. I don't find the maps I use to be detrimental, especially if you are preventing further operations by the nginx. I can tell you a trimmed about a third of my network traffic by aggressively blocking scrapers and other bots. There are real savings to be had. > > Returning a 404 to a bad referrer can improve your page rank as well as reduce network traffic. For instance, I 404 any referral from stumbleupon.com because I never found one to be relevant when I looked at the link. I have other referrals from knuckle head websites that I rather not be associated with, and a few that turned out to disseminate malware. One referral went to a terrorist website. Why they picked me, I don't know. The link was to nothing relevant. > > Just do the code and watch the system load. I think you will find you concerns are not a problem. > > If map bugs you, you probably wouldn't like my ipfw blocking of VPS, colos, hosting companies, etc. that have attempted to hack my website. I'm up to 14k CIDRs, but here again, you have to assume a table in IPFW is intelligently searched. 
The server today that you block for attempting to hack WordPress is likely to be used when the next zero day comes out. If the IP doesn't have eyeballs and it isn't the few bots you like (Google, etc.), block them. > Original Message > From: Anoop Alias > Sent: Saturday, September 24, 2016 2:39 AM > To: Nginx > Reply To: nginx at nginx.org > Subject: Re: performance hit in using too many if's > > I understand that the map may look cleaner on the config as each vhost > don't need the if matchings ..but the variable evaluation and > therefore the pattern matching for all possible values is still > happening when the mapped variable in encountered? and therefore there > is still a huge performance penalty ? > > I am mainly asking this..as the above type of security configs are > mostly not seen on nginx official blogs /documentation etc . > Just wanted to know if people who know the internals have purposefully > omitted these setting even though they are serving the purpose of > security. > > > >> On Sat, Sep 24, 2016 at 2:45 PM, wrote: >> ?I suspect the map module can do that more efficiently. There is an example of how to use the map module in this post: >> >> http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html >> >> The code is certainly cleaner using map. I use three maps, specifically for bad user agent, bad request, and bad referrer. >> >> >> >> Original Message >> From: Anoop Alias >> Sent: Saturday, September 24, 2016 1:58 AM >> To: Nginx >> Reply To: nginx at nginx.org >> Subject: performance hit in using too many if's >> >> Hi, >> >> I was following some suggestions on blocking user agents,sql >> injections etc as in the following URL >> >> https://www.howtoforge.com/nginx-how-to-block-exploits-sql-injections-file-injections-spam-user-agents-etc >> >> Just wanted to know what is the performance hit when using so many of >> these if's ( in light of the if-is-evil policy ). 
Especially if the >> server is having a lot of virtual hosts and the rules are matched for >> each of them. >> >> Is it like: >> >> If the server is capable (beefy) it should be able to handle these URL ? >> >> or >> >> There is a huge performance penalty .Significantly more than >> apache+mod_security as an example >> >> or >> >> The is a performance penalty but not as much as other security tools >> or WAF's like naxsi or mod_security >> >> >> Thanks in advance, >> >> -- >> Anoop P Alias >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Sep 24 18:10:58 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Sat, 24 Sep 2016 14:10:58 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: <50327040a3598b86d8538f719f706c05.NginxMailingListEnglish@forum.nginx.org> Maxim, After 29 hours the error re-appeared just once, which is much less than before. I see a correlation on my monit system at this exact time: apache traffic had a peak, which equals a big download peak. I'm now thinking of nginx tweaks I have not done yet. I now enlarge from 64m to client_max_body_size 256m; From 256k client_body_buffer_size 512k;
add : send_timeout 300s; This one could be issue : sendfile_max_chunk 512k; I put it to 0 This is too large too directio 50m; I put it as : directio 4m; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269816#msg-269816 From emailgrant at gmail.com Sun Sep 25 00:50:54 2016 From: emailgrant at gmail.com (Grant) Date: Sat, 24 Sep 2016 17:50:54 -0700 Subject: nginx reverse proxy causing TCP queuing spikes In-Reply-To: References: Message-ID: > I've been struggling with http response time slowdowns and > corresponding spikes in my TCP Queuing graph in munin. I'm using > nginx as a reverse proxy to apache which then hands off to my backend, > and I think the proxy_read_timeout line in my nginx config is at least > contributing to the issue. Here is all of my proxy config: > > proxy_read_timeout 60m; > proxy_pass http://127.0.0.1:8080; > > I think this means I'm leaving connections open for 60 minutes after > the last server response which sounds like a bad thing. However, some > of my admin pages need to run for a long time while they wait for the > server-side stuff to execute. I only use the proxy_read_timeout > directive on my admin locations and I'm experiencing the TCP spikes > and http slowdowns during the exact hours that the admin stuff is in > use. It turns out this issue was due to Odoo which also runs behind nginx in a reverse proxy configuration on my machine. Has anyone else had that kind of trouble with Odoo? - Grant From emailgrant at gmail.com Sun Sep 25 00:56:42 2016 From: emailgrant at gmail.com (Grant) Date: Sat, 24 Sep 2016 17:56:42 -0700 Subject: limit-req and greedy UAs In-Reply-To: References: <20160909013940.5501012.10243.10085@lazygranch.com> <20160909163036.5501012.8924.10125@lazygranch.com> <20160911152141.5484628.98176.10223@lazygranch.com> <20160911191606.5484628.46851.10233@lazygranch.com> Message-ID: > limit_req works with multiple connections, it is usually configured per IP > using $binary_remote_addr. 
See > http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone > - you can use variables to set the key to whatever you like. > > limit_req generally helps protect eg your backend against request floods > from a single IP and any amount of connections. limit_conn protects against > excessive connections tying up resources on the webserver itself. I'm suspicious that Odoo which runs behind nginx in a reverse proxy config could be creating too many connections or something similar and bogging things down for my main site which runs in apache2 behind nginx as well. Is there a good way to find out? Stopping the Odoo daemon certainly kills the problem instantly. - Grant From emailgrant at gmail.com Sun Sep 25 01:10:38 2016 From: emailgrant at gmail.com (Grant) Date: Sat, 24 Sep 2016 18:10:38 -0700 Subject: Speed up initial connection Message-ID: Is there anything I can do to speed up the initial connection? It seems like the first page of my site I hit is consistently slower to respond than all subsequent requests. This is the case even when my backend session is still valid and unexpired for that initial request. Is 'multi_accept on;' a good idea? - Grant From steve at greengecko.co.nz Sun Sep 25 07:29:29 2016 From: steve at greengecko.co.nz (steve) Date: Sun, 25 Sep 2016 20:29:29 +1300 Subject: Speed up initial connection In-Reply-To: References: Message-ID: <5a33993e-06c2-ecbf-227e-f38d00e15842@greengecko.co.nz> Hi, On 25/09/16 14:10, Grant wrote: > Is there anything I can do to speed up the initial connection? It > seems like the first page of my site I hit is consistently slower to > respond than all subsequent requests. This is the case even when my > backend session is still valid and unexpired for that initial request. > Is 'multi_accept on;' a good idea? > > - Grant If you use something like webpagetest.org, you can break down the connection. You may see that it's DNS, server side processing of maybe something local?? 
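Following up on the suggestion to break the connection down: curl can produce the same per-phase breakdown locally with its `-w` write-out timers (this is a generic sketch, not specific to Grant's site; the function name is just for illustration):

```shell
# Break one request into phases. A slow dns figure points at name resolution;
# a large gap between tls and ttfb points at server-side processing (backend,
# session setup) rather than the network.
timing_breakdown() {
    curl -o /dev/null -s -w \
'dns:   %{time_namelookup}s
tcp:   %{time_connect}s
tls:   %{time_appconnect}s
ttfb:  %{time_starttransfer}s
total: %{time_total}s
' "$1"
}
# usage: timing_breakdown https://example.com/
```

Run it twice in a row: if only the first hit is slow, the dns and tcp/tls lines usually show whether the cost is resolution, connection setup, or the backend's first response.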
From francis at daoine.org Sun Sep 25 08:35:53 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 25 Sep 2016 09:35:53 +0100 Subject: listen proxy_protocol and rewrite redirect scheme In-Reply-To: References: <20160921164301.GV11677@daoine.org> Message-ID: <20160925083553.GZ11677@daoine.org> On Thu, Sep 22, 2016 at 07:57:17AM -0400, adrhc wrote: Hi there, > I'm just a bit surprised that "port_in_redirect off" does not also > work. But that's ok -- I'm often surprised. > There's a "if" in src/http/ngx_http_header_filter_module.c which changes > port's value from 443 to 0 when on ssl + port initially 443 so > https://adrhc.go.ro/ffp_0.7_armv5 would redirect to http when > port_in_redirect is off. Ah, right, that makes sense. As it happens, that is only necessary because your extra patch cares about when port=443. Potentially, a fuller solution to the "use https redirects even though this is http" question would not care about "port", and so "port_in_redirect" would not matter then. But as I said: what you have works for you, and is therefore good as-is. > "... but I don't know what is the set of conditions under which you would > want this ssl-rewrite to happen, and how you would go about configuring > that." > I'm not sure I understand what you mean (my bad english); the entire setup > is one allowing me to access my home server through the corporate firewall > wile not breaking what I already have (my web sites): My intention was: *if* there were to be some directive or variable in nginx that could be set to get nginx to use https redirects even though nginx believes that the connection is over http; *then* how and where would that directive or variable be set? Until the "then" has a clear answer, the "if" will not happen. But also: it does not matter right now. You have an adequate solution for you; if someone else has the same problem and wants a fuller solution, they can worry about it then. 
> "It looks like nobody else has had that particular use case ..." > This seems odd for me; I'm sure I'm not the only guy starving for open ports > to internet (only 80 and 443 allowed) :D Possibly other people came up with different solutions, or did not use nginx in the same way that you are using it. Anyway - it is good that you found a solution, and thanks for having shared it. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Sep 25 09:46:46 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Sun, 25 Sep 2016 05:46:46 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: <31b8cd32b716454f82dd0fcbcf288247.NginxMailingListEnglish@forum.nginx.org> I confirm we still not escaped with error which appeared just now : 2016/09/25 09:22:15 [emerg] 461680#461680: posix_memalign(16, 16384) failed (12: Cannot allocate memory) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269824#msg-269824 From reallfqq-nginx at yahoo.fr Sun Sep 25 13:58:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 25 Sep 2016 15:58:20 +0200 Subject: nginx reverse proxy causing TCP queuing spikes In-Reply-To: References: Message-ID: It is most probably a question more suitable to some Odoo ML. --- *B. R.* On Sun, Sep 25, 2016 at 2:50 AM, Grant wrote: > > I've been struggling with http response time slowdowns and > > corresponding spikes in my TCP Queuing graph in munin. I'm using > > nginx as a reverse proxy to apache which then hands off to my backend, > > and I think the proxy_read_timeout line in my nginx config is at least > > contributing to the issue. Here is all of my proxy config: > > > > proxy_read_timeout 60m; > > proxy_pass http://127.0.0.1:8080; > > > > I think this means I'm leaving connections open for 60 minutes after > > the last server response which sounds like a bad thing. 
However, some > > of my admin pages need to run for a long time while they wait for the > > server-side stuff to execute. I only use the proxy_read_timeout > > directive on my admin locations and I'm experiencing the TCP spikes > > and http slowdowns during the exact hours that the admin stuff is in > > use. > > > It turns out this issue was due to Odoo which also runs behind nginx > in a reverse proxy configuration on my machine. Has anyone else had > that kind of trouble with Odoo? > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Sep 25 20:38:30 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sun, 25 Sep 2016 16:38:30 -0400 Subject: Nginx Serving Large Static Files >=2GB Message-ID: <9e91692b4f8342e8890f9b1a7751c6b0.NginxMailingListEnglish@forum.nginx.org> So I want to find the best optimal settings for serving large static files with Nginx. >=2GB I read that "output_buffers" is the key. Would also like to know if it should be defined per location {} that the static file is served from or across the entire server via http {} and any other settings that should be in place or left at defaults. 
Also curious if any of this would even apply to us who use Nginx on Windows | http://nginx-win.ecsds.eu/ I did read an old email by Maxim Dounin here : http://mailman.nginx.org/pipermail/nginx/2012-May/033761.html It also mentions output_buffers should be increased, but whether it should be set just per location {} or in http {}, and what value it should be increased to based on the size of the file being served, is what I'd like to know :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269834,269834#msg-269834 From lists at lazygranch.com Sun Sep 25 21:58:33 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 25 Sep 2016 14:58:33 -0700 Subject: fake googlebots Message-ID: <20160925145833.076ee55e@linux-h57q.site> I got a spoofed googlebot hit. It was easy to detect since there were probably a hundred requests that triggered my hacker detection map scheme. Only two requests received a 200 return and both were harmless.
From wandenberg at gmail.com Sun Sep 25 22:04:45 2016 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Mon, 26 Sep 2016 00:04:45 +0200 Subject: fake googlebots In-Reply-To: <20160925145833.076ee55e@linux-h57q.site> References: <20160925145833.076ee55e@linux-h57q.site> Message-ID: Some time ago I wrote this module to check when an access is done through the Google Proxy using reverse DNS + DNS resolve and comparing the results to validate the access. You can do something similar. On Sun, Sep 25, 2016 at 11:58 PM, lists at lazygranch.com wrote: > I got a spoofed googlebot hit. It was easy to detect since there were > probably a hundred requests that triggered my hacker detection map > scheme. Only two requests received a 200 return and both were harmless. > > 200 118.193.176.53 - - [25/Sep/2016:17:45:23 +0000] "GET / HTTP/1.1" 847 > "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot. > html)" "-" > > For the fake googlebot: > # host 118.193.176.53 > Host 53.176.193.118.in-addr.arpa not found: 3(NXDOMAIN) > > For a real googlebot: > # host 66.249.69.184 > 184.69.249.66.in-addr.arpa domain name pointer > crawl-66-249-69-184.googlebot.com. > > IP2location shows it is a Chinese ISP: > 3(NXDOMAIN)http://www.ip2location.com/118.193.176.53 > > Nginx has a reverse DNS module: > https://github.com/flant/nginx-http-rdns > I see it has a 10.1 issue: > https://github.com/flant/nginx-http-rdns/issues/8 > > Presuming this bug gets fixed, does anyone have code to verify > googlebots? Or some other method? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rainer at ultra-secure.de Sun Sep 25 22:06:31 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Mon, 26 Sep 2016 00:06:31 +0200 Subject: fake googlebots In-Reply-To: <20160925145833.076ee55e@linux-h57q.site> References: <20160925145833.076ee55e@linux-h57q.site> Message-ID: <3323F9EA-CC3B-4F67-99A1-FAF68CDBA789@ultra-secure.de> > Am 25.09.2016 um 23:58 schrieb lists at lazygranch.com: > > I got a spoofed googlebot hit. It was easy to detect since there were > probably a hundred requests that triggered my hacker detection map > scheme. Only two requests received a 200 return and both were harmless. > > 200 118.193.176.53 - - [25/Sep/2016:17:45:23 +0000] "GET / HTTP/1.1" 847 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-" > > For the fake googlebot: > # host 118.193.176.53 > Host 53.176.193.118.in-addr.arpa not found: 3(NXDOMAIN) > > For a real googlebot: > # host 66.249.69.184 > 184.69.249.66.in-addr.arpa domain name pointer crawl-66-249-69-184.googlebot.com. > > IP2location shows it is a Chinese ISP: > 3(NXDOMAIN)http://www.ip2location.com/118.193.176.53 > > Nginx has a reverse DNS module: > https://github.com/flant/nginx-http-rdns > I see it has a 10.1 issue: > https://github.com/flant/nginx-http-rdns/issues/8 > > Presuming this bug gets fixed, does anyone have code to verify > googlebots? Or some other method? Sorry to be so blunt - but what?s the point? You can also password-protect your site and give the credentials only to your friends. Problem solved. Most of the traffic of the web these days is created by bots (unless you?re a popular shop or offer original, often updated content for popular topics, then you?ll actually get visitors). If it?s not the Big G, it might be bing or baidu or yandex or some other bot. 
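For readers wondering what the "hacker detection map scheme" mentioned at the top of this thread might look like in config terms, here is a minimal sketch (the patterns are placeholders for illustration, not a vetted blocklist):

```nginx
# http {} context -- each map is a single lookup per request,
# evaluated lazily the first time its variable is read.
map $http_user_agent $bad_agent {
    default       0;
    ~*sqlmap      1;   # placeholder patterns: tune against your own logs
    ~*nikto       1;
}
map $http_referer $bad_referer {
    default               0;
    "~*stumbleupon\.com"  1;
}

server {
    # "return" inside "if" is one of the documented-safe uses of if.
    # 444 drops the connection without a response; the hit still
    # appears in the access log, so it can be scripted afterwards.
    if ($bad_agent)   { return 444; }
    if ($bad_referer) { return 444; }
    ...
}
```

This keeps per-vhost config down to the two `if` lines while the matching itself lives in one place.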
From lists at lazygranch.com Sun Sep 25 22:25:32 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 25 Sep 2016 15:25:32 -0700 Subject: fake googlebots In-Reply-To: References: <20160925145833.076ee55e@linux-h57q.site> Message-ID: <20160925222532.5468245.72003.11173@lazygranch.com> An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Sun Sep 25 22:29:32 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 25 Sep 2016 15:29:32 -0700 Subject: fake googlebots In-Reply-To: <20160925222532.5468245.72003.11173@lazygranch.com> References: <20160925145833.076ee55e@linux-h57q.site> <20160925222532.5468245.72003.11173@lazygranch.com> Message-ID: <9C6C29DC-2F09-4887-9418-CBB73DF37E83@fearnothingproductions.net> > That hacker was quite insistent. I got a 414 (large request) for the first time. Perhaps a buffer overflow attempt. In 2016? I _strongly_ doubt it. ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at neverlandusercontent.com Mon Sep 26 03:00:40 2016 From: mailinglist at neverlandusercontent.com (dommyet) Date: Mon, 26 Sep 2016 11:00:40 +0800 Subject: fake googlebots In-Reply-To: <20160925145833.076ee55e@linux-h57q.site> References: <20160925145833.076ee55e@linux-h57q.site> Message-ID: <1b4937da-fcc7-42b8-9567-c4fd0619eed0@neverlandusercontent.com> IP2location's data is not accurate in China. This IP is located in Hong Kong instead of Shanghai, however it does belong to an IDC registered in Shanghai named 51idc.com. It is just a (misconfigured) proxy server and somebody abused it; banning the address in iptables should work.
On 9/26/2016 05:58, lists at lazygranch.com wrote: > > 200 118.193.176.53 - - [25/Sep/2016:17:45:23 +0000] "GET / HTTP/1.1" 847 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-" > > For the fake googlebot: > # host 118.193.176.53 > Host 53.176.193.118.in-addr.arpa not found: 3(NXDOMAIN) > > IP2location shows it is a Chinese ISP: > 3(NXDOMAIN)http://www.ip2location.com/118.193.176.53 > > From nginx-forum at forum.nginx.org Mon Sep 26 07:28:42 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Mon, 26 Sep 2016 03:28:42 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: <37ecf348e2bd11ecdab0f40e9a590bce.NginxMailingListEnglish@forum.nginx.org> No error after 24 hours now, nginx version without modules was 1 part of the solution Now I tweak aio nginx with this : directio_alignment 4k; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269843#msg-269843 From sca at andreasschulze.de Mon Sep 26 07:32:59 2016 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 26 Sep 2016 09:32:59 +0200 Subject: fake googlebots / nginx-http-rdns In-Reply-To: <20160925145833.076ee55e@linux-h57q.site> Message-ID: <20160926093259.Horde.mgo4L1A1FlVEpMXKoiWjNEo@andreasschulze.de> lists: > Nginx has a reverse DNS module: > https://github.com/flant/nginx-http-rdns for an older version from 20140411 I have a patch. That version works without problems. 
--- nginx-1.10.1.orig/nginx-http-rdns-20140411/ngx_http_rdns_module.c +++ nginx-1.10.1/nginx-http-rdns-20140411/ngx_http_rdns_module.c @@ -214,7 +214,7 @@ static char * merge_loc_conf(ngx_conf_t } #endif - if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || (core_loc_cf->resolver->udp_connections.nelts == 0))) { + if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || (core_loc_cf->resolver->connections.nelts == 0))) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no core resolver defined for rdns"); return NGX_CONF_ERROR; } > I see it has a 10.1 issue: > https://github.com/flant/nginx-http-rdns/issues/8 not sure if my patch addresses that issue... Andreas From lists at lazygranch.com Mon Sep 26 07:55:26 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 26 Sep 2016 00:55:26 -0700 Subject: fake googlebots / nginx-http-rdns In-Reply-To: <20160926093259.Horde.mgo4L1A1FlVEpMXKoiWjNEo@andreasschulze.de> References: <20160925145833.076ee55e@linux-h57q.site> <20160926093259.Horde.mgo4L1A1FlVEpMXKoiWjNEo@andreasschulze.de> Message-ID: <20160926075526.5468245.79323.11192@lazygranch.com> I doubt I could patch source. (I know my limits.) But reverse DNS seems very useful. Someone should fix the module. ? Original Message ? From: A. Schulze Sent: Monday, September 26, 2016 12:33 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: fake googlebots / nginx-http-rdns lists: > Nginx has a reverse DNS module: > https://github.com/flant/nginx-http-rdns for an older version from 20140411 I have a patch. That version works without problems. 
--- nginx-1.10.1.orig/nginx-http-rdns-20140411/ngx_http_rdns_module.c +++ nginx-1.10.1/nginx-http-rdns-20140411/ngx_http_rdns_module.c @@ -214,7 +214,7 @@ static char * merge_loc_conf(ngx_conf_t } #endif - if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || (core_loc_cf->resolver->udp_connections.nelts == 0))) { + if (conf->conf.enabled && ((core_loc_cf->resolver == NULL) || (core_loc_cf->resolver->connections.nelts == 0))) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no core resolver defined for rdns"); return NGX_CONF_ERROR; } > I see it has a 10.1 issue: > https://github.com/flant/nginx-http-rdns/issues/8 not sure if my patch addresses that issue... Andreas _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Sep 26 08:24:00 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Mon, 26 Sep 2016 04:24:00 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: <8563ad4a9017164c500e9d1155c356c6.NginxMailingListEnglish@forum.nginx.org> just now 2016/09/26 10:18:52 [emerg] 5027#5027: malloc(4096) failed (12: Cannot allocate memory) 2016/09/26 10:18:53 [emerg] 5043#5043: malloc(4096) failed (12: Cannot allocate memory) 2016/09/26 10:18:54 [emerg] 5048#5048: malloc(4096) failed (12: Cannot allocate memory) 2016/09/26 10:18:54 [emerg] 5066#5066: malloc(4096) failed (12: Cannot allocate memory) 2016/09/26 10:18:55 [emerg] 5076#5076: malloc(4096) failed (12: Cannot allocate memory) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269846#msg-269846 From nginx-forum at forum.nginx.org Mon Sep 26 08:27:32 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Mon, 26 Sep 2016 04:27:32 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: 
<8563ad4a9017164c500e9d1155c356c6.NginxMailingListEnglish@forum.nginx.org> References: <20160923160728.GB73038@mdounin.ru> <8563ad4a9017164c500e9d1155c356c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <76bc7a60551acd43be841b44c31b4628.NginxMailingListEnglish@forum.nginx.org> also : 2016/09/26 10:26:22 [emerg] 14146#14146: posix_memalign(16, 16384) failed (12: Cannot allocate memory) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269847#msg-269847 From nginx-forum at forum.nginx.org Mon Sep 26 08:43:23 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Mon, 26 Sep 2016 04:43:23 -0400 Subject: performance hit in using too many if's In-Reply-To: <20160924140841.5468245.63576.11114@lazygranch.com> References: <20160924140841.5468245.63576.11114@lazygranch.com> Message-ID: <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> Hello, I don't agree with Robert Paprocki: adding modules like naxsi or modsecurity to nginx is not a solution. They have bugs, performance hits, need patches when there are new versions of nginx,... gariac, you say you send 444 to hackers and then use a script to display those. Why not use fail2ban to scan the logs and ban them for some time? But of course, fail2ban could also be a performance hit if you have tons of logs to scan :-( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269808,269848#msg-269848 From lists at lazygranch.com Mon Sep 26 10:09:23 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 26 Sep 2016 03:09:23 -0700 Subject: performance hit in using too many if's In-Reply-To: <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> References: <20160924140841.5468245.63576.11114@lazygranch.com> <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160926100923.5468245.15989.11206@lazygranch.com> For one thing, I have trouble making fail2ban work. ;-) I run sshguard, so the major port 22 hacking is covered.
And that is continuous. I don't know if fail2ban can read nginx logs. I thought you need to run swatch, which requires actual perl skill to set up. In any event, my 444 is harmless other than someone not getting a reply. I find hackers try to log into WordPress. I find Google tries to log into WordPress. My guess is maybe Google is trying to figure out if you run WordPress, while the hackers would dictionary search if you were actually running WordPress. In my case, I am not running WordPress, but anyone trying to log into it is suspicious. Blocking Google is bad. So I examine the IP addresses. If from a colo, VPS, etc., they get a lifetime ban of the entire IP space. No eyeballs there, or if a VPN, they can just drop it. If the IP goes back to some ISP or occasionally Google, I figure who cares. WordPress isn't my only trigger. I've learned words like the Chinese use for backup, which they search for. Of course "backup" is searched as well. I have maybe 30 triggers in the map. I also limit my verbs to "get" and "head" since I only serve static pages. Ask for php, you get 444. Use wget, curl, nutch, etc., get a 444. The bad referrals get a 404. Since whatever I consider to be hacking is blocked in real time, no problem to the server. I then use the scripts to look at the IPs I deem shady and see who they are. The list is like four or so unique IP addresses a day. Most go to ISPs, often mobile. So I just live with it. If I find a commercial site, I block the hosting company associated with that commercial site. When I ran Naxsi, it would trigger on words like update. I had to change all URLs with the word update in them to a non-reserved word. Some triggers I couldn't even figure out. Thus I determined using the map modules and my own triggers to be a better plan. ----- Original Message -----
From: Alt Sent: Monday, September 26, 2016 1:43 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: performance hit in using too many if's Hello, I don't agree with Robert Paprocki: adding modules like naxsi or modsecurity to nginx is not a solution. They have bugs, performance hits, need patch when there's new versions of nginx,... gariac, you say you send 444 to hackers then use a script to display those. Why not use fail2ban to scan the logs and ban them for some time. But of course, fail2ban could also be a performance hit if you have tons of logs to scan :-( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269808,269848#msg-269848 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Sep 26 10:42:58 2016 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Mon, 26 Sep 2016 06:42:58 -0400 Subject: posix_memalign(16, 16384) failed (12: Cannot allocate memory) In-Reply-To: <20160923160728.GB73038@mdounin.ru> References: <20160923160728.GB73038@mdounin.ru> Message-ID: I'm now testing what was suggested: worker_processes 1; try 1 first, and once the error is fixed you can increase it to 4 or 8. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269787,269850#msg-269850 From anoopalias01 at gmail.com Mon Sep 26 11:28:10 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 26 Sep 2016 16:58:10 +0530 Subject: performance hit in using too many if's In-Reply-To: <20160926100923.5468245.15989.11206@lazygranch.com> References: <20160924140841.5468245.63576.11114@lazygranch.com> <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> <20160926100923.5468245.15989.11206@lazygranch.com> Message-ID: Ok .. reiterating my original question. Is the usage of if / map in nginx config more efficient than say naxsi ( or libmodsecurity ) for something like blocking SQL injection ?
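The kind of check I mean can be sketched with a single map instead of chained ifs (a sketch only; the pattern is illustrative, not a complete SQL-injection signature):

```nginx
# Sketch: one map in place of a chain of if-blocks. The map variable is
# only evaluated when it is referenced, and the pattern below is purely
# illustrative, not a real SQL-injection signature.
map $query_string $block_sql_injections {
    default               0;
    "~*union.*select.*\(" 1;
}

server {
    if ($block_sql_injections) {
        return 403;
    }
}
```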
For example, https://github.com/nbs-system/naxsi/blob/master/naxsi_config/naxsi_core.rules rules 1000-1099 - block sql injection attempts So ..do (to a limited extent ) ## Block SQL injections set $block_sql_injections 0; if ($query_string ~ "union.*select.*\(") { set $block_sql_injections 1; ............ ..................... if ($block_file_injections = 1) { return 403; } From the point of application performance which one is better .. ? Performance for a shared hosting server with around 500 vhosts. On Mon, Sep 26, 2016 at 3:39 PM, wrote: > For one thing, I have trouble making fail2ban work. ;-) I run sshguard, > so the major port 22 hacking is covered. And that is continous. > > I don't know if fail2ban can read nginx logs. I thought you need to run > swatch, which requires actual perl skill to set up. > > In any event, my 444 is harmless other than someone not getting a reply. I > find hackers try to log into WordPress. I find Google trys to log into > WordPress. My guess is maybe Google is trying to figure out if you run > WordPress, while the hackers would dictionary search if you were actually > running WordPress. In my case, I am not running WordPress, but anyone > trying to log into it is suspicious. Blocking Google is bad. > > So I examine the IP addresses. If from a colo, VPS, etc. , they get a > lifetime ban of the entire IP space. No eyeballs there, or if a VPN, they > can just drop it. If the IP goes back to some ISP or occasionally Google, I > figure who cares. > > WordPress isn't my only trigger. I've learned the words like the Chinese > use for backup, which they search for. Of course "backup" is searched as > well. I have maybe 30 triggers in the map. I also limit my verbs to "get" > and "head" since I only serve static pages. Ask for php, you get 444. Use > wget, curl, nutch, etc., get a 444. The bad referrals get a 404. > > Since whatever I consider to be hacking is blocked in real time, no > problem to the server.
I then use the scripts to look at the IPs I deem > shady and see who they are. The list is like four or so unique IP addresses > a day. Most go to ISPs, often mobile. So I just live with it. If I find a > commercial site, I block the hosting company associated with that > commercial site. > > When I ran Naxsi, it would trigger on words like update. I had to change > all URLs with the word update in them to a non reserved word. Some triggers > I couldn't even figure out. Thus I determined using the map modules and my > own triggers to be a better plan. > > Original Message > From: Alt > Sent: Monday, September 26, 2016 1:43 AM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: performance hit in using too many if's > > Hello, > > I don't agree with Robert Paprocki: adding modules like naxsi or > modsecurity > to nginx is not a solution. They have bugs, performance hits, need patch > when there's new versions of nginx,... > > gariac, you say you send 444 to hackers then use a script to display those. > Why not use fail2ban to scan the logs and ban them for some time. But of > course, fail2ban could also be a performance hit if you have tons of logs > to > scan :-( > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269808,269848#msg-269848 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at lazygranch.com Mon Sep 26 15:16:53 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 26 Sep 2016 08:16:53 -0700 Subject: performance hit in using too many if's In-Reply-To: References: <20160924140841.5468245.63576.11114@lazygranch.com> <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> <20160926100923.5468245.15989.11206@lazygranch.com> Message-ID: <20160926151653.5468245.34621.11225@lazygranch.com> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Sep 26 16:10:00 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 26 Sep 2016 12:10:00 -0400 Subject: performance hit in using too many if's In-Reply-To: References: Message-ID: <594082b2b51b7e4c625f45015dc1e53e.NginxMailingListEnglish@forum.nginx.org> Anoop Alias Wrote: ------------------------------------------------------- > Ok .. reiterating my original question. > > Is the usage of if / map in nginx config more efficient than say > naxsi ( > or libmodsecurity ) for something like blocking SQL injection ? > > For example, > https://github.com/nbs-system/naxsi/blob/master/naxsi_cor > e.rules > rules 1000-1099 - blockes sql injection attempt > > So ..do (to a limited extent ) > > ## Block SQL injections > set $block_sql_injections 0; > if ($query_string ~ "union.*select.*\(") { > set $block_sql_injections 1; > ............ > ..................... > if ($block_file_injections = 1) { > return 403; > } > > > > From the point of application performance which one is better .. ? > Performance for a shared hosting server with around 500 vhosts. If your application is vulnerable, I would advise using Naxsi, because it can intercept POST requests. The example you provided uses "$query_string", which only inspects the URL. For example: http://*.com/index.php?id=10 UNION SELECT 1,null,null-- I don't think Nginx has a way to read POST data other than via WAF modules like Naxsi, ModSecurity, etc.
https://www.owasp.org/index.php/Testing_for_SQL_Injection_(OTG-INPVAL-005)#URL_Encoding Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269808,269857#msg-269857 From rpaprocki at fearnothingproductions.net Mon Sep 26 17:17:12 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 26 Sep 2016 10:17:12 -0700 Subject: performance hit in using too many if's In-Reply-To: References: <20160924140841.5468245.63576.11114@lazygranch.com> <05aa62928f62c8ebe62be6d9f7de8c55.NginxMailingListEnglish@forum.nginx.org> <20160926100923.5468245.15989.11206@lazygranch.com> Message-ID: On Mon, Sep 26, 2016 at 4:28 AM, Anoop Alias wrote: > Ok .. reiterating my original question. > > Is the usage of if / map in nginx config more efficient than say naxsi ( > or libmodsecurity ) for something like blocking SQL injection ? > Strictly speaking, and barring performance costs of the regexes themselves, using only if/map directives in place of a full-featured WAF would likely be less expensive, because any decent WAF will do more than just a single regular expression. That doesn't make this a better solution, though. > For example, https://github.com/nbs-system/naxsi/blob/master/nax si_config/naxsi_core.rules > rules 1000-1099 - blockes sql injection attempt > > So ..do (to a limited extent ) > > ## Block SQL injections > set $block_sql_injections 0; > if ($query_string ~ "union.*select.*\(") { > set $block_sql_injections 1; > ............ > Using multiple .* patterns like this is pretty bad form. It doesn't lead to _catastrophic_ backtracking, but there are certainly much smarter and cheaper ways to accomplish this, particularly with larger input sets. Beyond this, checking like this doesn't allow you to examine request body data or arbitrary headers, which seems like a very poor approach. .....................
> if ($block_file_injections = 1) { > return 403; > } > > Using a simple return 403 here, without any logging or debug/audit information, could make it very very difficult to track down false positives and issues with your user base. > From the point of application performance which one is better .. ? > Performance for a shared hosting server with around 500 vhosts. > This smells very much like premature optimization. If you are truly concerned with securing this many sites, adopting a more fully-featured solution should be the goal. If you are truly focused on squeezing out every bit of performance possible, using such a large hammer with generic regexes and hundreds of if/map blocks seems like the wrong road to take. There is a reason that there is no good community solution for a WAF replacement in vanilla Nginx config syntax. It's simply not a good idea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Mon Sep 26 18:58:25 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 26 Sep 2016 11:58:25 -0700 Subject: performance hit in using too many if's In-Reply-To: <594082b2b51b7e4c625f45015dc1e53e.NginxMailingListEnglish@forum.nginx.org> References: <594082b2b51b7e4c625f45015dc1e53e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160926185825.5468245.52387.11240@lazygranch.com> You might want to check out tinfoilsecurity.com to evaluate Naxsi. Microsoft uses them for Azure. I pass all their tests. As I stated a few times, I only serve static pages. I can get away with homebrew hacking detection. But I think you are kidding yourself if you think a stack of WAF rules isn't a CPU burden. There is no free lunch. Someone supporting 500 vhosts probably should segregate the hosts by whether they use SQL or not. You can use different "servers" in the nginx.conf for the plain and SQL-enabled hosts. I wouldn't want the task of handling all the false positives Naxsi will generate.
I think a site that needs a WAF should just go colo or VPS. One of the reasons I see so few hackers is I have built a database of CIDRs to block. I don't get repeat offenders. But you can't have one list for many different users unless they accept your opinion of who to block. Probably an RBL is better. I only use RBLs for email since they do leak information and slow down response. Slow response isn't a big deal for email, but does matter for web hosting. ----- Original Message ----- From: c0nw0nk Sent: Monday, September 26, 2016 9:10 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: performance hit in using too many if's Anoop Alias Wrote: ------------------------------------------------------- > Ok .. reiterating my original question. > > Is the usage of if / map in nginx config more efficient than say > naxsi ( > or libmodsecurity ) for something like blocking SQL injection ? > > For example, > https://github.com/nbs-system/naxsi/blob/master/naxsi_config/naxsi_cor > e.rules > rules 1000-1099 - blockes sql injection attempt > > So ..do (to a limited extent ) > > ## Block SQL injections > set $block_sql_injections 0; > if ($query_string ~ "union.*select.*\(") { > set $block_sql_injections 1; > ............ > ..................... > if ($block_file_injections = 1) { > return 403; > } > > > > From the point of application performance which one is better .. ? > Performance for a shared hosting server with around 500 vhosts. I would advise if your application is vulnerable to use Naxsi because it can intercept post requests the example you provided is "$query_string" (intercepts the URL) For example : http://*.com/index.php?id=10 UNION SELECT 1,null,null-- I don't think Nginx has a way to read POST data other than the WAF methods like Naxsi ModSecurity etc.
https://www.owasp.org/index.php/Testing_for_SQL_Injection_(OTG-INPVAL-005)#URL_Encoding Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269808,269857#msg-269857 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Sep 26 23:41:12 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 26 Sep 2016 19:41:12 -0400 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} Message-ID: <6f2dc7facedab30af711a1ed60e7adf1.NginxMailingListEnglish@forum.nginx.org> So, to prevent flooding / spam by bots (some bots are just brutal when they crawl, jumping to every single page they can get within milliseconds), I am going to apply limits to my PHP block: limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; limit_conn_zone $binary_remote_addr zone=addr:10m; location ~ \.php$ { limit_req zone=one burst=5; limit_conn addr 10; } But in applying these limits to all PHP pages, will that have bad repercussions for Google/Bing/Baidu/Yandex etc.? I don't want them flooding me with requests either, but at the same time I don't want them to start receiving 503 errors either. What's a good setting that won't affect legitimate, decent (I think I just committed a crime calling some of these companies decent?) crawlers like Google, Bing, Baidu, Yandex etc. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269862,269862#msg-269862 From nginx-forum at forum.nginx.org Tue Sep 27 06:07:11 2016 From: nginx-forum at forum.nginx.org (atulhost) Date: Tue, 27 Sep 2016 02:07:11 -0400 Subject: Are there plans for Nginx supporting HTTP/2 server push?
In-Reply-To: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> Message-ID: <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Hi Mastercan, As of now, NGINX supports HTTP/2 natively; here is how to activate it: https://atulhost.com/enable-http2-nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269749,269863#msg-269863 From francis at daoine.org Tue Sep 27 07:56:49 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 27 Sep 2016 08:56:49 +0100 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} In-Reply-To: <6f2dc7facedab30af711a1ed60e7adf1.NginxMailingListEnglish@forum.nginx.org> References: <6f2dc7facedab30af711a1ed60e7adf1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160927075649.GB11677@daoine.org> On Mon, Sep 26, 2016 at 07:41:12PM -0400, c0nw0nk wrote: Hi there, > Whats a good setting that won't effect legitimate decent (I think I just > committed a crime calling some of these companies decent?) crawlers like > Google, Bing, Baidu, Yandex etc. Look at your logs for traffic from Google, Bing, Baidu, Yandex etc. How many of these requests do they make per second, when they make requests? Set your limits to allow those. Note that any of the clients can change their crawl-rate on your site at any time, so any number that worked for you yesterday will not necessarily work tomorrow.
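As a concrete sketch, with the numbers as placeholders to be replaced by whatever your logs show:

```nginx
# Placeholder numbers -- derive real values from your access logs.
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;

server {
    location ~ \.php$ {
        # burst absorbs short crawl spikes; requests beyond it are
        # rejected with limit_req_status (503 by default).
        limit_req zone=one burst=10;
    }
}
```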
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 27 08:59:16 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 27 Sep 2016 04:59:16 -0400 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} In-Reply-To: <20160927075649.GB11677@daoine.org> References: <20160927075649.GB11677@daoine.org> Message-ID: General rule of thumb: set it as low as possible. As soon as 503's are getting your users upset or resources are getting blocked, double the values; keep an eye on the logs, and double it one more time when required. Done. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269862,269866#msg-269866 From oscaretu at gmail.com Tue Sep 27 09:17:18 2016 From: oscaretu at gmail.com (oscaretu .) Date: Tue, 27 Sep 2016 11:17:18 +0200 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} In-Reply-To: References: <20160927075649.GB11677@daoine.org> Message-ID: Perhaps this can help: http://stackoverflow.com/questions/12022429/http-status-code-for-overloaded-server Another option is a *429 - Too Many Requests* response. Defined in RFC6585 - http://tools.ietf.org/html/rfc6585#section-4 The spec does not define how the origin server identifies the user, nor how it counts requests. For example, an origin server that is limiting request rates can do so based upon counts of requests on a per-resource basis, across the entire server, or even among a set of servers. Likewise, it might identify the user by its authentication credentials, or a stateful cookie. Also see the Retry-After header in the response. Kind regards, Oscar On Tue, Sep 27, 2016 at 10:59 AM, itpp2012 wrote: > General rule of thumb is set it as low as possible, as soon as 503's are > getting your users upset or resources are getting blocked, then double the > values, keep an eye on the logs, double it one more time when required, > done. > > Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269862,269866#msg-269866 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Sep 27 09:44:35 2016 From: nginx-forum at forum.nginx.org (Cabchinoe) Date: Tue, 27 Sep 2016 05:44:35 -0400 Subject: nginx with openssl? Message-ID: <9c81bc59ee225408579fbf0d878190e9.NginxMailingListEnglish@forum.nginx.org> I replaced the OpenSSL library with the newest one, 1.0.1u, then compiled and installed the newest nginx. Why does nginx still run with the old OpenSSL? Please help! [root at 10-4-28-10 modules]# /usr/sbin/nginx -V nginx version: nginx/1.11.4 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) built with OpenSSL 1.0.1u 22 Sep 2016 (running with OpenSSL 1.0.1e-fips 11 Feb 2013) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic
--with-http_geoip_module=dynamic --with-http_perl_module=dynamic --add-dynamic-module=../njs/nginx --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' ldconfig -v|grep ssl ldconfig: Path `/usr/lib64/mysql' given more than once /usr/local/ssl/lib: libssl.so.1.0.0 -> libssl.so libssl3.so -> libssl3.so libevent_openssl-2.0.so.5 -> libevent_openssl-2.0.so.5.1.9 libssl.so.1.0.0 -> libssl.so.10 [root at 10-4-28-10 lib64]# pwd /usr/lib64 [root at 10-4-28-10 lib64]# ll |grep ssl lrwxrwxrwx 1 root root 29 Jul 8 16:03 libevent_openssl-2.0.so.5 -> libevent_openssl-2.0.so.5.1.9 -rwxr-xr-x 1 root root 21736 May 11 13:04 libevent_openssl-2.0.so.5.1.9 -rwxr-xr-x 1 root root 270808 May 11 16:10 libssl3.so lrwxrwxrwx 1 root root 28 Sep 27 15:41 libssl.so -> /usr/local/ssl/lib/libssl.so lrwxrwxrwx 1 root root 28 Sep 27 17:03 libssl.so.10 -> /usr/local/ssl/lib/libssl.so lrwxrwxrwx 1 root root 16 Sep 27 16:52 libssl.so.1.0.0 -> libssl.so.1.0.1e lrwxrwxrwx 1 root root 28 Sep 27 16:03 libssl.so.1.0.1e -> /usr/local/ssl/lib/libssl.so drwxr-xr-x. 
3 root root 4096 Sep 27 16:04 openssl Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269868,269868#msg-269868 From nginx-forum at forum.nginx.org Tue Sep 27 11:30:08 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 27 Sep 2016 07:30:08 -0400 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} In-Reply-To: <6f2dc7facedab30af711a1ed60e7adf1.NginxMailingListEnglish@forum.nginx.org> References: <6f2dc7facedab30af711a1ed60e7adf1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, Not sure if it can help you, because only some bots respect it and not in the same way, but you could look at the "crawl-delay" directive in the robots.txt file: https://en.wikipedia.org/wiki/Robots_exclusion_standard#Crawl-delay_directive Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269862,269869#msg-269869 From nginx-forum at forum.nginx.org Tue Sep 27 11:34:18 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 27 Sep 2016 07:34:18 -0400 Subject: performance hit in using too many if's In-Reply-To: <20160926100923.5468245.15989.11206@lazygranch.com> References: <20160926100923.5468245.15989.11206@lazygranch.com> Message-ID: <35323d551f8b6ef4e32545e865555ed9.NginxMailingListEnglish@forum.nginx.org> Hello, You just need fail2ban and no need to know Perl. But you'll probably need to know regular expressions. Fail2ban can be adapted to most log formats, but nginx's log format is the same as apache's, so it's easy :-) I'm sure you can find many tutorials to explain how to install and configure it by searching "fail2ban apache" or "fail2ban nginx".
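As a sketch, a jail for this might look like the following (the filter name and paths are assumptions: recent fail2ban releases ship an nginx-limit-req filter, but check what your version actually provides):

```ini
# /etc/fail2ban/jail.local -- filter name and log path are assumptions
[nginx-limit-req]
enabled  = true
filter   = nginx-limit-req
port     = http,https
logpath  = /var/log/nginx/error.log
findtime = 600
maxretry = 10
bantime  = 3600
```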
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269808,269870#msg-269870 From nginx-forum at forum.nginx.org Tue Sep 27 14:24:03 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 27 Sep 2016 10:24:03 -0400 Subject: Recommended limit_req and limit_conn for location ~ \.php$ {} In-Reply-To: References: Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Mon, Sep 26, 2016 at 07:41:12PM -0400, c0nw0nk wrote: > > Hi there, > > > Whats a good setting that won't effect legitimate decent (I think I > just > > committed a crime calling some of these companies decent?) crawlers > like > > Google, Bing, Baidu, Yandex etc. > > Look at your logs for traffic from Google, Bing, Baidu, Yandex etc. > How > many of these requests do they make per second, when they make > requests? > > Set your limits to allow those. > > Note that any of the clients can change their crawl-rate on your site > at > any time, so any number that worked for you yesterday will not > necessarily > work tomorrow. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx oscaretu . Wrote: ------------------------------------------------------- > Perhaps this can help: > > http://stackoverflow.com/questions/12022429/http-status-code-for-overl > oaded-server > > Another option is a *429 - Too Many Requests* response. > > Defined in RFC6585 - http://tools.ietf.org/html/rfc6585#section-4 > > The spec does not define how the origin server identifies the user, > nor how > it counts requests. > > For example, an origin server that is limiting request rates can do so > based upon counts of requests on a per-resource basis, across the > entire > server, or even among a set of servers. > > Likewise, it might identify the user by its authentication > credentials, or > a stateful cookie. 
> > Also see the Retry-After header in the response. > > Kind regards, > Oscar > > > On Tue, Sep 27, 2016 at 10:59 AM, itpp2012 > > wrote: > > > General rule of thumb is set it as low as possible, as soon as 503's > are > > getting your users upset or resources are getting blocked, then > double the > > values, keep an eye on the logs, double it one more time when > required, > > done. > > > > Posted at Nginx Forum: https://forum.nginx.org/read. > > php?2,269862,269866#msg-269866 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Oscar Fernandez Sierra > oscaretu at gmail.com > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks. This raises the question of why the default status code on limit_conn and limit_req is a 503 instead of a 429. limit_conn_status 503; limit_req_status 503; But under a DoS attack I always feel those values would be better being "444", since the server won't respond and cuts the connection, rather than waste bandwidth on a client who is opening and closing connections fast as a bullet. You tend to get a lot of 499's in your access.log when really you should not even waste your bandwidth output trying to respond to them. Is it worth me changing the default status codes, or keeping them at the defaults? Also, I am going to start out with itpp2012's advice, setting it at a low value and increasing it based on which companies' crawlers require a more brutal crawl rate. Francis Daly's idea is also good, though: since I use "burst=5" instead of "nodelay" on my request limit, I get to find out which crawlers will actually wait for the current request, on which they are experiencing a loading delay, to finish.
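For reference, the overrides discussed above are one directive each (a sketch; 444 relies on nginx's special handling of that code, which closes the connection without sending a response):

```nginx
# Reply 429 Too Many Requests instead of the default 503:
limit_req_status  429;
limit_conn_status 429;
# ...or close the connection without any response during a flood:
# limit_req_status 444;
```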
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269862,269872#msg-269872 From lists at lazygranch.com Tue Sep 27 16:44:16 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 27 Sep 2016 09:44:16 -0700 Subject: 444 return code and rate limiting Message-ID: <20160927164416.5468243.27928.11310@lazygranch.com> I pulled this off the rate limiting thread since I think the 444 return is a good topic all on its own. "But under a DoS attack I always feel those values would be better being "444" since the server won't respond and cut's the connection rather than waste bandwidth on a client who is opening and closing connections fast as a bullet." Looking at the times in my nginx access.log, I don't believe any connection is cut. Rather nginx just doesn't respond. A browser will wait an appropriate amount of time before it decides there is no response, but the code from the hackers just keeps hammering the server. What I don't know is if the 444 return affects the nginx rate limiting code, since you have effectively not returned anything, so what is there to limit? The experiment would be to hammer your webserver from the server itself rather than over the Internet, and see if it does get rate limited. That would take network losses out of the picture. When I get a chance, I'm going to pastebin the logs from that attack I got from the Hong Kong server so the times can be seen. From reallfqq-nginx at yahoo.fr Tue Sep 27 17:06:00 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 27 Sep 2016 19:06:00 +0200 Subject: 444 return code and rate limiting In-Reply-To: <20160927164416.5468243.27928.11310@lazygranch.com> References: <20160927164416.5468243.27928.11310@lazygranch.com> Message-ID: Responding 444 is... a response. It is not anything else than a (non-standard, thus 'unknown', just like the 499 nginx chose to illustrate client-side premature disconnection) HTTP status code as any other.
Some speedup might come from using return instead of doing further processing, but there is still a connection, data sent, data processed and a response sent. Basically, resources are being burned up and are not available for another request/client.

HTTP status codes do not do anything by themselves; they are just part of a protocol that legitimate clients implement. I do not think attackers care much about status codes, other than what they can guess about the server from them. In case of DoS, your only hope is to divert/absorb the flow.

"Blabbering about status codes" as a potential solution to DoS shows a deep misunderstanding of what is being discussed and is pure nonsense.
---
*B. R.*

On Tue, Sep 27, 2016 at 6:44 PM, wrote:

> I pulled this off the rate limiting thread since I think the 444 return is
> a good topic all on its own.
>
> "But under a DoS attack I always feel those values would be better being
> "444" since the server won't respond and cut's the connection rather than
> waste bandwidth on a client who is opening and closing connections fast as
> a bullet."
>
> Looking at the times in my nginx access.log, I don't believe any
> connection is cut. Rather nginx just doesn't respond. A browser will wait
> an appropriate amount of time before it decides there is no response, but
> the code from the hackers just keeps hammering the server.
>
> What I don't know is if the 444 return effects the nginx rate limiting
> coding since you have effectively not returned anything, so what is there
> to limit?
>
> The experiment would be to hammer your webserver from the server itself
> rather than over the Internet, and see if it does get rate limited. That
> would take network losses out of the picture.
>
> When I get a chance, I'm going to pastebin the logs from that attack I got
> from the Hong Kong server so the times can be seen.
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Sep 27 17:09:41 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Tue, 27 Sep 2016 13:09:41 -0400
Subject: Uneven High Load on the Nginx Server
Message-ID: <8847ed534cc07380827045efebca0efa.NginxMailingListEnglish@forum.nginx.org>

We have two Nginx servers acting as caching servers behind an HAProxy load balancer. We are observing high load on one of the servers even though we see an equal number of requests per second reaching both servers from the application.

On the server where the load is high (around 5), response time/latency in delivering content is also high. On that same server, the attached stats module screenshot shows many more requests in "Writing" compared to the other one, where the load is 0.5 and response time/latency is low.

Please help us understand what might be causing the high load and the high number of writing connections on one of the servers.

Active connections: 8619
server accepts handled requests
33204889 33204889 38066647
Reading: 0 Writing: 755 Waiting: 7863

Active connections: 10959
server accepts handled requests
34625312 34625312 39974933
Reading: 0 Writing: 3700 Waiting: 7259

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,269874#msg-269874

From lists at lazygranch.com Tue Sep 27 17:14:14 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Tue, 27 Sep 2016 10:14:14 -0700
Subject: 444 return code and rate limiting
In-Reply-To:
References: <20160927164416.5468243.27928.11310@lazygranch.com>
Message-ID: <20160927171414.5468243.94818.11314@lazygranch.com>

"Your reply does not agree with the documentation."

https://httpstatuses.com/444

Original Message
From: B.R.
Sent: Tuesday, September 27, 2016 10:09 AM To: nginx ML Reply To: nginx at nginx.org Subject: Re: 444 return code and rate limiting Responding 444 is... a response. It is not anything else than a (non-standard, thus 'unknown', just like 499 nginx chose to illustrate client-side premature disconnection) HTTP status code as any other. Some speedup might come from using return instead of doing further processing, but there is still a connection, data sent, data processed and response sent. Basically resources are being burned up and not available for another request/client. HTTP status code do not do anything by themselves, they are just part of a protocol legitimate clients implement. I do not think attackers care much about status code other than what they guess about the server. In case of DoS, your only hope is to divert/absorb the flow. ?Blabbering about status codes? as a potential solution to DoS shows deep misunderstanding of what is being discussed and is pure nonsense. --- B. R. On Tue, Sep 27, 2016 at 6:44 PM, wrote: I pulled this off the rate limiting thread since I think the 444 return is a good topic all on its own. "But under a DoS attack I always feel those values would be better being "444" since the server won't respond and cut's the connection rather than waste bandwidth on a client who is opening and closing connections fast as a bullet.?" Looking at the times in my nginx access.log, I don't believe any connection is cut. Rather nginx just doesn't respond. A browser will wait an appropriate amount of time before it decides there is no response, but the code from the hackers just keeps hammering the server.? What I don't know is if the 444 return effects the nginx rate limiting coding since you have effectively not returned anything, so what is there to limit? The experiment would be to hammer your webserver from the server itself rather than over the Internet, and see if it does get rate limited. That would take network losses out of the picture.? 
When I get a chance, I'm going to pastebin the logs from that attack I got from the Hong Kong server so the times can be seen.? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Sep 27 17:57:37 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Sep 2016 20:57:37 +0300 Subject: Uneven High Load on the Nginx Server In-Reply-To: <8847ed534cc07380827045efebca0efa.NginxMailingListEnglish@forum.nginx.org> References: <8847ed534cc07380827045efebca0efa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160927175737.GL73038@mdounin.ru> Hello! On Tue, Sep 27, 2016 at 01:09:41PM -0400, anish10dec wrote: > We are having two Nginx Server acting as Caching Server behind haproxy > loadbalancer. We are observing a high load on one of the server though we > see equal number of requests coming on the server from application per sec. > > We see that out of two server on which load is high i.e around 5 , response > time /latency is high in delivering the content . On same server attached > stats module screenshot shows more number of requests in "Writing" as > comapred to other one on which load is 0.5 and response time /latency is > also low . > > Please help what might be causing high load and high number of writing on > one of server. > > Active connections: 8619 > server accepts handled requests > 33204889 33204889 38066647 > Reading: 0 Writing: 755 Waiting: 7863 > > > Active connections: 10959 > server accepts handled requests > 34625312 34625312 39974933 > Reading: 0 Writing: 3700 Waiting: 7259 When nginx is reading request headers from a connection, it will be counted as "reading". Once the request header is completely read, the connection will be counted as "writing" till the request is complete. 
That is, there are various factors which can affect the number of writing connections, in particular:

- the cache is not working for some reason, causing more requests (compared to your 2nd server) to be passed to upstream;
- the upstream server configured is slow;
- the disk subsystem is slow/overloaded and can't cope with the load, causing nginx to spend more time reading cached responses from disk.

Given that you also see a load of 5, and assuming you are using Linux, I would suggest that the problem is the disk subsystem, as processes waiting for disk are counted by Linux in the load average. You may want to take a closer look at your disk subsystem.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Tue Sep 27 18:20:31 2016
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Tue, 27 Sep 2016 14:20:31 -0400
Subject: Uneven High Load on the Nginx Server
In-Reply-To: <20160927175737.GL73038@mdounin.ru>
References: <20160927175737.GL73038@mdounin.ru>
Message-ID: <6d81ad4d462933f71aab4f57ae0b392d.NginxMailingListEnglish@forum.nginx.org>

Thanks Maxim

We enabled logging of $upstream_response_time on both servers, and it shows response times of less than a second for upstream requests, so it doesn't seem to be an issue with the upstream server. Even for requests which are HITs, the response time on the server where "Writing" is high varies from 10 to 60 seconds and more, while on the other server it is less than 2 seconds whether the request is a MISS or a HIT.

Once we restart the nginx service the load drops, and so does the response time, but the problem comes back within about 5 minutes. And if we stop the nginx service on that server, its load decreases and the same load, with the high "Writing" value, shifts to the other server.

We have the cache directory mounted on SSD as well as RAM. We are using CentOS 6.5 with both IPv4 and IPv6 enabled on the server; application/user traffic connects directly to the server over its IPv4/IPv6 IP.
Output of iostat:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.00    0.60    0.00    0.00   99.22

Device:    tps      Blk_read/s  Blk_wrtn/s  Blk_read      Blk_wrtn
sda        11.75    24.82       402.35      659063678     10683772144
sdb        454.37   2180.70     3663.85     57905655098   97288688008
sdc        565.74   1771.80     4512.49     47047922584   119823249968
dm-0       0.00     0.00        0.00        5136          0
dm-1       0.05     0.40        0.30        10621106      7853376
dm-2       1040.29  3952.19     8176.34     104945070786  217111937976
dm-3       1.01     1.40        8.06        37053274      214028160
dm-4       33.97    2.39        271.65      63438578      7213330456
dm-5       15.55    20.55       122.34      545791730     3248559664

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.58    0.00    4.52    0.02    0.00   94.88

Device:  rrqm/s  wrqm/s  r/s     w/s      rsec/s    wsec/s    avgrq-sz  avgqu-sz  await  svctm  %util
sda      0.00    181.00  0.00    34.60    0.00      1724.80   49.85     0.01      0.17   0.09   0.30
sdb      0.00    68.40   46.40   781.40   10710.40  6798.40   21.15     0.31      0.37   0.03   2.88
sdc      0.00    26.00   59.80   552.60   13425.60  4628.80   29.48     0.28      0.46   0.05   3.22
dm-0     0.00    0.00    0.00    0.00     0.00      0.00      0.00      0.00      0.00   0.00   0.00
dm-1     0.00    0.00    0.00    0.00     0.00      0.00      0.00      0.00      0.00   0.00   0.00
dm-2     0.00    0.00    106.20  1428.40  24136.00  11427.20  23.17     0.79      0.52   0.04   6.00
dm-3     0.00    0.00    0.00    0.20     0.00      1.60      8.00      0.00      3.00   3.00   0.06
dm-4     0.00    0.00    0.00    212.00   0.00      1696.00   8.00      0.13      0.61   0.00   0.08
dm-5     0.00    0.00    0.00    3.40     0.00      27.20     8.00      0.00      0.47   0.47   0.16

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269874,269878#msg-269878

From nginx-forum at forum.nginx.org Tue Sep 27 18:42:23 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 27 Sep 2016 14:42:23 -0400
Subject: 444 return code and rate limiting
In-Reply-To: <20160927171414.5468243.94818.11314@lazygranch.com>
References: <20160927171414.5468243.94818.11314@lazygranch.com>
Message-ID:

It is a response; by the time the 444 is served it is too late. A true DDoS is not about what the server outputs but about what it can receive: you can't expect incoming traffic that amounts to 600Gbps to be stopped by a 1Gbps port. It does not work like that. Nginx is an application, and preventing any form of DoS at the application level is a bad idea; it needs to be stopped at the router level, before it hits the server and consumes your 1Gbps of receiving capacity.

Adding IP address denies for DDoS to the Nginx .conf file at the application level is also too late: the connection has already been made, and whatever request headers/data (100kb or less) the client sent has already been received on your 1Gig port, so it is already consuming your connection.

The only scenario I can think of where returning 444 is a good idea is a single IP flooding you ("DoS"), because then you're not increasing your port's output bandwidth by responding to someone who is opening and closing a connection. But in this scenario it's more like they are trying to make your server DoS itself, by making it max out its own outgoing bandwidth on their connection alone so that nobody else can receive anything.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269873,269879#msg-269879

From lists at lazygranch.com Tue Sep 27 19:12:47 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Tue, 27 Sep 2016 12:12:47 -0700
Subject: 444 return code and rate limiting
In-Reply-To:
References: <20160927171414.5468243.94818.11314@lazygranch.com>
Message-ID: <20160927191247.5468243.85710.11322@lazygranch.com>
From: c0nw0nk Sent: Tuesday, September 27, 2016 11:42 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: 444 return code and rate limiting It is a response by the time the 444 is served it is to late a true DDoS is not about what the server outputs its about what it can receive you can't expect incoming traffic that amounts to 600Gbps to be prevented by a 1Gbps port it does not work like that Nginx is an Application preventing any for of DoS at an application level is a bad idea it needs to be stopped at a router level before it hits the server to consume your receiving capacity of 1Gbps. Adding IP address denies for DDoS to the Nginx .conf file at the application level is to late still also the connection has been made the request headers / data of 100kb or less what ever the client sent has been received on your 1Gig port its already consuming your connection. The only scenario I can think of where returning 444 is a good idea is under a single IP flooding "DoS" because then your not increasing your ports bandwidth output responding to someone who is opening and closing a connection, But in this scenario its more like they are trying to make your server DoS itself by making it max out its own outgoing bandwidth to just their connection alone so nobody else can receive anything. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269873,269879#msg-269879

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Tue Sep 27 19:28:16 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Tue, 27 Sep 2016 15:28:16 -0400
Subject: 444 return code and rate limiting
In-Reply-To: <20160927191247.5468243.85710.11322@lazygranch.com>
References: <20160927191247.5468243.85710.11322@lazygranch.com>
Message-ID: <5c78490cbc70bd11d0c546ed4a8e4b34.NginxMailingListEnglish@forum.nginx.org>

What I would say to do is take the IPs from your toolkit (or whatever you are using to read your access.log), and for those that trigger and spam the 503 error within milliseconds (or whatever range you choose), make an API call to add those IPs to be blocked at the router level.

With CloudFlare you can have CloudFlare block those IPs before they reach your server, like so:
https://api.cloudflare.com/#user-level-firewall-access-rule-properties

If you use OVH, you can write the IPs that trigger 503s to OVH's firewall:
https://api.ovh.com/console/#/ip/{ip}/firewall#POST

This should be of interest too:
https://twitter.com/olesovhcom/status/779297257199964160

Anything the firewall does not catch, your server now has a way to report to your router/firewall, so those requests no longer even hit the machine.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269873,269881#msg-269881

From patsy46hobartwyvc at aol.com Tue Sep 27 22:16:38 2016
From: patsy46hobartwyvc at aol.com (xcvg xf hetj)
Date: Tue, 27 Sep 2016 18:16:38 -0400
Subject: eagfaz
Message-ID: <1576db89e54-228-2372@webprd-m21.mail.aol.com>

azeg?"(hyh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jhernandez at azeus.com Wed Sep 28 09:34:58 2016
From: jhernandez at azeus.com (jhernandez)
Date: Wed, 28 Sep 2016 17:34:58 +0800
Subject: Inquiry regarding support for OpenSSL 1.0.2i
Message-ID: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com>

Hello,

We've recently received a notification regarding a vulnerability in OpenSSL: OCSP Status Request extension unbounded memory growth (CVE-2016-6304). This is fixed in OpenSSL 1.0.2i.

We're running an Nginx proxy server on Windows 2012 R2 and are currently using Nginx 1.9.9 with OpenSSL 1.0.2e. We do plan to upgrade to the latest stable nginx-1.10.1, but it seems this version for Windows was compiled with OpenSSL 1.0.2h.

Any idea when a new stable or mainline version will come out with OpenSSL 1.0.2i support?

Alternatively, we're also looking to build a custom 1.10.1 with the OpenSSL 1.0.2i library using the instructions here: http://nginx.org/en/docs/howto_build_on_win32.html. But we're not sure if 1.10.1 would support OpenSSL 1.0.2i. Has anyone tried this approach before?

Thanks!
-Patrick Hernandez
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From JEDC at ramboll.com Wed Sep 28 11:01:36 2016
From: JEDC at ramboll.com (Jens Dueholm Christensen)
Date: Wed, 28 Sep 2016 11:01:36 +0000
Subject: Static or dynamic content
Message-ID:

Hi

I've got an issue where nginx (see below for version/compile options) returns a 405 (not allowed) to clients for POST requests when the upstream proxy returns a 503. I know nginx doesn't allow POSTs to static content, but since all content (even static js, png etc) is served by upstream, I cannot really understand why this happens, since everything from upstream ought to be dynamic.
My config is basically this:

location / {
    root /somewhere/;
    try_files /offline.html @xact;
    add_header Cache-Control "no-cache, max-age=0, no-store, must-revalidate";
}

location @xact {
    proxy_pass http://127.0.0.1:4431;
    proxy_redirect default;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_read_timeout 1200;
}

The server behind 127.0.0.1:4431 is HAProxy, which distributes requests to a number of backend instances. All content is served from those instances, and nginx does not have access to content from anywhere else.

Once in a blue moon HAProxy does not have any available server to send a request to, and it generates an internal 503 "Service unavailable" page. When the request method is POST, nginx transforms this into a 405 "not allowed", which - as far as I can understand from a number of posts found while googling - is to be expected when POSTing to a static resource.

An individual backend instance can also return a 503 error if it encounters a problem servicing a request. That 503 response is sent back to HAProxy, which passes it along to nginx, which returns the 503 to the client (and it is NOT converted into a 405!). So I know that nginx doesn't translate all 503s into 405s for POSTed requests when the 503 is generated by a backend instance.

This leads me to think that HAProxy somehow does not provide the correct headers in its 503 error page for nginx to treat the response as dynamic. The internally created 503 page that HAProxy generates is this (since it's sent more or less "as is" on the TCP socket, it contains the HTTP lingo - see https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-errorfile for details if questions arise):

---
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
---

What headers do I need to have HAProxy add in order for nginx not to turn the 503 into a 405, and to see that the request (or at least the response) was to/from dynamic content?

$ nginx -V
nginx version: nginx/1.8.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/local --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=nginx --group=nginx --with-http_realip_module --with-http_ssl_module --with-http_stub_status_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-http_ssi_module

Regards,
Jens Dueholm Christensen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed Sep 28 12:12:46 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Wed, 28 Sep 2016 08:12:46 -0400
Subject: Inquiry regarding support for OpenSSL 1.0.2i
In-Reply-To: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com>
References: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com>
Message-ID: <7a3c9ff65e329f34a68f8e224ffa7c74.NginxMailingListEnglish@forum.nginx.org>

Try this one http://nginx-win.ecsds.eu/ with 1.0.2j

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269889,269898#msg-269898

From vbart at nginx.com Wed Sep 28 12:30:48 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Wed, 28 Sep 2016 15:30:48 +0300 Subject: Inquiry regarding support for OpenSSL 1.0.2i In-Reply-To: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com> References: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com> Message-ID: <2262245.hhmPembsZh@vbart-workstation> On Wednesday 28 September 2016 17:34:58 jhernandez wrote: > Hello, > > We've recently received a notification regarding a vulnerability in > OpenSSL: > OCSP Status Request extension unbounded memory growth (CVE-2016-6304) > This is fixed in OpenSSL v1.0.2i > > We're running an Nginx proxy server on Windows 2012 R2 and are currently > using Nginx 1.9.9 - with OpenSSL 1.0.2e > We do plan to upgrade to the latest stable nginx-1.10.1, but it seems > this version for Windows was compiled with OpenSSL 1.0.2*h*. > > Any idea when a new stable or mainline version will come out with > OpenSSL 1.0.2i support ? > Alternatively, we're also looking to build a custom 1.10.1 with the > OpenSSL 1.0.2i library with the instructions here: > http://nginx.org/en/docs/howto_build_on_win32.html > But we're not sure if 1.10.1 would support OpenSSL 1.0.2i. Has anyone > tried this approach before ? > http://mailman.nginx.org/pipermail/nginx/2016-September/051914.html wbr, Valentin V. Bartenev From yushang at outlook.com Wed Sep 28 15:41:38 2016 From: yushang at outlook.com (shang yu) Date: Wed, 28 Sep 2016 15:41:38 +0000 Subject: nginx can't spawn sub process Message-ID: Hi dear all, I compiled nginx 1.8.1 from source on Windows XP . when I run it , it can spawn sub process but the sub process crashed immediately . BTW , because I did not install cygwin on my system . 
I create the needed files ngx_auto_headers.h, ngx_auto_config.h and ngx_modules.c myself.

ngx_auto_headers.h is:

#ifndef _NGX_AUTO_H
#define _NGX_AUTO_H

#ifndef NGX_WIN32
#define NGX_WIN32 1
#endif

#endif

ngx_auto_config.h is:

#ifndef _NGX_AUTO_CFG
#define _NGX_AUTO_CFG

#define NGX_CONFIGURE "manual"
#define NGX_WIN32 1
#define NGX_CPU_CACHE_LINE 32
#define NGX_CONF_PATH "conf/nginx.conf"
#define NGX_ERROR_LOG_PATH "logs/error.log"
#define NGX_PID_PATH "logs/nginx.pid"
#define NGX_LOCK_PATH "logs/nginx.lock"
#define NGX_HTTP_LOG_PATH "logs/access.log"
#define NGX_HTTP_CLIENT_TEMP_PATH "temp/client_body_temp"
#define NGX_HTTP_PROXY_TEMP_PATH "temp/proxy_temp"
#define NGX_HTTP_FASTCGI_TEMP_PATH "temp/fastcgi_temp"
#define NGX_HTTP_UWSGI_TEMP_PATH "temp/uwsgi_temp"
#define NGX_HTTP_SCGI_TEMP_PATH "temp/scgi_temp"
#define NGX_HAVE_SELECT 1
#define NGX_HAVE_AIO 1
#define NGX_HAVE_IOCP 1
// http modules

#endif

ngx_modules.c is:

#include <ngx_config.h>
#include <ngx_core.h>

extern ngx_module_t ngx_core_module;
extern ngx_module_t ngx_errlog_module;
extern ngx_module_t ngx_conf_module;
extern ngx_module_t ngx_events_module;
extern ngx_module_t ngx_event_core_module;
extern ngx_module_t ngx_http_module;
extern ngx_module_t ngx_http_core_module;
extern ngx_module_t ngx_http_log_module;
extern ngx_module_t ngx_http_upstream_module;
extern ngx_module_t ngx_http_static_module;
extern ngx_module_t ngx_http_index_module;

ngx_module_t *ngx_modules[] = {
    &ngx_core_module,
    &ngx_errlog_module,
    &ngx_conf_module,
    &ngx_events_module,
    &ngx_event_core_module,
    // http modules
    &ngx_http_module,
    &ngx_http_core_module,
    &ngx_http_log_module,
    &ngx_http_upstream_module,
    &ngx_http_static_module,
    &ngx_http_index_module,
    NULL
};

What is missing that may cause the crash?

Many thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx at 2xlp.com Wed Sep 28 15:50:44 2016
From: nginx at 2xlp.com (Jonathan Vanasco)
Date: Wed, 28 Sep 2016 11:50:44 -0400
Subject: Inquiry regarding support for OpenSSL 1.0.2i
In-Reply-To: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com>
References: <2167577b-f5e2-df50-1b5c-71199fc38b84@azeus.com>
Message-ID: <9728673B-A79A-4C51-89AF-CBE02D074BC4@2xlp.com>

On Sep 28, 2016, at 5:34 AM, jhernandez wrote:

> But we're not sure if 1.10.1 would support OpenSSL 1.0.2i. Has anyone tried this approach before ?

FYI, the OpenSSL 1.1.0 and 1.0.2 branches had security fixes on 9/26 to their 9/22 releases. The current releases are:

1.0.2j
1.1.0b

From mdounin at mdounin.ru Wed Sep 28 15:53:49 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 28 Sep 2016 18:53:49 +0300
Subject: nginx can't spawn sub process
In-Reply-To:
References:
Message-ID: <20160928155349.GO73038@mdounin.ru>

Hello!

On Wed, Sep 28, 2016 at 03:41:38PM +0000, shang yu wrote:

> I compiled nginx 1.8.1 from source on Windows XP . when I run it
> , it can spawn sub process but the sub process crashed
> immediately .
>
> BTW , because I did not install cygwin on my system . I create
> the needed files ngx_auto_headers.h ngx_auto_config.h and
> ngx_modules.c

You don't need Cygwin, but you have to install MSYS to properly run configure. By trying to create the required files yourself you are looking for trouble.

See here for the recommended steps if you want to build nginx on Windows yourself: http://nginx.org/en/docs/howto_build_on_win32.html

--
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Wed Sep 28 16:44:45 2016
From: nginx-forum at forum.nginx.org (hotwirez)
Date: Wed, 28 Sep 2016 12:44:45 -0400
Subject: How to enable OCSP stapling when default server is self-signed?
In-Reply-To: <20150413115739.GA88631@mdounin.ru>
References: <20150413115739.GA88631@mdounin.ru>
Message-ID:

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
> > On Sun, Apr 12, 2015 at 12:21:19PM -0400, numroo wrote: > > > >> Yes, I ran the s_client command multiple times to account for the > nginx > > >> responder delay. I was testing OCSP stapling on just one of my > domains. > > >> Then I read that the 'default_server' SSL server also has to have > OCSP > > >> stapling enabled for vhost OCSP stapling to work: > > >> > > >> https://gist.github.com/konklone/6532544 > > > > > >There is no such a requirement. > > > > I have the same problem here. > > > > openssl s_client -servername ${WEBSITE} -connect ${WEBSITE}:443 > -tls1 > > -tlsextdebug -status|grep OCSP > > > > Always returns the following on all virtual hosts no matter on how > many > > times I try: > > OCSP response: no response sent > > > > But as soon that I disable my self-signed default host and restart > Nginx, I > > get a successfull repsonse on the second request on all CA signed > hosts: > > OCSP Response Status: successful (0x0) > > As previously suggested, tests with trivial config and debugging > log may help to find out what goes wrong. > I wanted to mention that I've run into this issue as well when trying to enable OCSP stapling, where I have a default_deny SSL server that has a self-signed certificate where I don't want to use OCSP stapling, and other actual server entries for actual sites where I want OCSP stapling enabled. If I enable stapling for only the real sites, it appears to be silently disabled. If I enable it for all server instances, it notices that the default server uses a self-signed certificate and disables stapling. If I remove the default server, OCSP stapling works for the remaining sites, but then accessing the site using a means other than the correct server name provides the SSL certificate for one of the servers. 
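To make the setup concrete, the shape of the configuration being described is roughly this (certificate paths, server names and resolver are placeholders, not anyone's real config):

```nginx
server {
    listen 443 ssl default_server;
    server_name _;
    # Self-signed fallback certificate; stapling is intentionally off here.
    ssl_certificate     /etc/nginx/ssl/self-signed.crt;
    ssl_certificate_key /etc/nginx/ssl/self-signed.key;
    return 444;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # OCSP stapling for the CA-signed site only.
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/ca-chain.pem;
    resolver 127.0.0.1;
}
```

The question under discussion is whether the stapling directives in the second server {} should be affected at all by the self-signed default_server.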
I tried enabling the debug log, but there are no [debug] entries containing anything about OCSP in any of the above instances (only a [warn] entry is added when the self-signed certificate is configured for the default server with OCSP stapling enabled).

It would seem to me that for a parameter in one server {} block to be affected by the parameter's value in other server {} blocks is a bug.

I apologize for coming to the show late; I hadn't cared about optimizing SSL as much until more recently, and I haven't been able to find anyone discussing this issue aside from here (and in various how-tos generally describing the behavior I have confirmed through testing).

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,257833,269916#msg-269916

From reallfqq-nginx at yahoo.fr Wed Sep 28 16:50:12 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 28 Sep 2016 18:50:12 +0200
Subject: 444 return code and rate limiting
In-Reply-To: <20160927191247.5468243.85710.11322@lazygranch.com>
References: <20160927171414.5468243.94818.11314@lazygranch.com> <20160927191247.5468243.85710.11322@lazygranch.com>
Message-ID:

If you are to quote what you call documentation, please use a real one:
http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names

What I said before remains valid: accepting the connection, reading the request and writing the response use resources, by design, even if you then close the connection. When dealing with DoS, I suspect Web servers and WAFs (even worse, trying to turn a Web server into a WAF!) are inefficient compared to lower-level tools. Use the tools best suited to the job...

DoS is all about processing capacity vs incoming flow. Augmenting the processing consumption reduces capacity.

Issuing a simple return costs less than using maps, which in turn is better than processing more stuff. If your little collection sustains your targeted incoming flow then you win; otherwise you lose. Blatantly obvious assertions if you ask me...
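As a concrete example of the "simple return" end of that spectrum (the address range below is invented for illustration):

```nginx
# geo is evaluated cheaply per connection, before location processing.
geo $banned {
    default        0;
    203.0.113.0/24 1;   # example offender range
}

server {
    listen 80;
    server_name example.com;

    # Close the connection without writing a response for banned clients.
    # Note the accept and request read have still cost resources by then.
    if ($banned) {
        return 444;
    }

    # ... regular site configuration ...
}
```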
I do not know what you are trying to achieve here. Neither do you, as it seems, or you would not be asking questions about it. Good luck, though.
---
*B. R.*

On Tue, Sep 27, 2016 at 9:12 PM, wrote:

> If you dig through some old posts, it was established that the deny
> feature of nginx isn't very effective at limiting network activity. I deny
> at the firewall.
>
> What remains is if you should deny dynamically or statically.
>
> Original Message
> From: c0nw0nk
> Sent: Tuesday, September 27, 2016 11:42 AM
> To: nginx at nginx.org
> Reply To: nginx at nginx.org
> Subject: Re: 444 return code and rate limiting
>
> It is a response by the time the 444 is served it is to late a true DDoS is
> not about what the server outputs its about what it can receive you can't
> expect incoming traffic that amounts to 600Gbps to be prevented by a 1Gbps
> port it does not work like that Nginx is an Application preventing any for
> of DoS at an application level is a bad idea it needs to be stopped at a
> router level before it hits the server to consume your receiving capacity
> of 1Gbps.
>
> Adding IP address denies for DDoS to the Nginx .conf file at the
> application level is to late still also the connection has been made the
> request headers / data of 100kb or less what ever the client sent has been
> received on your 1Gig port its already consuming your connection.
>
> The only scenario I can think of where returning 444 is a good idea is
> under a single IP flooding "DoS" because then your not increasing your
> ports bandwidth output responding to someone who is opening and closing a
> connection, But in this scenario its more like they are trying to make your
> server DoS itself by making it max out its own outgoing bandwidth to just
> their connection alone so nobody else can receive anything.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,269873,269879#msg-269879 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Wed Sep 28 17:30:35 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 28 Sep 2016 10:30:35 -0700 Subject: 444 return code and rate limiting In-Reply-To: References: <20160927171414.5468243.94818.11314@lazygranch.com> <20160927191247.5468243.85710.11322@lazygranch.com> Message-ID: <20160928173035.5468243.49920.11422@lazygranch.com> An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Wed Sep 28 17:58:24 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 28 Sep 2016 19:58:24 +0200 Subject: 444 return code and rate limiting In-Reply-To: <20160928173035.5468243.49920.11422@lazygranch.com> References: <20160927171414.5468243.94818.11314@lazygranch.com> <20160927191247.5468243.85710.11322@lazygranch.com> <20160928173035.5468243.49920.11422@lazygranch.com> Message-ID: Keep in mind a terminated connection (444) is not a valid HTTP response. Abruptly terminated connections may also be caused by broken middleware boxes or other things interrupting the connection. Modern browsers have retry mechanisms built in to safeguard against transient connection issues, for example, returning 444 to a Firefox client will cause it to retry the request up to 10 (!) times. This is the opposite of what you want in a rate limited scenario. Stick with 429 or 503. On Wed, Sep 28, 2016 at 7:30 PM, wrote: > If you just reply to these hackers, you will be "pinged" until oblivion. I > choose to fight, you don't. I have a different philosophy. 
I log the
> offenders and, if from a colo, VPS, etc., they can enjoy their lifetime ban.
> Machines are not eyeballs.
>
> Drop the map? So do I stop looking for bad referrals and bad user agents
> as well? Maybe, just maybe, nginx was given these tools to be, well, used.
>
> Questions? I really don't have any burning questions since I don't expect
> to use 444 as rate limiting. My only question was does it actually work in
> limiting, as the other poster suggested.
>
> I assume you have evidence of the CPU cycles used by the map module. I
> mean, you wouldn't just make stuff up, right?
>
> Running uptime, my server peaks at around 0.8, and usually runs between
> 0.5 to 0.6. I don't see a problem here.
>
> Oh, and you can bet those clowns probing for WordPress vulnerabilities
> today will be employing the next script kiddie to come along in the future.
>
> *From: *B.R.
> *Sent: *Wednesday, September 28, 2016 9:57 AM
> *To: *nginx ML
> *Reply To: *nginx at nginx.org
> *Subject: *Re: 444 return code and rate limiting
>
> If you are going to quote what you call documentation, please use some real one:
> http://nginx.org/en/docs/http/request_processing.html#how_
> to_prevent_undefined_server_names
>
> What I said before remains valid: accepting the connection, reading the
> request & writing the response use resources, by design, even if you then
> close the connection.
> When dealing with DoS, I suspect Web servers and WAFs (even worse, trying
> to transform a Web server into a WAF!) are inefficient compared to
> lower-level tools.
> Use the tools best suited to the job...
>
> DoS is all about processing capacity vs incoming flow. Augmenting the
> processing consumption reduces capacity.
>
> Issuing a simple return costs less than using maps, which in turn is better
> than processing more stuff.
> If your little collection sustains your targeted incoming flow then you
> win, otherwise you lose.
> Blatantly obvious assertions if you ask me...
>
> I do not know what you are trying to achieve here. Neither do you, it
> seems, or you would not be asking questions about it.
> Good luck, though.
> ---
> *B. R.*
>
> On Tue, Sep 27, 2016 at 9:12 PM,  wrote:
>
>> If you dig through some old posts, it was established that the deny
>> feature of nginx isn't very effective at limiting network activity. I deny
>> at the firewall.
>>
>> What remains is whether you should deny dynamically or statically.
>>
>> Original Message
>> From: c0nw0nk
>> Sent: Tuesday, September 27, 2016 11:42 AM
>> To: nginx at nginx.org
>> Reply To: nginx at nginx.org
>> Subject: Re: 444 return code and rate limiting
>>
>> It is a response; by the time the 444 is served it is too late. A true DDoS
>> is not about what the server outputs, it's about what it can receive. You
>> can't expect incoming traffic that amounts to 600Gbps to be prevented by a
>> 1Gbps port; it does not work like that. Nginx is an application; preventing
>> any form of DoS at an application level is a bad idea. It needs to be
>> stopped at a router level before it hits the server and consumes your
>> receiving capacity of 1Gbps.
>>
>> Adding IP address denies for DDoS to the Nginx .conf file at the
>> application level is too late as well: the connection has been made, and
>> the request headers / data of 100kb or less, whatever the client sent, has
>> been received on your 1Gig port; it is already consuming your connection.
>>
>> The only scenario I can think of where returning 444 is a good idea is
>> under a single IP flooding "DoS", because then you're not increasing your
>> port's bandwidth output responding to someone who is opening and closing a
>> connection. But in this scenario it's more like they are trying to make
>> your server DoS itself by making it max out its own outgoing bandwidth to
>> just their connection alone so nobody else can receive anything.
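A minimal sketch of the 429-based approach Richard suggests above, using nginx's limit_req; the zone name, rate, burst, and root path are illustrative assumptions, not taken from the thread:

```nginx
# (in the http{} block)
# Shared-memory zone keyed by client address: 10 requests/second each.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Allow short bursts, reject the excess immediately.
        limit_req zone=perip burst=20 nodelay;
        # Answer with 429 Too Many Requests (nginx >= 1.3.15) instead of
        # the default 503, and instead of closing the connection with 444,
        # which some browsers treat as a transient error and retry.
        limit_req_status 429;
        root /var/www/html;
    }
}
```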
>> >> Posted at Nginx Forum: https://forum.nginx.org/read.p >> hp?2,269873,269879#msg-269879 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Wed Sep 28 19:00:46 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 28 Sep 2016 12:00:46 -0700 Subject: nginx reverse proxy causing TCP queuing spikes In-Reply-To: References: Message-ID: >> I've been struggling with http response time slowdowns and >> corresponding spikes in my TCP Queuing graph in munin. I'm using >> nginx as a reverse proxy to apache which then hands off to my backend, >> and I think the proxy_read_timeout line in my nginx config is at least >> contributing to the issue. Here is all of my proxy config: >> >> proxy_read_timeout 60m; >> proxy_pass http://127.0.0.1:8080; >> >> I think this means I'm leaving connections open for 60 minutes after >> the last server response which sounds like a bad thing. However, some >> of my admin pages need to run for a long time while they wait for the >> server-side stuff to execute. I only use the proxy_read_timeout >> directive on my admin locations and I'm experiencing the TCP spikes >> and http slowdowns during the exact hours that the admin stuff is in >> use. > > > It turns out this issue was due to Odoo which also runs behind nginx > in a reverse proxy configuration on my machine. Has anyone else had > that kind of trouble with Odoo? I do think this is related to 'proxy_read_timeout 60m;' leaving too many connections open. 
Can I somehow allow pages to load for up to 60m but not bog my server down with too many connections? - Grant From rpaprocki at fearnothingproductions.net Wed Sep 28 19:16:52 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 28 Sep 2016 12:16:52 -0700 Subject: nginx reverse proxy causing TCP queuing spikes In-Reply-To: References: Message-ID: <0E696EE8-7810-4C7A-9A05-EBD3D38DA501@fearnothingproductions.net> > I do think this is related to 'proxy_read_timeout 60m;' leaving too > many connections open. Can I somehow allow pages to load for up to > 60m but not bog my server down with too many connections? Pardon me, but why on earth do you have an environment in which an HTTP request can take an hour? That seems like a serious abuse of the protocol. Keeping an HTTP request open means keeping the associated TCP connection open as well. If you have connections open for an hour, you're probably going to run into concurrency issues. From lists at lazygranch.com Wed Sep 28 19:58:33 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 28 Sep 2016 12:58:33 -0700 Subject: 444 return code and rate limiting In-Reply-To: References: <20160927171414.5468243.94818.11314@lazygranch.com> <20160927191247.5468243.85710.11322@lazygranch.com> <20160928173035.5468243.49920.11422@lazygranch.com> Message-ID: <20160928195833.5468243.67734.11442@lazygranch.com> An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 28 21:14:22 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Sep 2016 00:14:22 +0300 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: References: <20150413115739.GA88631@mdounin.ru> Message-ID: <20160928211422.GU73038@mdounin.ru> Hello! On Wed, Sep 28, 2016 at 12:44:45PM -0400, hotwirez wrote: [...] 
> I wanted to mention that I've run into this issue as well when trying to
> enable OCSP stapling, where I have a default_deny SSL server that has a
> self-signed certificate where I don't want to use OCSP stapling, and other
> actual server entries for actual sites where I want OCSP stapling enabled.
> If I enable stapling for only the real sites, it appears to be silently
> disabled. If I enable it for all server instances, it notices that the
> default server uses a self-signed certificate and disables stapling. If I
> remove the default server, OCSP stapling works for the remaining sites, but
> then accessing the site using a means other than the correct server name
> provides the SSL certificate for one of the servers.

Problems with OCSP stapling if it is disabled in the default
server{} block were traced to an OpenSSL bug, silently fixed in
1.0.0m/1.0.1g/1.0.2.  See here for details:

https://trac.nginx.org/nginx/ticket/810

If you see the problem it means you have to update the OpenSSL
library you are using.

-- 
Maxim Dounin
http://nginx.org/

From lists at viaduct-productions.com  Wed Sep 28 22:12:39 2016
From: lists at viaduct-productions.com (Viaduct Lists)
Date: Wed, 28 Sep 2016 18:12:39 -0400
Subject: localhost on Passenger delayed serve
Message-ID: <8512E659-C956-4654-80F9-D77B92C7A95B@viaduct-productions.com>

Hi folks.  A localhost passenger domain (hq.local) isn't showing up on my local Safari.  It's pointed properly in the hosts file and Firefox takes about 10 seconds to show it, but Safari just waits for it.  Curl also takes about 10 seconds to show the page, but it does show up using:

curl --basic hq.local

This is the log entry I found during one request:

2016/09/28 17:59:27 [notice] 62#0: signal 1 (SIGHUP) received, reconfiguring
2016/09/28 17:59:27 [notice] 62#0: reconfiguring
[ 2016-09-28 17:59:27.5342 65372/0x700000104000 age/Ust/UstRouterMain.cpp:422 ]: Signal received.  Gracefully shutting down...
(send signal 2 more time(s) to force shutdown) [ 2016-09-28 17:59:27.5342 65370/0x70000018a000 age/Cor/CoreMain.cpp:532 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown) [ 2016-09-28 17:59:27.5343 65370/0x7fff75dbf000 age/Cor/CoreMain.cpp:901 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected... [ 2016-09-28 17:59:27.5343 65372/0x7fff75dbf000 age/Ust/UstRouterMain.cpp:492 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected... [ 2016-09-28 17:59:27.5346 65372/0x700000104000 Ser/Server.h:464 ]: [UstRouter] Shutdown finished [ 2016-09-28 17:59:27.5346 65372/0x70000020a000 Ser/Server.h:816 ]: [UstRouterApiServer] Freed 0 spare client objects [ 2016-09-28 17:59:27.5347 65372/0x70000020a000 Ser/Server.h:464 ]: [UstRouterApiServer] Shutdown finished [ 2016-09-28 17:59:27.5348 65370/0x7000009ba000 Ser/Server.h:816 ]: [ApiServer] Freed 0 spare client objects [ 2016-09-28 17:59:27.5349 65370/0x7000009ba000 Ser/Server.h:464 ]: [ApiServer] Shutdown finished [ 2016-09-28 17:59:27.5352 65370/0x7000007ae000 Ser/Server.h:816 ]: [ServerThr.7] Freed 128 spare client objects [ 2016-09-28 17:59:27.5353 65370/0x7000007ae000 Ser/Server.h:464 ]: [ServerThr.7] Shutdown finished [ 2016-09-28 17:59:27.5352 65370/0x7000005a2000 Ser/Server.h:816 ]: [ServerThr.5] Freed 128 spare client objects [ 2016-09-28 17:59:27.5353 65370/0x7000008b4000 Ser/Server.h:816 ]: [ServerThr.8] Freed 128 spare client objects [ 2016-09-28 17:59:27.5353 65370/0x7000005a2000 Ser/Server.h:464 ]: [ServerThr.5] Shutdown finished [ 2016-09-28 17:59:27.5353 65370/0x70000018a000 Ser/Server.h:816 ]: [ServerThr.1] Freed 128 spare client objects [ 2016-09-28 17:59:27.5353 65370/0x7000008b4000 Ser/Server.h:464 ]: [ServerThr.8] Shutdown finished [ 2016-09-28 17:59:27.5352 65370/0x7000006a8000 Ser/Server.h:816 ]: [ServerThr.6] Freed 128 spare client objects [ 2016-09-28 17:59:27.5353 65370/0x70000049c000 
Ser/Server.h:816 ]: [ServerThr.4] Freed 128 spare client objects [ 2016-09-28 17:59:27.5354 65370/0x7000006a8000 Ser/Server.h:464 ]: [ServerThr.6] Shutdown finished [ 2016-09-28 17:59:27.5355 65370/0x700000396000 Ser/Server.h:816 ]: [ServerThr.3] Freed 128 spare client objects [ 2016-09-28 17:59:27.5354 65370/0x700000290000 Ser/Server.h:816 ]: [ServerThr.2] Freed 128 spare client objects [ 2016-09-28 17:59:27.5355 65370/0x700000396000 Ser/Server.h:464 ]: [ServerThr.3] Shutdown finished [ 2016-09-28 17:59:27.5355 65370/0x700000290000 Ser/Server.h:464 ]: [ServerThr.2] Shutdown finished [ 2016-09-28 17:59:27.5354 65370/0x70000018a000 Ser/Server.h:464 ]: [ServerThr.1] Shutdown finished [ 2016-09-28 17:59:27.5355 65370/0x70000049c000 Ser/Server.h:464 ]: [ServerThr.4] Shutdown finished [ 2016-09-28 17:59:27.5363 65372/0x7fff75dbf000 age/Ust/UstRouterMain.cpp:523 ]: Passenger UstRouter shutdown finished 2016/09/28 17:59:27 [notice] 62#0: using the "kqueue" event method 2016/09/28 17:59:27 [warn] 62#0: 1024 worker_connections exceed open file resource limit: 256 [ 2016-09-28 17:59:27.5538 65528/0x7fff75dbf000 age/Wat/WatchdogMain.cpp:1291 ]: Starting Passenger watchdog... [ 2016-09-28 17:59:27.5800 65530/0x7fff75dbf000 age/Cor/CoreMain.cpp:982 ]: Starting Passenger core... [ 2016-09-28 17:59:27.5812 65530/0x7fff75dbf000 age/Cor/CoreMain.cpp:235 ]: Passenger core running in multi-application mode. [ 2016-09-28 17:59:27.5914 65530/0x7fff75dbf000 age/Cor/CoreMain.cpp:732 ]: Passenger core online, PID 65530 [ 2016-09-28 17:59:27.6043 65532/0x7fff75dbf000 age/Ust/UstRouterMain.cpp:529 ]: Starting Passenger UstRouter... 
[ 2016-09-28 17:59:27.6060 65532/0x7fff75dbf000 age/Ust/UstRouterMain.cpp:342 ]: Passenger UstRouter online, PID 65532
2016/09/28 17:59:27 [notice] 62#0: start worker processes
2016/09/28 17:59:27 [notice] 62#0: start worker process 65534
2016/09/28 17:59:27 [notice] 62#0: start worker process 65535
[ 2016-09-28 17:59:27.6540 65370/0x7fff75dbf000 age/Cor/CoreMain.cpp:967 ]: Passenger core shutdown finished
2016/09/28 17:59:27 [notice] 65374#0: gracefully shutting down
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 65375#0: gracefully shutting down
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 65374#0: exiting
2016/09/28 17:59:27 [notice] 62#0: signal 20 (SIGCHLD) received
2016/09/28 17:59:27 [notice] 65375#0: exiting
2016/09/28 17:59:27 [notice] 65374#0: exit
2016/09/28 17:59:27 [notice] 65375#0: exit
2016/09/28 17:59:27 [notice] 62#0: signal 20 (SIGCHLD) received
2016/09/28 17:59:27 [notice] 62#0: worker process 65374 exited with code 0
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 62#0: signal 20 (SIGCHLD) received
2016/09/28 17:59:27 [notice] 62#0: worker process 65375 exited with code 0
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received
2016/09/28 17:59:27 [notice] 62#0: signal 23 (SIGIO) received

nginx test shows nothing unusual.  The syntax has not changed in some time.  Nothing has changed except the ruby files (app.rb and views).  Then I reinstalled an archive that worked, and it simply won't deliver the goods.

Not sure what to do nor what to look at.  Any advice appreciated.
Cheers

_____________
Rich in Toronto @ VP

From lists at viaduct-productions.com  Wed Sep 28 22:23:10 2016
From: lists at viaduct-productions.com (Viaduct Lists)
Date: Wed, 28 Sep 2016 18:23:10 -0400
Subject: localhost on Passenger delayed serve
In-Reply-To: <8512E659-C956-4654-80F9-D77B92C7A95B@viaduct-productions.com>
References: <8512E659-C956-4654-80F9-D77B92C7A95B@viaduct-productions.com>
Message-ID: 

Never mind.  Solved.

> On Sep 28, 2016, at 6:12 PM, Viaduct Lists wrote:
>
> Hi folks.  A localhost passenger domain (hq.local) isn't showing up on my local Safari.  It's pointed properly in the hosts file and Firefox takes about 10 seconds to show it, but Safari just waits for it.  Curl also takes about 10 seconds to show the page, but it does show up using:
>
> curl --basic hq.local

_____________
Rich in Toronto @ VP

From lists at lazygranch.com  Wed Sep 28 22:59:49 2016
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Wed, 28 Sep 2016 15:59:49 -0700
Subject: fake googlebots
In-Reply-To: <20160925145833.076ee55e@linux-h57q.site>
References: <20160925145833.076ee55e@linux-h57q.site>
Message-ID: <20160928155949.6b72db39@linux-h57q.site>

http://pastebin.com/tZZg3RbA/?e=1

This is the access.log file data relevant to that fake googlebot. It starts with a fake googlebot entry, then goes downhill from there. I rate limit at 10/s. I only allow the verbs HEAD and GET, so the POST went to 444 directly. I replaced the domain with a fake one.

From emailgrant at gmail.com  Wed Sep 28 23:49:20 2016
From: emailgrant at gmail.com (Grant)
Date: Wed, 28 Sep 2016 16:49:20 -0700
Subject: nginx reverse proxy causing TCP queuing spikes
In-Reply-To: <0E696EE8-7810-4C7A-9A05-EBD3D38DA501@fearnothingproductions.net>
References: <0E696EE8-7810-4C7A-9A05-EBD3D38DA501@fearnothingproductions.net>
Message-ID: 

>> I do think this is related to 'proxy_read_timeout 60m;' leaving too
>> many connections open.
Can I somehow allow pages to load for up to
>> 60m but not bog my server down with too many connections?
>
> Pardon me, but why on earth do you have an environment in which an HTTP request can take an hour? That seems like a serious abuse of the protocol.
>
> Keeping an HTTP request open means keeping the associated TCP connection open as well. If you have connections open for an hour, you're probably going to run into concurrency issues.

I don't actually need 60m, but I do need up to about 20m for some
backend administrative processes.  What is the right way to solve this
problem?  I don't think I can speed up the processes.

- Grant

From emailgrant at gmail.com  Thu Sep 29 03:02:08 2016
From: emailgrant at gmail.com (Grant)
Date: Wed, 28 Sep 2016 20:02:08 -0700
Subject: location query string?
Message-ID: 

Can I define a location block based on the value of a query string so
I can set a longer timeout for certain form submissions, even though
all of my form submissions POST to the same URL?

- Grant

From rainer at ultra-secure.de  Thu Sep 29 08:27:29 2016
From: rainer at ultra-secure.de (rainer at ultra-secure.de)
Date: Thu, 29 Sep 2016 10:27:29 +0200
Subject: proxy_pass generates double slash
Message-ID: <4d206622598f16fbd025ded0b5f7273d@ultra-secure.de>

Hi,

I need to proxy a location and remove the location-URL at the same time.
As I found out, this is achieved by adding a slash at the end of the
proxy_pass directive.

This works almost as intended, but it adds a double-slash to the
beginning of the URL as it arrives on the other server.

So, if /bla/hence/here/index.php is requested, what arrives on the other
server is really //hence/here/index.php.

location /bla {
    proxy_pass https://target:4444/;
}

...

Does anybody have a solution for this?
I have another site where this is no problem (because the target is yet
another nginx that just compacts the double-slashes, I assume).
However, this time the target is a node.js app that I have no control over - and it obviously can't really make sense of this, resulting in a 404. Personally, I consider this a bug on the side of the node.js app - but I would hope there is a way to remove that double-slash on the nginx-side. Thanks in advance. From francis at daoine.org Thu Sep 29 08:45:44 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Sep 2016 09:45:44 +0100 Subject: proxy_pass generates double slash In-Reply-To: <4d206622598f16fbd025ded0b5f7273d@ultra-secure.de> References: <4d206622598f16fbd025ded0b5f7273d@ultra-secure.de> Message-ID: <20160929084544.GC11677@daoine.org> On Thu, Sep 29, 2016 at 10:27:29AM +0200, rainer at ultra-secure.de wrote: Hi there, > As I found out, this is achieved by adding a slash at the end of the > proxy_pass directive. That's not quite correct. The documentation is at http://nginx.org/r/proxy_pass You almost certainly want to have the same number of slashes at the end of your "location" and at the end of your "proxy_pass". > location /bla { Make that be "location /bla/" and you will probably be happier. > Personally, I consider this a bug on the side of the node.js app - I would say that that is arguable -- /one and //one are different urls. It may be common and "friendly" to handle them in the same way, but I don't think it should be required. GIGO applies, and it is less work for the server to decline to try to guess what the client means, when the client could send the correct information the first time. > but I would hope there is a way to remove that double-slash on the > nginx-side. Just don't add it in the first place, as above. Cheers, f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Thu Sep 29 12:30:28 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 29 Sep 2016 17:30:28 +0500 Subject: Open Socket issue !! Message-ID: Hi, We're facing following errors in logs, are they critical ? 
2016/09/29 17:23:54 [alert] 9073#0: *190899046 open socket #21 left in connection 89 2016/09/29 17:23:54 [alert] 9073#0: *190585499 open socket #258 left in connection 112 2016/09/29 17:23:54 [alert] 9073#0: *190997493 open socket #197 left in connection 163 nginx -V nginx version: nginx/1.8.0 built with OpenSSL 1.0.1j-freebsd 15 Oct 2014 (running with OpenSSL 1.0.1l-freebsd 15 Jan 2015) TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_flv_module --with-http_geoip_module --with-http_mp4_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-pcre --with-http_spdy_module --with-http_ssl_module Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Sep 29 12:35:15 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Thu, 29 Sep 2016 08:35:15 -0400 Subject: Open Socket issue !! In-Reply-To: References: Message-ID: <1657f64763f6be8547c5f7d1077c33c6.NginxMailingListEnglish@forum.nginx.org> This happens from time to time on my proxy, reload again and the message is gone. 
Heiko Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269951,269952#msg-269952 From nginx-forum at forum.nginx.org Thu Sep 29 13:17:32 2016 From: nginx-forum at forum.nginx.org (hotwirez) Date: Thu, 29 Sep 2016 09:17:32 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20160928211422.GU73038@mdounin.ru> References: <20160928211422.GU73038@mdounin.ru> Message-ID: <5268c4726a26c4c3ca04dfb09b88397b.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Sep 28, 2016 at 12:44:45PM -0400, hotwirez wrote: > > [...] > > > I wanted to mention that I've run into this issue as well when > trying to > > enable OCSP stapling, where I have a default_deny SSL server that > has a > > self-signed certificate where I don't want to use OCSP stapling, and > other > > actual server entries for actual sites where I want OCSP stapling > enabled. > > If I enable stapling for only the real sites, it appears to be > silently > > disabled. If I enable it for all server instances, it notices that > the > > default server uses a self-signed certificate and disables stapling. > If I > > remove the default server, OCSP stapling works for the remaining > sites, but > > then accessing the site using a means other than the correct server > name > > provides the SSL certificate for one of the servers. > > Problems with OCSP stapling if it is disabled in the default > server{} block were traced to be an OpenSSL bug, silently fixed in > 1.0.0m/1.0.1g/1.0.2. See here for details: > > https://trac.nginx.org/nginx/ticket/810 > > If you see the problem it means you have to update the OpenSSL > library you are using. > Thank you; it's great you tracked that down! I am on OpenSSL 1.0.1f and Nginx 1.4.6; (Ubuntu 14.04 via apt), so that makes sense. I'll upgrade. Thanks again! 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,257833,269955#msg-269955

From reallfqq-nginx at yahoo.fr  Thu Sep 29 17:00:04 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Thu, 29 Sep 2016 19:00:04 +0200
Subject: How to enable OCSP stapling when default server is self-signed?
In-Reply-To: <5268c4726a26c4c3ca04dfb09b88397b.NginxMailingListEnglish@forum.nginx.org>
References: <20160928211422.GU73038@mdounin.ru> <5268c4726a26c4c3ca04dfb09b88397b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Considering your rather old version of nginx coming from Ubuntu packages,
I suggest you use the latest stable, officially available on nginx.org.
Not related to your issue, but it should not hurt (except with regressions
ofc ;) ).
---
*B. R.*

On Thu, Sep 29, 2016 at 3:17 PM, hotwirez wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
> > Hello!
> >
> > On Wed, Sep 28, 2016 at 12:44:45PM -0400, hotwirez wrote:
> >
> > [...]
> >
> > > I wanted to mention that I've run into this issue as well when
> > trying to
> > > enable OCSP stapling, where I have a default_deny SSL server that
> > has a
> > > self-signed certificate where I don't want to use OCSP stapling, and
> > other
> > > actual server entries for actual sites where I want OCSP stapling
> > enabled.
> > > If I enable stapling for only the real sites, it appears to be
> > silently
> > > disabled. If I enable it for all server instances, it notices that
> > the
> > > default server uses a self-signed certificate and disables stapling.
> > If I
> > > remove the default server, OCSP stapling works for the remaining
> > sites, but
> > > then accessing the site using a means other than the correct server
> > name
> > > provides the SSL certificate for one of the servers.
> >
> > Problems with OCSP stapling if it is disabled in the default
> > server{} block were traced to be an OpenSSL bug, silently fixed in
> > 1.0.0m/1.0.1g/1.0.2.
See here for details:
> >
> > https://trac.nginx.org/nginx/ticket/810
> >
> > If you see the problem it means you have to update the OpenSSL
> > library you are using.
> >
> Thank you; it's great you tracked that down!  I am on OpenSSL 1.0.1f and
> Nginx 1.4.6 (Ubuntu 14.04 via apt), so that makes sense.  I'll upgrade.
>
> Thanks again!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,257833,269955#msg-269955
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Thu Sep 29 17:09:47 2016
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 29 Sep 2016 13:09:47 -0400
Subject: location /robots.txt conflict / issue
Message-ID: <2618870504a27b3e9465e8ee1b054437.NginxMailingListEnglish@forum.nginx.org>

So this is one of those issues that is most likely a bad configuration: my
robots.txt file is returning a 404 because of another location, since I am
disallowing access to any text files but want to allow only the robots.txt
to be accessed.

location /robots.txt {
    root 'location/to/robots/txt/file';
}

#This is to stop people digging into any directories looking for files that only PHP etc should read or serve
location ~* \.(txt|ini|xml|zip|rar)$ {
    return 404;
}

I have not tested it yet, but I believe I could fix the above by changing
my config to this:

location /robots.txt {
    allow all;
    root 'location/to/robots/txt/file';
}

#This is to stop people digging into any directories looking for files that only PHP etc should read or serve
location ~* \.(txt|ini|xml|zip|rar)$ {
    deny all;
}

Is that the only possible way to deny access to all text files apart from
the robots.txt?
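For what it's worth, a sketch of the usual fix, relying on nginx's location-matching rules: an exact-match ("=") location is selected before any regex location, so the regex block never sees the request for /robots.txt (the root path here is a placeholder, not the poster's real path):

```nginx
# Exact match wins over regex locations, so /robots.txt is served
# even though it ends in .txt.
location = /robots.txt {
    root /var/www/site;
}

# Never reached for /robots.txt; still blocks every other matching file.
location ~* \.(txt|ini|xml|zip|rar)$ {
    return 404;
}
```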
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269960,269960#msg-269960 From yushang at outlook.com Thu Sep 29 18:32:14 2016 From: yushang at outlook.com (shang yu) Date: Thu, 29 Sep 2016 18:32:14 +0000 Subject: nginx multiprocess on Windows Message-ID: Hi dear all, I'm running nginx 1.10.1 on Windows , which is configured with 2 sub processes . I'm using ab to test the work load , found only one process is busy (cpu usage is high) but the other process is idle (cpu usage is zero). Why ? is it some kind of limit for nginx on Windows ? Many thanks !!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Sep 29 18:45:53 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 29 Sep 2016 21:45:53 +0300 Subject: nginx multiprocess on Windows In-Reply-To: References: Message-ID: <1524130.VMRjcTxCbR@vbart-workstation> On Thursday 29 September 2016 18:32:14 shang yu wrote: > Hi dear all, > > I'm running nginx 1.10.1 on Windows , which is configured with 2 sub processes . I'm using ab to test the work load , found only one process is busy (cpu usage is high) but the other process is idle (cpu usage is zero). Why ? is it some kind of limit for nginx on Windows ? Many thanks !!! http://nginx.org/en/docs/windows.html#known_issues wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Thu Sep 29 19:11:51 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Thu, 29 Sep 2016 15:11:51 -0400 Subject: nginx multiprocess on Windows In-Reply-To: References: Message-ID: Try http://nginx-win.ecsds.eu/index.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269968,269970#msg-269970 From francis at daoine.org Thu Sep 29 21:23:04 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Sep 2016 22:23:04 +0100 Subject: location /robots.txt conflict / issue In-Reply-To: <2618870504a27b3e9465e8ee1b054437.NginxMailingListEnglish@forum.nginx.org> References: <2618870504a27b3e9465e8ee1b054437.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160929212304.GD11677@daoine.org> On Thu, Sep 29, 2016 at 01:09:47PM -0400, c0nw0nk wrote: Hi there, The docs are at http://nginx.org/r/location In this case, you want "=". > location /robots.txt { > location ~* \.(txt|ini|xml|zip|rar)$ { A request for /robots.txt will match the second location there. > location /robots.txt { > location ~* \.(txt|ini|xml|zip|rar)$ { A request for /robots.txt will still match the second location there. Change the first location to be location = /robots.txt and it should all Just Work. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 29 21:32:33 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Sep 2016 22:32:33 +0100 Subject: location query string? In-Reply-To: References: Message-ID: <20160929213233.GE11677@daoine.org> On Wed, Sep 28, 2016 at 08:02:08PM -0700, Grant wrote: Hi there, > Can I define a location block based on the value of a query string so No. A location match does not consider a query string. > I can set a longer timeout for certain form submissions even though > all of my form submissions POST to the same URL? I'm not quite sure what the specific problem you are seeing is, from the previous mails. 
Do you have a reproducible test case where you can see the problem? (And therefore, see if any suggested fix makes a difference?) http://nginx.org/r/proxy_read_timeout should (I think) matter between successive reads, and should not matter when the upstream cleanly closes the connection. Might the problem be related to your upstream not cleanly closing the connections? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 29 22:02:28 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Sep 2016 23:02:28 +0100 Subject: Static or dynamic content In-Reply-To: References: Message-ID: <20160929220228.GF11677@daoine.org> On Wed, Sep 28, 2016 at 11:01:36AM +0000, Jens Dueholm Christensen wrote: Hi there, > I've got an issue where nginx (see below for version/compile options) > returns a 405 (not allowed) to POST requests to clients when the upstream > proxy returns a 503. I don't see that, when I try to build a test system. Are you doing anything clever with error_page or anything like that? > My config is basically this My config is exactly this:

===
http {
    server {
        listen 8080;
        location / {
            root /tmp;
            try_files /offline.html @xact;
        }
        location @xact {
            proxy_pass http://127.0.0.1:8082;
        }
    }
    server {
        listen 8082;
        return 503;
    }
}
===

Do you see the problem when you test with that? (The file /tmp/offline.html does not exist.) If not, what part of your full config can you add to the 8080 server which causes the problem to appear? curl -i http://127.0.0.1:8080/x does a GET. curl -i -d k=v http://127.0.0.1:8080/x does a POST. (Use -v instead of -i for even more information.) > When the request method is a POST nginx transforms this to a 405 "not > allowed", which - as far as I can understand from a number of posts > found googling - is to be expected when POSTing to a static resource. 
If you can build a repeatable test case -- either like the above, or perhaps configure a HAProxy instance that will return the 503 itself -- then it should be possible to see what is happening. > An individual backend instance can also return a 503 error if it > encounters a problem servicing a request. > > That 503 response is sent back to HAProxy, which passes it along to > nginx, which returns the 503 to the client (and it's NOT converted into > a 405!), so I know that nginx doesn't translate all 503s to 405s if it's > generated by a backend instance even if the request is POSTed. If you can see the bytes on the wire between HAProxy and nginx in both cases, the difference may be obvious. > This leads me to think that somehow HAProxy does not provide the > correct headers in the 503 errorpage to make nginx assume the response > from HAProxy isn't dynamic. I suspect that that is incorrect -- 503 is 503 -- but a config that allows the error to be reproduced will be very helpful. Cheers, f -- Francis Daly francis at daoine.org From dewanggaba at xtremenitro.org Fri Sep 30 03:22:58 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 30 Sep 2016 10:22:58 +0700 Subject: ngx_brotli Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! Is there any best practice or some example to see how brotli compression works? I've patched nginx using ngx_brotli[1], but I didn't see any brotli header, there's only gzip. Many thanks. 
[1] https://github.com/cloudflare/ngx_brotli_module -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJX7dqOGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcGSVD/41lF7vqx79KwIuWoaqUYZbGZ/XVByUHWdSnjHM/Ysd0mMU+Gyj uwmGfg4VifA56KO6hDBvptjmZ72Tg2NAz+6Sh7zQzFVuFHiv6gdM7/4nar7qNX8M BO1s7LHJsIybQUMwXHVa2tX/AUoaUsGCBgHidkeQU5Y2d0xFPKwNnET6un8vDu1/ 5wF/iV7rXChXs4rATzwpbEM4ujbIU/FmWJDriOxHMSDTBkyCm2VYVENUYekbA6nq 15zjVULwQwC/eBrr4wOJoQ8h4wCNVoA7OyZ4gzkmvyMEyOWXt3BUqb2bYKPosZSm HsihxD0Vls/P0fXhAjf5+wSeBcpbU1636Ci4UT6wiieCdvu5Z9EI1MJTQSsZ9RFW GgxVg83Qcuq+H7HzF45/ksf0DNArd+X3/y5UvJTwBB7tcngmu+6fR9cpElsafNyA FiYvYKeERT4lRRKZ5CAQYVHChIZBEt3iW+HbWDll3Wsa80Q4aYyZk9eeCbmsA/pd Nzw+xrYaKetw8iDu3onxGwZ+oELFPHCYia/ulsXGC1GoAEwv12TloucPhiUFa3Pv KsbQo1WQC0xyzZMQK/I3BASdPFyDozF8zMpHYa/+bidBMEUV4hVio6r/B23lQhcp bf1r3u1bAvGKuSTB/jiIXwFWy1f1++Gg+pweFBUQppZ+dSuBmzS5TPg0Kg== =DR41 -----END PGP SIGNATURE----- From JEDC at ramboll.com Fri Sep 30 09:56:50 2016 From: JEDC at ramboll.com (Jens Dueholm Christensen) Date: Fri, 30 Sep 2016 09:56:50 +0000 Subject: Static or dynamic content In-Reply-To: <20160929220228.GF11677@daoine.org> References: <20160929220228.GF11677@daoine.org> Message-ID: On Friday, September 30, 2016 12:02 AM Francis Daly wrote, >> I've got an issue where nginx (see below for version/compile options) >> returns a 405 (not allowed) to POST requests to clients when the upstream >> proxy returns a 503. > I don't see that, when I try to build a test system. > Are you doing anything clever with error_page or anything like that? No, I have an "error_page 503" and a similar one for 404 that points to two named locations, but that's it. > My config is exactly this: [SNIP] > Do you see the problem when you test with that? > (The file /tmp/offline.html does not exist.) Alas I will have to test, but I am unable to do so today and I am out of the office for most parts of next week, so I'll have to get back to you on that. 
>> When the request method is a POST nginx transforms this to a 405 "not >> allowed", which - as far as I can understand from a number of posts >> found googling - is to be expected when POSTing to a static resource. > If you can build a repeatable test case -- either like the above, or > perhaps configure a HAProxy instance that will return the 503 itself -- > then it should be possible to see what is happening. Yes, that would be preferable, but I'd just like to have others comment on what I'm seeing before I go there. >> That 503 response is sent back to HAProxy, which passes it along to >> nginx, which returns the 503 to the client (and it's NOT converted into >> a 405!), so I know that nginx doesn't translate all 503s to 405s if it's >> generated by a backend instance even if the request is POSTed. > If you can see the bytes on the wire between HAProxy and nginx in both > cases, the difference may be obvious. Yes, but alas that involves tinkering with a production system and/or having tcpdump running just at the right point in time when all backends are down. Given the nature of this system and the fact that we receive anywhere from 5,000 to 20,000+ hits per minute, it could be close to impossible - but if it comes down to that, I must find a way if I cannot reproduce it locally... >> This leads me to think that somehow HAProxy does not provide the >> correct headers in the 503 errorpage to make nginx assume the response >> from HAProxy isn't dynamic. > I suspect that that is incorrect -- 503 is 503 -- but a config that > allows the error to be reproduced will be very helpful. 
Oh yes - and given the fact that nginx as well as HAProxy is pretty darn good at handling HTTP requests, I know my idea is a bit far-fetched. But from what I'm seeing I cannot come up with another good reason for this behaviour apart from moonbeams, unusual sunspot activity localized to this single nginx instance in my environment in my datacenter, or leprechauns hunting gold - which quite frankly I've never before (and never will) accept as possible explanations for anything... ;) Regards, Jens Dueholm Christensen From francis at daoine.org Fri Sep 30 10:54:50 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Sep 2016 11:54:50 +0100 Subject: Static or dynamic content In-Reply-To: References: <20160929220228.GF11677@daoine.org> Message-ID: <20160930105450.GG11677@daoine.org> On Fri, Sep 30, 2016 at 09:56:50AM +0000, Jens Dueholm Christensen wrote: > On Friday, September 30, 2016 12:02 AM Francis Daly wrote, Hi there, > >> I've got an issue where nginx (see below for version/compile options) > >> returns a 405 (not allowed) to POST requests to clients when the upstream > >> proxy returns a 503. > > > I don't see that, when I try to build a test system. > > > Are you doing anything clever with error_page or anything like that? > > No, I have an "error_page 503" and a similar one for 404 that points to two named locations, but that's it. That might matter. I can now get a 503, 404, or 405 result from nginx, when upstream sends a 503. Add

error_page 503 @err;
proxy_intercept_errors on;

within the server{} block, and add

location @err {
    root /tmp;
}

Now make /tmp/x exist, and /tmp/y not exist. A GET request for /x is proxied, gets a 503, and returns the content of /tmp/x with a 503 status. A GET request for /y is proxied, gets a 503, and returns a 404 status. A POST request for /x is proxied, gets a 503, and returns a 405 status. A POST request for /y is proxied, gets a 503, and returns a 404 status. 
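For reference, the earlier test config with those error_page additions folded in reads as one piece like this (a sketch only; the ports, file paths, and the @err location name are the illustrative values from the example):

```nginx
http {
    server {
        listen 8080;

        # Intercept the upstream's 503 and serve a static page instead.
        proxy_intercept_errors on;
        error_page 503 @err;

        location / {
            root /tmp;
            # /offline.html does not exist, so requests fall through
            # to the proxied @xact location.
            try_files /offline.html @xact;
        }

        location @xact {
            proxy_pass http://127.0.0.1:8082;
        }

        location @err {
            # Static root for intercepted errors: /tmp/x exists, /tmp/y does not.
            root /tmp;
        }
    }

    # Stand-in upstream that always answers 503.
    server {
        listen 8082;
        return 503;
    }
}
```

With that in place, curl -i http://127.0.0.1:8080/x and curl -i -d k=v http://127.0.0.1:8080/x should reproduce the GET/POST difference: the POST that gets internally redirected to the (static) error document is what yields the 405.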
Since you also have an error_page for 404, perhaps that does something that leads to the output that you see. I suspect that when you show your error_page config and the relevant locations, it may become clearer what you want to end up with. > > If you can see the bytes on the wire between HAProxy and nginx in both > > cases, the difference may be obvious. > > Yes, but alas that involves tinkering with a production system and/or having tcpdump running just at the right point in time when all backends are down. A test system which talks to a local HAProxy which has no "up" backends would probably be quicker to build. Good luck with it, f -- Francis Daly francis at daoine.org From nick at methodlab.info Fri Sep 30 11:08:16 2016 From: nick at methodlab.info (Nick Lavlinsky - Method Lab) Date: Fri, 30 Sep 2016 14:08:16 +0300 Subject: ngx_brotli In-Reply-To: References: Message-ID: <8be4a3d4-fbcd-f5ff-ce4f-a4e2a5d0a38c@methodlab.info> Hi! Just use a Brotli-compatible browser (Chrome, Firefox) and check the headers of the response in Developer tools. It should say: content-encoding: br By the way: it only works for HTTPS. 30.09.2016 06:22, Dewangga Bachrul Alam wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hello! > > Is there any best practice or some example to see how brotli > compression works? I've patched nginx using ngx_brotli[1], but I > didn't see any brotli header, there's only gzip. > > Many thanks. 
> [1] https://github.com/cloudflare/ngx_brotli_module > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2 > > iQI4BAEBCAAiBQJX7dqOGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl > f9IgoCjNcGSVD/41lF7vqx79KwIuWoaqUYZbGZ/XVByUHWdSnjHM/Ysd0mMU+Gyj > uwmGfg4VifA56KO6hDBvptjmZ72Tg2NAz+6Sh7zQzFVuFHiv6gdM7/4nar7qNX8M > BO1s7LHJsIybQUMwXHVa2tX/AUoaUsGCBgHidkeQU5Y2d0xFPKwNnET6un8vDu1/ > 5wF/iV7rXChXs4rATzwpbEM4ujbIU/FmWJDriOxHMSDTBkyCm2VYVENUYekbA6nq > 15zjVULwQwC/eBrr4wOJoQ8h4wCNVoA7OyZ4gzkmvyMEyOWXt3BUqb2bYKPosZSm > HsihxD0Vls/P0fXhAjf5+wSeBcpbU1636Ci4UT6wiieCdvu5Z9EI1MJTQSsZ9RFW > GgxVg83Qcuq+H7HzF45/ksf0DNArd+X3/y5UvJTwBB7tcngmu+6fR9cpElsafNyA > FiYvYKeERT4lRRKZ5CAQYVHChIZBEt3iW+HbWDll3Wsa80Q4aYyZk9eeCbmsA/pd > Nzw+xrYaKetw8iDu3onxGwZ+oELFPHCYia/ulsXGC1GoAEwv12TloucPhiUFa3Pv > KsbQo1WQC0xyzZMQK/I3BASdPFyDozF8zMpHYa/+bidBMEUV4hVio6r/B23lQhcp > bf1r3u1bAvGKuSTB/jiIXwFWy1f1++Gg+pweFBUQppZ+dSuBmzS5TPg0Kg== > =DR41 > -----END PGP SIGNATURE----- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- www.methodlab.ru +7 (499) 519-00-12 From emailgrant at gmail.com Fri Sep 30 11:27:49 2016 From: emailgrant at gmail.com (Grant) Date: Fri, 30 Sep 2016 04:27:49 -0700 Subject: location query string? In-Reply-To: <20160929213233.GE11677@daoine.org> References: <20160929213233.GE11677@daoine.org> Message-ID: > I'm not quite sure what the specific problem you are seeing is, from > the previous mails. > > Do you have a reproducible test case where you can see the problem? (And > therefore, see if any suggested fix makes a difference?) > > http://nginx.org/r/proxy_read_timeout should (I think) matter between > successive reads, and should not matter when the upstream cleanly closes > the connection. Might the problem be related to your upstream not cleanly > closing the connections? It sure could be. 
Would this be a good way to monitor that possibility: netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n I could watch for the TIME_WAIT row getting too large. - Grant From emailgrant at gmail.com Fri Sep 30 16:16:16 2016 From: emailgrant at gmail.com (Grant) Date: Fri, 30 Sep 2016 09:16:16 -0700 Subject: proxy_set_header Connection ""; Message-ID: Does anyone know why this is required for upstream keepalive? - Grant From rpaprocki at fearnothingproductions.net Fri Sep 30 16:19:01 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 30 Sep 2016 09:19:01 -0700 Subject: proxy_set_header Connection ""; In-Reply-To: References: Message-ID: By default the Connection header is passed to the origin. If a client sends a request with Connection: close, Nginx would send this to the upstream, effectively disabling keepalive. By clearing this header, Nginx will not send it on to the upstream source, leaving it to send its own Connection header as appropriate. On Fri, Sep 30, 2016 at 9:16 AM, Grant wrote: > Does anyone know why this is required for upstream keepalive? > > - Grant > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Fri Sep 30 16:24:36 2016 From: emailgrant at gmail.com (Grant) Date: Fri, 30 Sep 2016 09:24:36 -0700 Subject: proxy_set_header Connection ""; In-Reply-To: References: Message-ID: > By default the Connection header is passed to the origin. If a client sends > a request with Connection: close, Nginx would send this to the upstream, > effectively disabling keepalive. By clearing this header, Nginx will not > send it on to the upstream source, leaving it to send its own Connection > header as appropriate. That makes perfect sense. Is there a way to test if keepalive is active between nginx and the upstream server? 
- Grant >> Does anyone know why this is required for upstream keepalive? >> >> - Grant From rpaprocki at fearnothingproductions.net Fri Sep 30 16:27:30 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 30 Sep 2016 09:27:30 -0700 Subject: proxy_set_header Connection ""; In-Reply-To: References: Message-ID: On Fri, Sep 30, 2016 at 9:24 AM, Grant wrote: > > By default the Connection header is passed to the origin. If a client > sends > > a request with Connection: close, Nginx would send this to the upstream, > > effectively disabling keepalive. By clearing this header, Nginx will not > > send it on to the upstream source, leaving it to send its own Connection > > header as appropriate. > > > That makes perfect sense. Is there a way to test if keepalive is > active between nginx and the upstream server? > > - Grant > A few quick thoughts come to mind: - tcpdump traffic between nginx and upstream (this seems like the best way to verify given that keepalive behavior directly affects tcp traffic, so you want to examine this directly) - examine nginx debug logs to show what headers are being passed to / received from upstream (requires your nginx to be built with --with-debug) - query kernel for status of tcp connections while simulating traffic between nginx and upstream (eg netstat, ss, etc) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Sep 30 19:34:54 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 30 Sep 2016 15:34:54 -0400 Subject: Nginx Proxy KeepAlive and FastCGI KeepAlive Message-ID: <6828e74c38a83dccc1af533cc283258d.NginxMailingListEnglish@forum.nginx.org> FastCGI : upstream fastcgi_backend { server 127.0.0.1:9000; keepalive 8; } server { ... location /fastcgi/ { fastcgi_pass fastcgi_backend; fastcgi_keep_conn on; ... } } Proxy : upstream http_backend { server 127.0.0.1:80; keepalive 16; } server { ... 
location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive So when keeping connections alive for backend processes / servers, how is the keepalive number worked out? Should I just double it each time I add a new server into the backend's upstream? So if I have 100 fastcgi servers in my upstream, it's just 8x100=800, so my keepalive value should be keepalive 800; and the same for the proxy_pass upstream: 16x100=1600, keepalive 1600; Or would that be too much, and it should not be calculated like this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269997,269997#msg-269997 From nginx-forum at forum.nginx.org Fri Sep 30 19:51:43 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 30 Sep 2016 15:51:43 -0400 Subject: Nginx Proxy KeepAlive and FastCGI KeepAlive In-Reply-To: <6828e74c38a83dccc1af533cc283258d.NginxMailingListEnglish@forum.nginx.org> References: <6828e74c38a83dccc1af533cc283258d.NginxMailingListEnglish@forum.nginx.org> Message-ID: A backend is a pool (upstream) to which work is distributed; the pool size has no bearing on keepalive. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269997,270000#msg-270000 From nginx-forum at forum.nginx.org Fri Sep 30 20:02:31 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 30 Sep 2016 16:02:31 -0400 Subject: Nginx Proxy KeepAlive and FastCGI KeepAlive In-Reply-To: References: <6828e74c38a83dccc1af533cc283258d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97ed7999f0ecf4420d6ac5716e582793.NginxMailingListEnglish@forum.nginx.org> Thanks :) I thought that the more servers I have within my upstream, the more I should also increase my keepalive for best performance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269997,270001#msg-270001
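To make itpp2012's point concrete: the keepalive directive in an upstream block caps the number of idle connections each worker process keeps to the pool as a whole, so it does not get multiplied per server. A sketch (the server addresses are illustrative):

```nginx
upstream fastcgi_backend {
    server 127.0.0.1:9000;
    server 127.0.0.2:9000;   # adding servers does not mean "keepalive" must double

    # At most 8 idle keepalive connections are preserved in the cache
    # of each worker process; connections beyond that are simply closed.
    keepalive 8;
}

server {
    listen 80;

    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;   # keep the FastCGI connection open after the request
        include fastcgi_params;
    }
}
```

The value is a cache size, not a connection limit: choose it from the steady-state concurrency you want to keep warm toward the backend, and tune it by observation rather than by pool-size arithmetic.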