From francis at daoine.org Sat Apr 1 07:32:58 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 1 Apr 2017 08:32:58 +0100 Subject: curl 301 moved permanently if I don't use slash at the end of the url In-Reply-To: References: Message-ID: <20170401073258.GA3428@daoine.org> On Tue, Mar 28, 2017 at 12:56:13PM -0400, c4rl wrote: Hi there, > I need to list the content of some directories with curl without to use a > '/' at the end of the url. There's a relative-url reason why that is usually not a good idea, which is probably why the defaults for many web servers' "auto-index" feature do not do it, and instead issue the http 301. It is reasonable and possible to do the directory listing the way you want, but it is likely to include some extra coding (rather than just configuration) on your part. > If I do not use the slash then I receive the message below, otherwise the > content is showed. I tried many rewrite rules without success. > > [user at localhost ~]$ curl http://mydomain.example.com/data/foo The important part of that response is in the http headers, that you can see by (for example) adding "-i" after "curl". It will show the 301 response code, and the Location: header that includes the trailing /. > [user at localhost ~]$ curl http://mydomain.example.com/data/foo/ > > > Index of /data/foo/ > >

> Index of /data/foo/
> ../
> 57581/                                            12-Jul-2016 01:56                   -
> 57582/                                            13-Jul-2016 01:55                   -
> 57583/                                            14-Jul-2016 00:34                   -
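For reference, a minimal configuration sketch of one way to serve the listing for the slash-less URL without the external 301 (the root path below is an invented example, not taken from the poster's setup):

```nginx
# Illustrative sketch only: let nginx retry /data/foo as /data/foo/ internally.
# If "$uri" is not a regular file but "$uri/" is an existing directory,
# try_files continues processing with the trailing slash added internally,
# so the client never sees the 301.
location /data/ {
    root /srv/www;        # assumed document root -- adjust to your layout
    autoindex on;
    try_files $uri $uri/ =404;
}
```

The caveat discussed in this reply still applies: the client's URL remains /data/foo, so the relative links in the generated index resolve one level too high and would need the extra handling described here.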

> You will have to write something that happens before nginx's in-built default for a request with no trailing / that corresponds to a directory name on the filesystem. If you want the response to be html usable by a client, then the output will not be identical to the above; instead the entries will have to be of the form 57583/ 14-Jul-2016 00:34 - (and you may or may not want one or both of the trailing / characters there.) > This is my vhost configuration: > location /data/foo { > alias /data/foo; > autoindex on; > } I suspect that you would need something like location = /data/foo { my_special_autoindex on; } where you must write the code behind "my_special_autoindex" yourself. Or, more likely, use something like "try_files" (possibly in a chain) to handle a generic url; if it does not end in / and does correspond to a directory, then invoke your extra code -- which might be handled by a fastcgi script you write, or by something you write in one of the embedded languages in nginx. It is possible that someone has already wanted the same thing as you, and written a module to do it. I'm not aware of that module. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 1 08:04:09 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 1 Apr 2017 09:04:09 +0100 Subject: HTTP To TCP Conversion In-Reply-To: <53963b88395836b0e4e13cd79bdd6f94.NginxMailingListEnglish@forum.nginx.org> References: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> <53963b88395836b0e4e13cd79bdd6f94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170401080409.GB3428@daoine.org> On Wed, Mar 29, 2017 at 12:44:09PM -0400, nginxsantos wrote: Hi there, > Can someone please guide me on how this can be done. I am quite familiar > with nginx code. If someone can guide me how this can be achieved (passing > the incoming traffic over tcp connection to tcp clients), I can pick up... 
Is there some part of the previously-mentioned nchan module design or source that is unclear or inappropriate here? I suspect that you may have a better chance of getting a specific answer, if you ask a specific question. Good luck with it, f -- Francis Daly francis at daoine.org From 451174619 at qq.com Sat Apr 1 10:52:29 2017 From: 451174619 at qq.com (=?gb18030?B?xq7Grs/j?=) Date: Sat, 1 Apr 2017 18:52:29 +0800 Subject: No subject Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Apr 1 11:36:41 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 1 Apr 2017 12:36:41 +0100 Subject: how to proxy a proxy (subrequest with corporate proxy) In-Reply-To: References: Message-ID: <20170401113641.GC3428@daoine.org> On Thu, Mar 30, 2017 at 06:13:59PM -0400, CeeGeeDev wrote: Hi there, > It's not clear to us how to configure a "web proxy" for a subrequest, since > the subrequest itself is already basically a "proxy" call. Stock nginx does not speak proxied-http to a http proxy. I suspect that the facility will only become available when someone wants it enough to write the code, or to cause the code to be written. If you have a config that works well-enough on your system (as in: your proxy server is configured in a transparent-like manner, and accepts http requests to itself and then reverse-proxies the world), then continuing to use that config is probably appropriate. That is: if > location /custom/main/request/url { > proxy_buffering off; > proxy_pass_header on; > proxy_set_header Host "www.origin-server.com"; > proxy_pass http://corporate_proxy; > } does everything that you want normally, then something like > location /subrequest { > proxy_buffering off; > proxy_pass_header on; > proxy_set_header Host "rest_server"; > proxy_pass http://corporate_proxy; > } may work for your subrequests. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 1 11:57:28 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 1 Apr 2017 12:57:28 +0100 Subject: Nginx cookie map regex remove + character In-Reply-To: <1f42674cd2c11aa7d9eed7cc33fb3887.NginxMailingListEnglish@forum.nginx.org> References: <1f42674cd2c11aa7d9eed7cc33fb3887.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170401115728.GD3428@daoine.org> On Fri, Mar 24, 2017 at 05:18:23PM -0400, c0nw0nk wrote: Hi there, > The cookie name = a MD5 sum the full / complete value of the cookie seems to > cut of at a plus + symbol Your regex piece is (?[\w]{1,}+) which says to match one or more \w characters ({1,}), one or more times (+) \w is "word character", which is alnum-or-underscore. > What would the correct regex to be to ignore / remove + symbols from > "session_value" If you want to match "word character or plus", use something like [\w+]. And then also probably remove one of "{1,}" and "+", since they mean the same thing and having both is redundant. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Apr 1 12:06:53 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Sat, 01 Apr 2017 08:06:53 -0400 Subject: Memory issue In-Reply-To: References: Message-ID: We made a lot of tests, removing brotli, geoip, we can't solve the issue i suspect : nginx: cache manager process simple bug ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273303#msg-273303 From dynasticspace at gmail.com Sun Apr 2 06:42:00 2017 From: dynasticspace at gmail.com (Dynastic Space) Date: Sun, 2 Apr 2017 09:42:00 +0300 Subject: Configuring a subnet in an upstream server Message-ID: Is it possible to configure a collection of servers using subnet notation in the upstream server knob? e.g. server 192.168.0.0/16. Thanks, Dynastic -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexshe at wix.com Sun Apr 2 11:33:07 2017 From: alexshe at wix.com (Alex Shemshurenko) Date: Sun, 2 Apr 2017 14:33:07 +0300 Subject: Cashing packages for npm registry with NGINX Message-ID: My project is javascript app. I have lots of dependencies. I/O for npm registry takes major portion of CI execution. So my idea is to setup NGINX in front of npm registry and cache tgz files downloads. Im running Ubuntu 14.04. NGINX version is 1.4.6 This is my nginx configuration script user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on;} http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## # access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; # include /etc/nginx/sites-enabled/*; # HTTP 1.1 support proxy_http_version 1.1; proxy_buffering off; proxy_set_header Host $http_host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $proxy_connection; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto; proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl; proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port; # If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the # scheme used to connect to this server map $http_x_forwarded_proto $proxy_x_forwarded_proto { default $http_x_forwarded_proto; '' $scheme; } # If we receive X-Forwarded-Port, pass it through; otherwise, pass along the # server port the client connected to map $http_x_forwarded_port $proxy_x_forwarded_port { default $http_x_forwarded_port; '' 
$server_port; } # If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any # Connection header that may have been passed to this server map $http_upgrade $proxy_connection { default upgrade; '' close; } # Set appropriate X-Forwarded-Ssl header map $scheme $proxy_x_forwarded_ssl { default off; https on; } server { listen 80 default_server; location / { access_log /var/log/nginx/root.log; root /var/tmp/nginx/npm; try_files $request_uri @fetch; } location @fetch { internal; proxy_pass http://nmregistry:4873$request_uri; proxy_store /var/tmp/nginx/npm$request_uri; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_store_access user:rw group:rw all:r; } } } It works, i can install packages but they are not cached on NGINX machine. Cant see any tgz files in /var/tmp/nginx/npm What am i doing wrong here? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Apr 2 16:35:36 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Sun, 02 Apr 2017 12:35:36 -0400 Subject: Memory issue In-Reply-To: References: Message-ID: <8ba2a49360d927558c886494f23834a7.NginxMailingListEnglish@forum.nginx.org> After upgrade and recompilation, issue is much less important, it increase only by 0.17% for an antire stats processing cycle, but issue remain unsolved.... 
[root at web1 ~]# nginx -V nginx version: nginx/1.11.12 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with LibreSSL 2.5.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.40 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.1 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --with-http_geoip_module=dynamic --add-dynamic-module=ngx_pagespeed-release-1.11.33.4-beta --add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.1/gems/passenger-5.1.2/src/nginx_module --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E #Core Functionality user nobody; worker_processes 8; pid /var/run/nginx.pid; pcre_jit on; error_log /var/log/nginx/error_log; #error_log /home/abackup/debug.log debug; worker_rlimit_nofile 300000; #Load Dynamic Modules include /etc/nginx/modules.d/*.load; events { worker_connections 8192; use epoll; multi_accept on; accept_mutex off; } #Settings For other core modules like for example the stream module include /etc/nginx/conf.d/main_custom_include.conf; #Settings for the http core module include /etc/nginx/conf.d/http_settings_custom.conf; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273304#msg-273304 From nginx-forum at forum.nginx.org Mon Apr 3 05:09:14 2017 From: nginx-forum at forum.nginx.org (t.nishiyori) Date: Mon, 03 Apr 2017 01:09:14 -0400 Subject: slice module got error when contents of upstream was updated Message-ID: <4ef1fc0f6fa76c990ec96744742637fb.NginxMailingListEnglish@forum.nginx.org> Helle, I'm using nginx with slice module as a proxy. One day, I got an error log such like a "etag mismatch in slice response while reading response header from upstream". The cause of this error was occurred when that some parts of response was cached before updating the upstream contents but others was not cached. So it's not be solved until the cache period is expired. Do you have any solutions for this error? I hope the slice module change the status of related caches to disable, when the slice module got this error. Thank you. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273307,273307#msg-273307 From nginx-forum at forum.nginx.org Mon Apr 3 06:15:01 2017 From: nginx-forum at forum.nginx.org (sv_91) Date: Mon, 03 Apr 2017 02:15:01 -0400 Subject: slow keep-alive with generic kernel In-Reply-To: References: Message-ID: <71d26618dda870c8d76560644ca44f98.NginxMailingListEnglish@forum.nginx.org> Here is a demo example https://gist.github.com/magisterRab/6b7132e0b9e88baa4b7e0e69a2ff0aab if in line 120 remove writeRequest(fd), then the speed of the test will fall 2 times Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273276,273308#msg-273308 From reallfqq-nginx at yahoo.fr Mon Apr 3 08:04:46 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 3 Apr 2017 10:04:46 +0200 Subject: Configuring a subnet in an upstream server In-Reply-To: References: Message-ID: What would be the meaning of that? How do you route traffic to 192.168.0.0? Do you really want to send requests to 192.168.255.255? How would you handle requests sent to some servers (but not all) if some are not responsive? I suspect what you want to use is dynamic IP addresses for your backends. Good news: you can use domain names.? --- *B. R.* On Sun, Apr 2, 2017 at 8:42 AM, Dynastic Space wrote: > Is it possible to configure a collection of servers using subnet notation > in the upstream server knob? e.g. server 192.168.0.0/16. > > Thanks, > > Dynastic > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Apr 3 08:12:04 2017 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Mon, 3 Apr 2017 10:12:04 +0200 Subject: Nginx cookie map regex remove + character In-Reply-To: <20170401115728.GD3428@daoine.org> References: <1f42674cd2c11aa7d9eed7cc33fb3887.NginxMailingListEnglish@forum.nginx.org> <20170401115728.GD3428@daoine.org> Message-ID: On Sat, Apr 1, 2017 at 1:57 PM, Francis Daly wrote: > If you want to match "word character or plus", use something like [\w+]. > ?Defining a pattern over a simple assertion is kinda strange?. '[' & ']' are useless here, since you are not matching several symbols. Use (?\w+) and you should be all set. Btw, if you were to use '+', [\w+] and [\w]+ have different meaning: first quantifier applies to '\w' only while latter applies to all the symbols in the pattern. ?--- *B. R.*? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignatenko at estaxi.ru Mon Apr 3 08:17:19 2017 From: ignatenko at estaxi.ru (=?UTF-8?B?0JjQs9C90LDRgtC10L3QutC+INCc0LDQutGB0LjQvA==?=) Date: Mon, 3 Apr 2017 14:17:19 +0600 Subject: Nginx redirect preserving source hostname Message-ID: <73da404a-13ac-8123-f35d-3d52ad1d414b@estaxi.ru> I have an NGINX as reverse proxy with PHP-fpm. Nginx is set up for serving www.somehost.com. I added another host www.anotherhost.com. Now I need to setup redirect in this way: If user type www.anotherhost.com then it redirects to www.somehost.com/someurl, but url in browser bar shouldn't change. If I set up rewrite it works, but it rewrites url in browser too. |if ($host = "www.anotherhost.com") { rewrite ^ http://www.somehost.com/someurl; }| Is it possible to redirect preserving url ? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arut at nginx.com Mon Apr 3 10:36:52 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 3 Apr 2017 13:36:52 +0300 Subject: slice module got error when contents of upstream was updated In-Reply-To: <4ef1fc0f6fa76c990ec96744742637fb.NginxMailingListEnglish@forum.nginx.org> References: <4ef1fc0f6fa76c990ec96744742637fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170403103652.GA97890@Romans-MacBook-Air.local> Hi, On Mon, Apr 03, 2017 at 01:09:14AM -0400, t.nishiyori wrote: > Helle, > > I'm using nginx with slice module as a proxy. > > One day, I got an error log such like a "etag mismatch in slice response > while reading response header from upstream". > > The cause of this error was occurred when that some parts of response was > cached before updating the upstream contents but others was not cached. > So it's not be solved until the cache period is expired. > > Do you have any solutions for this error? > I hope the slice module change the status of related caches to disable, when > the slice module got this error. It's supposed that the proxied response never changes. If it does, the module aborts sending the response and produces the error you have mentioned. There's no simple way of making the output consistent for changeable responses without introducing a significant overhead. -- Roman Arutyunyan From nginx-forum at forum.nginx.org Mon Apr 3 13:21:10 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Mon, 03 Apr 2017 09:21:10 -0400 Subject: How to encrypt proxy cache Message-ID: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> Hi, We are testing using nginx as a file cache in front of our app, but the contents of the proxy cache directory are readable to any body who has access to the machine. Is there a way to encrypt the files stored in the proxy cache folder so that it' not exposed to the naked eye but nginx decrypts it on the fly before serving it to the user. 
Thanks Sachin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273311,273311#msg-273311 From rainer at ultra-secure.de Mon Apr 3 13:42:28 2017 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Mon, 03 Apr 2017 15:42:28 +0200 Subject: How to encrypt proxy cache In-Reply-To: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> References: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <02a961b2830ed7a5da4ef399fcf325d5@ultra-secure.de> Am 2017-04-03 15:21, schrieb sachin.shetty at gmail.com: > Hi, > > We are testing using nginx as a file cache in front of our app, but > the > contents of the proxy cache directory are readable to any body who has > access to the machine. Is there a way to encrypt the files stored in > the > proxy cache folder so that it' not exposed to the naked eye but nginx > decrypts it on the fly before serving it to the user. Run it on a machine that only authorized users have access to. Servers from HP let you encrypt the harddrives through the RAID-controller. From mdounin at mdounin.ru Mon Apr 3 14:04:21 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Apr 2017 17:04:21 +0300 Subject: How to encrypt proxy cache In-Reply-To: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> References: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170403140420.GX13617@mdounin.ru> Hello! On Mon, Apr 03, 2017 at 09:21:10AM -0400, sachin.shetty at gmail.com wrote: > We are testing using nginx as a file cache in front of our app, but the > contents of the proxy cache directory are readable to any body who has > access to the machine. Is there a way to encrypt the files stored in the > proxy cache folder so that it' not exposed to the naked eye but nginx > decrypts it on the fly before serving it to the user. 
Files in the proxy cache folder are protected using normal access control, nginx uses 0600 access mask for all cache files and directories. They aren't expected to be readable by anyone except nginx itself. This is believed to be enough to prevent any unauthorized access on software level. If you also want to protect data from attackers with physical access to the server, consider using disk encryption and/or filesystem-level encryption. It is not likely to solve the problem completely, but may help in some simple cases. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Apr 3 15:50:22 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Mon, 03 Apr 2017 11:50:22 -0400 Subject: How to encrypt proxy cache In-Reply-To: <20170403140420.GX13617@mdounin.ru> References: <20170403140420.GX13617@mdounin.ru> Message-ID: Thanks Maxim for the reply. We have evaluated disk based encryption etc, but that does not prevent sysadmins from viewing user data which is a problem for us. Do you think we could build something using lua and intercept read and wriite call from cache? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273311,273354#msg-273354 From rainer at ultra-secure.de Mon Apr 3 15:57:23 2017 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Mon, 03 Apr 2017 17:57:23 +0200 Subject: How to encrypt proxy cache In-Reply-To: References: <20170403140420.GX13617@mdounin.ru> Message-ID: <3bcdb813a81e34c5aaf7a432b5ae4d4c@ultra-secure.de> Am 2017-04-03 17:50, schrieb sachin.shetty at gmail.com: > Thanks Maxim for the reply. We have evaluated disk based encryption > etc, but > that does not prevent sysadmins from viewing user data which is a > problem > for us. Then you should put your servers someplace where you trust your the sysadmins. Because ultimately, you will have to. They could just replace the lua-script with something that makes an unencrypted copy to some other place, couldn't they? 
From dynasticspace at gmail.com Mon Apr 3 16:09:12 2017 From: dynasticspace at gmail.com (Dynastic Space) Date: Mon, 3 Apr 2017 19:09:12 +0300 Subject: Configuring a subnet in an upstream server In-Reply-To: References: Message-ID: I used a poor example. The functionality I was interested in was adding a range of application servers, all part of the same domain. D On Mon, Apr 3, 2017 at 11:04 AM, B.R. via nginx wrote: > What would be the meaning of that? > > How do you route traffic to 192.168.0.0? Do you really want to send > requests to 192.168.255.255? > How would you handle requests sent to some servers (but not all) if some > are not responsive? > > I suspect what you want to use is dynamic IP addresses for your backends. > Good news: you can use domain names.? > --- > *B. R.* > > On Sun, Apr 2, 2017 at 8:42 AM, Dynastic Space > wrote: > >> Is it possible to configure a collection of servers using subnet notation >> in the upstream server knob? e.g. server 192.168.0.0/16. >> >> Thanks, >> >> Dynastic >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Mon Apr 3 16:14:59 2017 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 03 Apr 2017 17:14:59 +0100 Subject: How to encrypt proxy cache In-Reply-To: References: <20170403140420.GX13617@mdounin.ru> Message-ID: <70400adc98e893e43ced9f283b8bd044@swsystem.co.uk> On 03/04/2017 16:50, sachin.shetty at gmail.com wrote: > Thanks Maxim for the reply. We have evaluated disk based encryption > etc, but > that does not prevent sysadmins from viewing user data which is a > problem > for us. 
> > Do you think we could build something using lua and intercept read and > wriite call from cache? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,273311,273354#msg-273354 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx With root level access I doubt you'll be able to meet your requirements. There's tools like ssldump which can be used to decrypt the network traffic, even implementing something via a module/lua would require the encryption key to be read and available for the sysadmins to use. Personally I'd look at avoiding caching if it's got sensitive data by identifying common request data (paths/cookies etc) and excluding from the cache. Alternatively, as Maxim has said, review and restrict access to the server. Steve. From dewanggaba at xtremenitro.org Mon Apr 3 16:42:45 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 3 Apr 2017 23:42:45 +0700 Subject: How to encrypt proxy cache In-Reply-To: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> References: <828054c9fa2636011d2975d783306112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5ea92758-9d76-7fb8-dc2e-3fa8af5ab302@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! On 04/03/2017 08:21 PM, sachin.shetty at gmail.com wrote: > Hi, > > We are testing using nginx as a file cache in front of our app, > but the contents of the proxy cache directory are readable to any > body who has access to the machine. [..] > Is there a way to encrypt the files stored in the proxy cache > folder so that it' not exposed to the naked eye but nginx decrypts > it on the fly before serving it to the user. I didn't get your expectations and why you doing this. Then if the content can publicly visible over nginx, why the sysadmin or other trying to break the proxy cache directly? 
I think they (your engineer) will call the URL directly over the browser. (eg. http://localhost/this/should/be/secret) isn't it? If your expectations is encrypt/encode the url and should be visible to authorized user only, it's more possible. > > Thanks Sachin > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,273311,273311#msg-273311 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJY4nuBGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcAxNEACatmZrvqDVQ2NGCfZ/T73qcPnTe4daLSkXo8/3VPdhTPDdilzx Q6HWTWxarkbt2WX9/3F/Xpmp244UGUIySdGS2jMrC7dcx7mJD/Ts0G13XZk/X9aS S1bbOUpuCMCkypaENkQAe7nubYiFZfeB3uANfhiOrJOa7DUN7QQJCDZzubBd/FV/ eRz5tfV7z0wI5PkwSn670lq/lTiaMamkjEEvOMFz2O7gA0+jl+OxC2wM9NJRTmkl Z1NamP5CgYUmcXdx7AS0hBh7UFdAV+Nk/qfs1PtyXqssmbm9dZ869Ppixh84k+3i foG1zMdMUHZIFdnLjEtcEruzv4fLuQCgqEmWRA5FGylQ0v3zBjfst6aj06s1R5Dk dwdIsNTB2F4tYsL4fusmdbrFY+/4igsuDtEZpF0KTyKGPquDUxN7MZRN4D5OoBhn U6i6d2qr5iE4Xs/2E1rOtEIEiZYZTHdCXj9xE5+hJjVuYb6V2G6XTLMF8ToYLNlF VBpewPJ3IkLyRbiRw9ssqank/al//BICXa/dklDcUM2N4F57gXdvuFn10u22DYhr 8Dz2NKFM0YftVcNmqJ8yNit5WUQEr1fhX4GWrHhle0GTlz19lrsuc/QsQEqrHVUU oDBahO35+B7ifXlrwY7Rpy3EjmvyYws/422pH3y2fsZBbHB6oJJEGsrpuw== =hU4o -----END PGP SIGNATURE----- From nginx-forum at forum.nginx.org Mon Apr 3 17:56:24 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 03 Apr 2017 13:56:24 -0400 Subject: Nginx upstream server certificate verification In-Reply-To: References: Message-ID: Thank Sergey, for you response. I have one more question. If I have multiple upstream server host names in the upstream server block, then how can I specify the specific upstream server host name to which the request is being proxied, in the proxy_ssl_name directive? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273295,273355#msg-273355 From vukomir at ianculov.ro Mon Apr 3 19:57:55 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Mon, 3 Apr 2017 21:57:55 +0200 (CEST) Subject: fastcgi_pass and http upstream In-Reply-To: <472601813.102.1490887692149.JavaMail.vukomir@DESKTOP-9I7P6HN> References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> <472601813.102.1490887692149.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: <540425890.43.1491249475361.JavaMail.vukomir@DESKTOP-9I7P6HN> cab someone please help me with an example? ----- Original Message ----- From: "Vucomir Ianculov via nginx" To: "Yuriy Medvedev" Cc: "Vucomir Ianculov" , nginx at nginx.org Sent: Thursday, March 30, 2017 6:28:16 PM Subject: Re: fastcgi_pass and http upstream Hi i have search on the documentation but i was not able to find it, can you please give me an example on who it's done? Thanks. ----- Original Message ----- From: "Yuriy Medvedev" To: nginx at nginx.org Cc: "Vucomir Ianculov" Sent: Wednesday, March 22, 2017 9:57:40 PM Subject: Re: fastcgi_pass and http upstream Yes, it is possible. Please read the documentation. 22 ????? 2017 ?. 22:47 ???????????? "Vucomir Ianculov via nginx" < nginx at nginx.org > ???????: corrected question i have a situation where i have 2 Apache backed server and 2 php-fpm backed server. i would like to setup up a upstream to include all 4 back-ends or somehow to balance the requestion on all 4 back-ends back-ends. is it possible? Thanks. Br, Vuko From: "Vucomir Ianculov via nginx" < nginx at nginx.org > To: "nginx" < nginx at nginx.org > Cc: "Vucomir Ianculov" < vukomir at ianculov.ro > Sent: Wednesday, March 22, 2017 9:37:04 PM Subject: fastcgi_pass and http upstream Hi, is is possible to have a upstream that contains point to http and phpfpm? 
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Apr 3 22:32:39 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Apr 2017 23:32:39 +0100 Subject: fastcgi_pass and http upstream In-Reply-To: <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: <20170403223239.GE3428@daoine.org> On Wed, Mar 22, 2017 at 08:47:00PM +0100, Vucomir Ianculov via nginx wrote: Hi there, > i have a situation where i have 2 Apache backed server and 2 php-fpm backed server. i would like to setup up a upstream to include all 4 back-ends or somehow to balance the requestion on all 4 back-ends back-ends. > is it possible? You have two http servers, and two fastcgi servers; and you want nginx to handle a single request by using the appropriate one of proxy_pass and fastcgi_pass, with telling nginx which one to use? I do not think that it can be done. You can share certain requests among the http servers; and you can share certain other requests among the fastcgi servers; but I believe you can't have a single incoming request balanced across different protocols in stock nginx. 
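To illustrate the share-within-each-protocol point, a hedged sketch (the upstream names, addresses, and the \.php$ split are invented for the example):

```nginx
# Two pools, one per protocol. Any single request uses exactly one of them;
# the pools are never mixed for the same request.
upstream apache_pool {
    server 192.0.2.1:80;
    server 192.0.2.2:80;
}
upstream fpm_pool {
    server 192.0.2.3:9000;
    server 192.0.2.4:9000;
}

server {
    listen 80;

    # PHP requests are balanced across the two php-fpm backends...
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass fpm_pool;
    }

    # ...and everything else across the two Apache backends.
    location / {
        proxy_pass http://apache_pool;
    }
}
```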
f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Apr 3 22:37:20 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Apr 2017 23:37:20 +0100 Subject: Nginx cookie map regex remove + character In-Reply-To: References: <1f42674cd2c11aa7d9eed7cc33fb3887.NginxMailingListEnglish@forum.nginx.org> <20170401115728.GD3428@daoine.org> Message-ID: <20170403223720.GF3428@daoine.org> On Mon, Apr 03, 2017 at 10:12:04AM +0200, B.R. via nginx wrote: > On Sat, Apr 1, 2017 at 1:57 PM, Francis Daly wrote: > > > If you want to match "word character or plus", use something like [\w+]. > > > > "Defining a pattern over a simple assertion is kinda strange". '[' & ']' > are useless here, since you are not matching several symbols. I think we may be reading the original question differently. I read it that the current regex matches one or more letter-or-number, and what is wanted is something to match one or more letter-or-number-or-plus. > Use (?\w+) and you should be all set. > > Btw, if you were to use '+', [\w+] and [\w]+ have different meaning: first > quantifier applies to '\w' only while latter applies to all the symbols in > the pattern. The first + is not a quantifier. At least, in the regex engine I use. f -- Francis Daly francis at daoine.org From marc at soda.fm Tue Apr 4 02:04:27 2017 From: marc at soda.fm (Marc Soda) Date: Mon, 3 Apr 2017 22:04:27 -0400 Subject: Binary upgrade with systemd Message-ID: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> Hello, I'm using nginx 1.10.3 custom built on Ubuntu 16.04.
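To make the character-class point from the regex thread above concrete: inside [...], '+' is a literal character, not a quantifier. A quick demonstration (shown here with Python's re module purely for illustration; PCRE, which nginx uses, treats these patterns the same way):

```python
import re

# [\w+]   : ONE character that is either a word character or a literal '+'
# [\w+]+  : one or more of (word character or '+')
# [\w]+   : one or more word characters; here the trailing '+' IS a quantifier
# \w+     : equivalent to [\w]+

assert re.fullmatch(r'[\w+]', '+')                 # '+' is literal inside a class
assert re.fullmatch(r'[\w+]+', 'abc+def')          # letter-or-number-or-plus, repeated
assert re.fullmatch(r'[\w]+', 'abcdef')            # plain word characters
assert re.fullmatch(r'[\w]+', 'abc+def') is None   # '+' is not a word character
```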
I?m also using the recommended systemd service file: [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/run/nginx.pid ExecStartPre=/usr/sbin/nginx -t ExecStart=/usr/sbin/nginx ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target I?m try to do a no downtime upgrade with the USR2 and WINCH signals. Here is my process list before: root 32277 0.0 0.4 1056672 71148 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process and here it is after sending USR2: root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32281 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 \_ nginx: worker process www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process root 32461 5.5 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process Notice how the new master is a child of the old master. If I send a WINCH I get: root 32277 0.0 0.4 1056672 71868 ? 
Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx root 32461 0.2 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process which is not what I?m looking for. Is this a limitation when running with systemd? Thanks, Marc -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Apr 4 02:42:38 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Mon, 03 Apr 2017 22:42:38 -0400 Subject: How to encrypt proxy cache In-Reply-To: <5ea92758-9d76-7fb8-dc2e-3fa8af5ab302@xtremenitro.org> References: <5ea92758-9d76-7fb8-dc2e-3fa8af5ab302@xtremenitro.org> Message-ID: Hi, The information is not publicly available, it is protected by authentication, we have an auth plugin which makes sure auth happens before the request is routed to this cache. 
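Stock nginx has no directive to encrypt proxy_cache contents on disk; a common approach is to point the cache at a directory on an encrypted filesystem, set up outside nginx. A sketch with illustrative paths and names (not from this thread):

```nginx
# http context. /mnt/secure-cache is assumed to be an encrypted mount
# (e.g. dm-crypt/LUKS) prepared by the OS; nginx just reads and writes
# ordinary files there.
proxy_cache_path /mnt/secure-cache levels=1:2
                 keys_zone=protected_cache:10m
                 max_size=10g inactive=60m;

server {
    location /protected/ {
        proxy_cache protected_cache;
        proxy_pass http://backend;
    }
}
```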
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273311,273363#msg-273363 From dritto77 at gmail.com Tue Apr 4 07:31:19 2017 From: dritto77 at gmail.com (Jagannath Naidu) Date: Tue, 4 Apr 2017 13:01:19 +0530 Subject: Nginx map module regex in file Message-ID: Hi, I am trying to redirect some URLs to a different document path. My configuration file is as follows ############ /etc/nginx/conf.d/site.conf ############################ *map_hash_max_size 2048;* *map_hash_bucket_size 128;* *map $uri $new {* * include list_4;* *}* resolver 127.0.0.1; server { listen 81; server_name abcexample.com; access_log /var/log/nginx/abcexample-access.log main; error_log /var/log/nginx/abcexample-error.log; location / { * if ($new) {* * rewrite ^ $new redirect;* * }* proxy_pass http://127.0.0.1:8000; } ################# /etc/nginx/list_4 ############################## /abc/1.html /abc/hello; /max/1.html /max/; ~^/xyz/(?.*)$ /xyz/123; *~^/kkkk/abcdef(?<abc>.*)$ /tttt/bbbbb/jjjj$abc;* *~^/kaka/(?<abc>.*)$ /tata/$abc;* Note: redirects for lines 1, 2 and 3 are working fine, but lines 4 and 5 are not. *root at Hell1:~# curl -I abcexample.com/kkkk/abcef111.html * HTTP/1.1 302 Moved Temporarily Server: nginx Date: Tue, 04 Apr 2017 07:08:47 GMT Content-Type: text/html Content-Length: 154 *Location: http://abcdexample.com/tttt/bbbbb/jjjj$abc * Connection: keep-alive My question is: what changes do I have to make in the list_4 file to get results as follows *Location: http://abcdexample.com/news/bbbbb/jjjj111.html * Thanks in advance - Jagan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Tue Apr 4 08:33:13 2017 From: mailinglist at unix-solution.de (basti) Date: Tue, 4 Apr 2017 10:33:13 +0200 Subject: Allow /.well-known/acme-challenge but deny dot files Message-ID: Hello, at the Moment I use this config # Deny access to all .invisible files. location ~ /\.
{ deny all; access_log off; log_not_found off; } Now I need access to Let's Encrypt acme-challenge and add this to my config before deny all .invisible files, now it looks like ... # Allow Let's Encrypt acme-challenge location /.well-known/acme-challenge { allow all; access_log on; } # Deny access to all .invisible files. location ~ /\. { deny all; access_log off; log_not_found off; } ... I have reload nginx but I have no access to http://example.com/.well-known/acme-challenge Log say "access forbidden by rule." Is there a way to allow /.well-known/ and deny all other? Best Regards, basti From martin at martin-wolfert.de Tue Apr 4 08:36:05 2017 From: martin at martin-wolfert.de (Martin Wolfert) Date: Tue, 4 Apr 2017 10:36:05 +0200 Subject: Allow /.well-known/acme-challenge but deny dot files In-Reply-To: References: Message-ID: <209e8cba-b001-6588-7994-163b17a07be1@martin-wolfert.de> Hi, try this: # Allow access to the letsencrypt ACME Challenge location ~ /\.well-known\/acme-challenge { allow all; } Best, Martin Am 04.04.2017 um 10:33 schrieb basti: > Hello, > > at the Moment I use this config > > # Deny access to all .invisible files. > location ~ /\. { deny all; access_log off; log_not_found off; } > > > Now I need access to Let's Encrypt acme-challenge and add this to my > config before deny all .invisible files, now it looks like > > ... > # Allow Let's Encrypt acme-challenge > location /.well-known/acme-challenge { allow all; access_log on; } > > # Deny access to all .invisible files. > location ~ /\. { deny all; access_log off; log_not_found off; } > ... > > I have reload nginx but I have no access to > http://example.com/.well-known/acme-challenge > > Log say "access forbidden by rule." > Is there a way to allow /.well-known/ and deny all other? 
> > Best Regards, > basti > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lucas at lucasrolff.com Tue Apr 4 08:43:48 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Tue, 4 Apr 2017 08:43:48 +0000 Subject: Binary upgrade with systemd In-Reply-To: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> References: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> Message-ID: <0C53A9B0-9050-4128-9B0E-094920A12D75@lucasrolff.com> Hello Marc, For which PID do you send the WINCH signal? From: nginx > on behalf of Marc Soda > Reply-To: "nginx at nginx.org" > Date: Tuesday, 4 April 2017 at 04.04 To: "nginx at nginx.org" > Subject: Binary upgrade with systemd Hello, I?m using nginx 1.10.3 custom built on Ubuntu 16.04. I?m also using the recommended systemd service file: [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/run/nginx.pid ExecStartPre=/usr/sbin/nginx -t ExecStart=/usr/sbin/nginx ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target I?m try to do a no downtime upgrade with the USR2 and WINCH signals. Here is my process list before: root 32277 0.0 0.4 1056672 71148 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32289 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 \_ nginx: worker process www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process and here it is after sending USR2: root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process root 32461 5.5 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32470 0.0 0.4 1057928 73052 ? 
S< 22:01 0:00 \_ nginx: worker process www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process Notice how the new master is a child of the old master. If I send a WINCH I get: root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx root 32461 0.2 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process which is not what I?m looking for. 
Is this a limitation when running with systemd? Thanks, Marc -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Tue Apr 4 08:45:28 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 4 Apr 2017 14:15:28 +0530 Subject: Allow /.well-known/acme-challenge but deny dot files In-Reply-To: <209e8cba-b001-6588-7994-163b17a07be1@martin-wolfert.de> References: <209e8cba-b001-6588-7994-163b17a07be1@martin-wolfert.de> Message-ID: You can put it above the other deny location # Allow "Well-Known URIs" as per RFC 5785 location ~* ^/.well-known/ { allow all; } On Tue, Apr 4, 2017 at 2:06 PM, Martin Wolfert wrote: > Hi, > > try this: > > # Allow access to the letsencrypt ACME Challenge > location ~ /\.well-known\/acme-challenge { > allow all; > } > > Best, > Martin > > > > Am 04.04.2017 um 10:33 schrieb basti: > >> Hello, >> >> at the Moment I use this config >> >> # Deny access to all .invisible files. >> location ~ /\. { deny all; access_log off; log_not_found off; } >> >> >> Now I need access to Let's Encrypt acme-challenge and add this to my >> config before deny all .invisible files, now it looks like >> >> ... >> # Allow Let's Encrypt acme-challenge >> location /.well-known/acme-challenge { allow all; access_log on; } >> >> # Deny access to all .invisible files. >> location ~ /\. { deny all; access_log off; log_not_found off; } >> ... >> >> I have reload nginx but I have no access to >> http://example.com/.well-known/acme-challenge >> >> Log say "access forbidden by rule." >> Is there a way to allow /.well-known/ and deny all other? 
>> >> Best Regards, >> basti >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Tue Apr 4 08:53:17 2017 From: me at myconan.net (nanaya) Date: Tue, 04 Apr 2017 17:53:17 +0900 Subject: Allow /.well-known/acme-challenge but deny dot files Message-ID: <1491295997.255390.933478024.36571118@webmail.messagingengine.com> Hi, On Tue, Apr 4, 2017, at 17:45, Anoop Alias wrote: > You can put it above the other deny location > # Allow "Well-Known URIs" as per RFC 5785 > location ~* ^/.well-known/ { > allow all; > } > Or use "^~" because it's of higher precedence compared to "~". > If the longest matching prefix location has the ?^~? modifier then regular expressions are not checked. http://nginx.org/r/location location ^~ /.well-known/ { } From shahzaib.cb at gmail.com Tue Apr 4 11:24:48 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Tue, 4 Apr 2017 16:24:48 +0500 Subject: No referrer header on leacher's site !! Message-ID: Hi, We came across a website who is playing our video links remotely. Since we've hotlinking protection enabled based on referrer headers so i checked the request header by playing that video & found out that *referrer header was missing* in the browser's requests header tab. Then to generate same issue on our end, i statically added the video link in player on different domain & tried to play that video remotely which was successfully forbidden & browser *had referrer header *as well. Please have a note that he didn't embedded the video from our website, he's putting direct mp4 links & they are being played without any referrer header in the requests. 
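For context, referer-based hotlink protection of the kind described is usually built on valid_referers. A sketch of the usual shape of such a configuration (illustrative names, not the poster's actual config):

```nginx
location ~* \.mp4$ {
    # "none" allows requests that carry no Referer header at all -- which
    # is exactly how the referer-less requests described above get through.
    # Dropping "none" blocks them, but also blocks direct visitors and
    # clients that legitimately omit the header.
    valid_referers none blocked server_names;

    if ($invalid_referer) {
        return 403;
    }

    # ... normal mp4 serving configuration ...
}
```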
Thanks for your help in advance !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Tue Apr 4 11:32:06 2017 From: me at myconan.net (nanaya) Date: Tue, 04 Apr 2017 20:32:06 +0900 Subject: No referrer header on leacher's site !! In-Reply-To: References: Message-ID: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> Hi, On Tue, Apr 4, 2017, at 20:24, shahzaib mushtaq wrote: > Hi, > > We came across a website who is playing our video links remotely. Since > we've hotlinking protection enabled based on referrer headers so i > checked > the request header by playing that video & found out that *referrer > header > was missing* in the browser's requests header tab. > If your site isn't https but his site is, some browsers by default don't send referrer header. There are also various other referrer policies with varying level of support: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy http://caniuse.com/#search=referrer%20policy From shahzaib.cb at gmail.com Tue Apr 4 11:39:23 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Tue, 4 Apr 2017 16:39:23 +0500 Subject: No referrer header on leacher's site !! In-Reply-To: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> References: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> Message-ID: Hi, Thanks for quick response. Well its reverse, he's putting our HTTPS video link on his HTTP website. Could that create issue as well? If yes, what's the fix of it. Again thanks for your help. On Tue, Apr 4, 2017 at 4:32 PM, nanaya wrote: > Hi, > > On Tue, Apr 4, 2017, at 20:24, shahzaib mushtaq wrote: > > Hi, > > > > We came across a website who is playing our video links remotely. 
Since > > we've hotlinking protection enabled based on referrer headers so i > > checked > > the request header by playing that video & found out that *referrer > > header > > was missing* in the browser's requests header tab. > > > > If your site isn't https but his site is, some browsers by default don't > send referrer header. There are also various other referrer policies > with varying level of support: > > https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy > > http://caniuse.com/#search=referrer%20policy > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc at soda.fm Tue Apr 4 11:41:00 2017 From: marc at soda.fm (Marc Soda) Date: Tue, 4 Apr 2017 07:41:00 -0400 Subject: Binary upgrade with systemd In-Reply-To: <0C53A9B0-9050-4128-9B0E-094920A12D75@lucasrolff.com> References: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> <0C53A9B0-9050-4128-9B0E-094920A12D75@lucasrolff.com> Message-ID: <878FBB66-318B-458B-9792-90C2767518C6@soda.fm> I sent WINCH to the old master. In this case 32277. After sending WINCH, I can send QUIT to the old master and it exits. Everything looks fine at that point. But it seems a little odd to have to do this. > On Apr 4, 2017, at 4:43 AM, Lucas Rolff wrote: > > Hello Marc, > > For which PID do you send the WINCH signal? > > > From: nginx > on behalf of Marc Soda > > Reply-To: "nginx at nginx.org " > > Date: Tuesday, 4 April 2017 at 04.04 > To: "nginx at nginx.org " > > Subject: Binary upgrade with systemd > > Hello, > > I?m using nginx 1.10.3 custom built on Ubuntu 16.04. 
I?m also using the recommended systemd service file: > > [Unit] > Description=The NGINX HTTP and reverse proxy server > After=syslog.target network.target remote-fs.target nss-lookup.target > > [Service] > Type=forking > PIDFile=/run/nginx.pid > ExecStartPre=/usr/sbin/nginx -t > ExecStart=/usr/sbin/nginx > ExecReload=/bin/kill -s HUP $MAINPID > ExecStop=/bin/kill -s QUIT $MAINPID > PrivateTmp=true > > [Install] > WantedBy=multi-user.target > > I?m try to do a no downtime upgrade with the USR2 and WINCH signals. Here is my process list before: > > root 32277 0.0 0.4 1056672 71148 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx > www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process > > and here it is after sending USR2: > > root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx > www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32280 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 \_ nginx: worker process > www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process > www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process > root 32461 5.5 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx > www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process > www 32478 0.0 0.4 1056676 72176 ? 
S 22:01 0:00 \_ nginx: cache loader process > > Notice how the new master is a child of the old master. If I send a WINCH I get: > > root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx > root 32461 0.2 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx > www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process > www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process > www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process > > which is not what I?m looking for. Is this a limitation when running with systemd? > > Thanks, > Marc > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucas at slcoding.com Tue Apr 4 11:45:46 2017 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 04 Apr 2017 13:45:46 +0200 Subject: Binary upgrade with systemd In-Reply-To: <878FBB66-318B-458B-9792-90C2767518C6@soda.fm> References: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> <0C53A9B0-9050-4128-9B0E-094920A12D75@lucasrolff.com> <878FBB66-318B-458B-9792-90C2767518C6@soda.fm> Message-ID: <58E3876A.5030801@slcoding.com> According to the documentation: http://nginx.org/en/docs/control.html#upgrade You'd have to send the QUIT signal to finish off upgrading (replacing) the binary during runtime. Marc Soda wrote: > I sent WINCH to the old master. In this case 32277. > > After sending WINCH, I can send QUIT to the old master and it exits. > Everything looks fine at that point. But it seems a little odd to > have to do this. > >> On Apr 4, 2017, at 4:43 AM, Lucas Rolff > > wrote: >> >> Hello Marc, >> >> For which PID do you send the WINCH signal? >> >> >> From: nginx > > on behalf of Marc Soda >> > >> Reply-To: "nginx at nginx.org " > > >> Date: Tuesday, 4 April 2017 at 04.04 >> To: "nginx at nginx.org " > > >> Subject: Binary upgrade with systemd >> >> Hello, >> >> I?m using nginx 1.10.3 custom built on Ubuntu 16.04. I?m also >> using the recommended systemd service file: >> >> [Unit] >> Description=The NGINX HTTP and reverse proxy server >> After=syslog.target network.target remote-fs.target nss-lookup.target >> >> [Service] >> Type=forking >> PIDFile=/run/nginx.pid >> ExecStartPre=/usr/sbin/nginx -t >> ExecStart=/usr/sbin/nginx >> ExecReload=/bin/kill -s HUP $MAINPID >> ExecStop=/bin/kill -s QUIT $MAINPID >> PrivateTmp=true >> >> [Install] >> WantedBy=multi-user.target >> >> I?m try to do a no downtime upgrade with the USR2 and WINCH >> signals. Here is my process list before: >> >> root 32277 0.0 0.4 1056672 71148 ? Ss 21:51 0:00 >> nginx: master process /usr/local/nginx/sbin/nginx >> www 32278 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 >> \_ nginx: worker process >> www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 >> \_ nginx: cache manager process >> >> and here it is after sending USR2: >> >> root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 >> nginx: master process /usr/local/nginx/sbin/nginx >> www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32292 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 >> \_ nginx: worker process >> www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 >> \_ nginx: worker process >> www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 >> \_ nginx: cache manager process >> root 32461 5.5 0.5 1056676 82316 ? S 22:01 0:00 >> \_ nginx: master process /usr/local/nginx/sbin/nginx >> www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 >> \_ nginx: cache manager process >> www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 >> \_ nginx: cache loader process >> >> Notice how the new master is a child of the old master. If I >> send a WINCH I get: >> >> root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 >> nginx: master process /usr/local/nginx/sbin/nginx >> root 32461 0.2 0.5 1056676 82316 ? S 22:01 0:00 >> \_ nginx: master process /usr/local/nginx/sbin/nginx >> www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32467 0.0 0.4 1057928 73052 ? 
S< 22:01 0:00 >> \_ nginx: worker process >> www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 >> \_ nginx: worker process >> www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 >> \_ nginx: cache manager process >> www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 >> \_ nginx: cache loader process >> >> which is not what I?m looking for. Is this a limitation when >> running with systemd? >> >> Thanks, >> Marc >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc at soda.fm Tue Apr 4 11:46:36 2017 From: marc at soda.fm (Marc Soda) Date: Tue, 4 Apr 2017 07:46:36 -0400 Subject: Binary upgrade with systemd In-Reply-To: <878FBB66-318B-458B-9792-90C2767518C6@soda.fm> References: <13581B52-CB54-41F0-B01A-0B160C024E00@soda.fm> <0C53A9B0-9050-4128-9B0E-094920A12D75@lucasrolff.com> <878FBB66-318B-458B-9792-90C2767518C6@soda.fm> Message-ID: It seems that it?s working as designed. I thought the old master would exit automatically. But it sticks around in case you want to fail back. Thanks and sorry for the noise. 
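For reference, the sequence Marc converged on matches the on-the-fly upgrade procedure documented at http://nginx.org/en/docs/control.html#upgrade. A sketch of the signal order (paths are the defaults from the systemd unit quoted in this thread; adjust to your setup):

```shell
# Start a new master with the new binary. nginx renames the old pid file
# to nginx.pid.oldbin and the new master writes its pid to nginx.pid.
kill -USR2 "$(cat /run/nginx.pid)"

# Gracefully shut down the old workers; the old master keeps running.
kill -WINCH "$(cat /run/nginx.pid.oldbin)"

# Once the new binary is confirmed good, let the old master exit.
kill -QUIT "$(cat /run/nginx.pid.oldbin)"

# To roll back instead: send HUP to the old master to respawn its
# workers, then QUIT to the new master.
```

As Marc noted, the old master is kept around deliberately so that a failed upgrade can be reverted without dropping connections.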
> On Apr 4, 2017, at 7:41 AM, Marc Soda wrote: > > I sent WINCH to the old master. In this case 32277. > > After sending WINCH, I can send QUIT to the old master and it exits. Everything looks fine at that point. But it seems a little odd to have to do this. > >> On Apr 4, 2017, at 4:43 AM, Lucas Rolff > wrote: >> >> Hello Marc, >> >> For which PID do you send the WINCH signal? >> >> >> From: nginx > on behalf of Marc Soda > >> Reply-To: "nginx at nginx.org " > >> Date: Tuesday, 4 April 2017 at 04.04 >> To: "nginx at nginx.org " > >> Subject: Binary upgrade with systemd >> >> Hello, >> >> I?m using nginx 1.10.3 custom built on Ubuntu 16.04. I?m also using the recommended systemd service file: >> >> [Unit] >> Description=The NGINX HTTP and reverse proxy server >> After=syslog.target network.target remote-fs.target nss-lookup.target >> >> [Service] >> Type=forking >> PIDFile=/run/nginx.pid >> ExecStartPre=/usr/sbin/nginx -t >> ExecStart=/usr/sbin/nginx >> ExecReload=/bin/kill -s HUP $MAINPID >> ExecStop=/bin/kill -s QUIT $MAINPID >> PrivateTmp=true >> >> [Install] >> WantedBy=multi-user.target >> >> I?m try to do a no downtime upgrade with the USR2 and WINCH signals. Here is my process list before: >> >> root 32277 0.0 0.4 1056672 71148 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx >> www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32290 0.0 0.4 1057924 73152 ? 
S< 21:51 0:00 \_ nginx: worker process >> www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process >> >> and here it is after sending USR2: >> >> root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx >> www 32278 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32279 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32280 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32281 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32282 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32283 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32288 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32289 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32290 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32291 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32292 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32293 0.0 0.4 1057924 73152 ? S< 21:51 0:00 \_ nginx: worker process >> www 32294 0.0 0.4 1056672 72212 ? S 21:51 0:00 \_ nginx: cache manager process >> root 32461 5.5 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx >> www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32469 0.0 0.4 1057928 73052 ? 
S< 22:01 0:00 \_ nginx: worker process >> www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32477 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache manager process >> www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process >> >> Notice how the new master is a child of the old master. If I send a WINCH I get: >> >> root 32277 0.0 0.4 1056672 71868 ? Ss 21:51 0:00 nginx: master process /usr/local/nginx/sbin/nginx >> root 32461 0.2 0.5 1056676 82316 ? S 22:01 0:00 \_ nginx: master process /usr/local/nginx/sbin/nginx >> www 32465 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32466 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32467 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32468 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32469 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32470 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32471 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32472 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32473 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32474 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32475 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32476 0.0 0.4 1057928 73052 ? S< 22:01 0:00 \_ nginx: worker process >> www 32477 0.0 0.4 1056676 72176 ? 
S 22:01 0:00 \_ nginx: cache manager process >> www 32478 0.0 0.4 1056676 72176 ? S 22:01 0:00 \_ nginx: cache loader process >> >> which is not what I?m looking for. Is this a limitation when running with systemd? >> >> Thanks, >> Marc >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 4 13:23:50 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Apr 2017 16:23:50 +0300 Subject: Nginx map module regex in file In-Reply-To: References: Message-ID: <20170404132349.GB13617@mdounin.ru> Hello! On Tue, Apr 04, 2017 at 01:01:19PM +0530, Jagannath Naidu wrote: > I am trying to redirect some urls to a different document path. My > configuration file is as follows > > > ############ /etc/nginx/conf.d/site.conf ############################ > *map_hash_max_size 2048;* > *map_hash_bucket_size 128;* > *map $uri $new {* > * include list_4;* > *}* > resolver 127.0.0.1; > server { > listen 81; > server_name abcexample.com; > access_log /var/log/nginx/abcexample-access.log main; > error_log /var/log/nginx/abcexample-error.log; > location / { > * if ($new) {* > * rewrite ^ $new redirect;* > * }* > proxy_pass http://127.0.0.1:8000; > } > > ################# /etc/nginx/list_4 ############################## > /abc/1.html /abc/hello; > /max/1.html /max/; > ~^/xyz/(?.*)$ /xyz/123; > *~^/kkkk/abcdef(?.*)$ /tttt/bbbbb/jjjj$abc;* > > *~^/kaka/(?.*)$ /tata/$abc;* > > > Note: > line 1,2 and 3 redirects are working fine. > But line 4 and 5 are not working. 
> > > *root at Hell1:~# curl -I abcexample.com/kkkk/abcef111.html > * > HTTP/1.1 302 Moved Temporarily > Server: nginx > Date: Tue, 04 Apr 2017 07:08:47 GMT > Content-Type: text/html > Content-Length: 154 > *Location: http://abcdexample.com/tttt/bbbbb/jjjj$abc > * > Connection: keep-alive > > My Question is: > What changes do I have to do in list_4 file to get results as follows > *Location: http://abcdexample.com/news/bbbbb/jjjj111.html > * Check your nginx version. You are trying to use a combination of text and variables as a resulting value of the map. This is supported only in nginx 1.11.0+, see http://nginx.org/r/map. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Apr 4 15:15:29 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Apr 2017 18:15:29 +0300 Subject: nginx-1.11.13 Message-ID: <20170404151529.GH13617@mdounin.ru> Changes with nginx 1.11.13 04 Apr 2017 *) Feature: the "http_429" parameter of the "proxy_next_upstream", "fastcgi_next_upstream", "scgi_next_upstream", and "uwsgi_next_upstream" directives. Thanks to Piotr Sikora. *) Bugfix: in memory allocation error handling. *) Bugfix: requests might hang when using the "sendfile" and "timer_resolution" directives on Linux. *) Bugfix: requests might hang when using the "sendfile" and "aio_write" directives with subrequests. *) Bugfix: in the ngx_http_v2_module. Thanks to Piotr Sikora. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2. *) Bugfix: requests might hang when using the "limit_rate", "sendfile_max_chunk", "limit_req" directives, or the $r->sleep() embedded perl method with subrequests. *) Bugfix: in the ngx_http_slice_module. 
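A side note on the list_4 entries quoted in the map thread above: the named captures were almost certainly mangled by the mail archiver, since `(?.*)` is not valid regex syntax while the values reference `$abc`. The file presumably looked like the sketch below (paths as in the original; a value combining text and variables such as `/tttt/bbbbb/jjjj$abc` needs nginx 1.11.0+, as Maxim points out):

```nginx
# /etc/nginx/list_4 -- entries for "map $uri $new { include list_4; }".
# Named captures use the (?<name>...) syntax.
/abc/1.html                  /abc/hello;
/max/1.html                  /max/;
~^/xyz/(?<abc>.*)$           /xyz/123;
~^/kkkk/abcdef(?<abc>.*)$    /tttt/bbbbb/jjjj$abc;
~^/kaka/(?<abc>.*)$          /tata/$abc;
```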
-- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Apr 4 15:47:18 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 4 Apr 2017 11:47:18 -0400 Subject: [nginx-announce] nginx-1.11.13 In-Reply-To: <20170404151537.GI13617@mdounin.ru> References: <20170404151537.GI13617@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.13 for Windows https://kevinworthington.com/nginxwin11113 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Apr 4, 2017 at 11:15 AM, Maxim Dounin wrote: > Changes with nginx 1.11.13 04 Apr > 2017 > > *) Feature: the "http_429" parameter of the "proxy_next_upstream", > "fastcgi_next_upstream", "scgi_next_upstream", and > "uwsgi_next_upstream" directives. > Thanks to Piotr Sikora. > > *) Bugfix: in memory allocation error handling. > > *) Bugfix: requests might hang when using the "sendfile" and > "timer_resolution" directives on Linux. > > *) Bugfix: requests might hang when using the "sendfile" and > "aio_write" > directives with subrequests. > > *) Bugfix: in the ngx_http_v2_module. > Thanks to Piotr Sikora. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2. > > *) Bugfix: requests might hang when using the "limit_rate", > "sendfile_max_chunk", "limit_req" directives, or the $r->sleep() > embedded perl method with subrequests. > > *) Bugfix: in the ngx_http_slice_module. 
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgorlo at gmail.com Tue Apr 4 17:22:58 2017 From: kgorlo at gmail.com (Kamil Gorlo) Date: Tue, 04 Apr 2017 17:22:58 +0000 Subject: Limit number of connections to server Message-ID: Hi, is there a way to limit total number of open connections per listening port in Nginx? I know that there is limit_conn module but as far as I understand it only works on "request" layer, which means connections are counted only when request headers have been already read. I have problem when number of SSL connections to my server is very high (CPU is 100% and server becomes unresponsive), and I would like to "cut" new connections after some defined threshold is exceeded. It would possibly save some CPU cycles needed to handle SSL handshake, etc. Is it possible? Regards, Kamil -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Tue Apr 4 19:15:00 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 4 Apr 2017 21:15:00 +0200 Subject: No referrer header on leacher's site !! In-Reply-To: References: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> Message-ID: With the controls sites have over the referrer header, it's not very effective as an access control mechanism. You can use something like http://nginx.org/en/docs/http/ngx_http_secure_link_module.html instead. On Tue, Apr 4, 2017 at 1:39 PM, shahzaib mushtaq wrote: > Hi, > > Thanks for quick response. Well its reverse, he's putting our HTTPS video > link on his HTTP website. Could that create issue as well? If yes, what's > the fix of it. > > Again thanks for your help. 
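For readers following the hotlinking thread: a typical referrer-based check in nginx looks like the sketch below (the location and hostname are placeholders, not from the original posts). The `none` parameter explicitly allows requests carrying no Referer header at all, which is exactly what an HTTPS-to-HTTP link produces in some browsers, so whether to include it is the trade-off being discussed here.

```nginx
# Hotlink-protection sketch; mydomain.example is a placeholder.
location /videos/ {
    valid_referers none blocked server_names *.mydomain.example;
    if ($invalid_referer) {
        return 403;
    }
    # Dropping "none" above would block empty-Referer leechers, but it
    # would also block direct visits and privacy-stripped referrers.
}
```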
> > On Tue, Apr 4, 2017 at 4:32 PM, nanaya wrote: >> >> Hi, >> >> On Tue, Apr 4, 2017, at 20:24, shahzaib mushtaq wrote: >> > Hi, >> > >> > We came across a website who is playing our video links remotely. Since >> > we've hotlinking protection enabled based on referrer headers so i >> > checked >> > the request header by playing that video & found out that *referrer >> > header >> > was missing* in the browser's requests header tab. >> > >> >> If your site isn't https but his site is, some browsers by default don't >> send referrer header. There are also various other referrer policies >> with varying level of support: >> >> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy >> >> http://caniuse.com/#search=referrer%20policy >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Apr 4 20:54:08 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Apr 2017 21:54:08 +0100 Subject: No referrer header on leacher's site !! In-Reply-To: References: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> Message-ID: <20170404205408.GG3428@daoine.org> On Tue, Apr 04, 2017 at 04:39:23PM +0500, shahzaib mushtaq wrote: Hi there, > Thanks for quick response. Well its reverse, he's putting our HTTPS video > link on his HTTP website. Could that create issue as well? If yes, what's > the fix of it. nginx does not know (or care) what the linking site does. All it can see is the request made to it. The browser entirely controls what request headers the browser sends. If you want to deny all requests that have no Referer header, you can do that. 
If you want to deny only some requests that have no Referer header, you will need to tell nginx which requests to deny and which requests to allow. But before you can do that, you will have to know how to identify the requests in one of the sets. f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue Apr 4 20:58:16 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 04 Apr 2017 23:58:16 +0300 Subject: Limit number of connections to server In-Reply-To: References: Message-ID: <3867014.Xybq91EcAu@vbart-workstation> On Tuesday 04 April 2017 17:22:58 Kamil Gorlo wrote: > Hi, > > is there a way to limit total number of open connections per listening port > in Nginx? I know that there is limit_conn module but as far as I understand > it only works on "request" layer, which means connections are counted only > when request headers have been already read. > > I have problem when number of SSL connections to my server is very high > (CPU is 100% and server becomes unresponsive), and I would like to "cut" > new connections after some defined threshold is exceeded. It would possibly > save some CPU cycles needed to handle SSL handshake, etc. > > Is it possible? > You should use system firewall. Most of *nix systems have one out of the box. wbr, Valentin V. Bartenev From francis at daoine.org Tue Apr 4 21:10:11 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Apr 2017 22:10:11 +0100 Subject: Nginx redirect preserving source hostname In-Reply-To: <73da404a-13ac-8123-f35d-3d52ad1d414b@estaxi.ru> References: <73da404a-13ac-8123-f35d-3d52ad1d414b@estaxi.ru> Message-ID: <20170404211011.GH3428@daoine.org> On Mon, Apr 03, 2017 at 02:17:19PM +0600, ????????? ?????? wrote: Hi there, > I have an NGINX as reverse proxy with PHP-fpm. Nginx is set up for > serving www.somehost.com. I added another host www.anotherhost.com. 
> Now I need to setup redirect in this way: If user type > www.anotherhost.com then it redirects to www.somehost.com/someurl, > but url in browser bar shouldn't change. > Is it possible to redirect preserving url ? A "redirect" is an "external rewrite", which asks the browser to make a new request, and therefore change the url the browser shows. If you want the browser not to make a new request, you need to handle the request internally, within nginx, possibly by means of a proxy_pass (if the desired resource is only available in another server{}). Good luck with it, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Tue Apr 4 22:12:57 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 04 Apr 2017 15:12:57 -0700 Subject: Limit number of connections to server In-Reply-To: <3867014.Xybq91EcAu@vbart-workstation> References: <3867014.Xybq91EcAu@vbart-workstation> Message-ID: <20170404221257.5709911.91575.25470@lazygranch.com> You would probably want to also limit the number of connections per IP address, else one IP could lock up the entire site. ? Original Message ? From: Valentin V. Bartenev Sent: Tuesday, April 4, 2017 1:58 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Limit number of connections to server On Tuesday 04 April 2017 17:22:58 Kamil Gorlo wrote: > Hi, > > is there a way to limit total number of open connections per listening port > in Nginx? I know that there is limit_conn module but as far as I understand > it only works on "request" layer, which means connections are counted only > when request headers have been already read. > > I have problem when number of SSL connections to my server is very high > (CPU is 100% and server becomes unresponsive), and I would like to "cut" > new connections after some defined threshold is exceeded. It would possibly > save some CPU cycles needed to handle SSL handshake, etc. > > Is it possible? > You should use system firewall. 
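The firewall approach suggested for the SSL-connection-flood question can also enforce a per-IP cap before nginx ever sees the handshake. A Linux iptables sketch (the port and threshold are arbitrary examples):

```shell
# Reject new TCP connections to port 443 from any single address once
# that address already holds more than 20 connections.
iptables -A INPUT -p tcp --syn --dport 443 \
    -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```

Rejecting at the packet filter avoids spending CPU on the TLS handshake, which was the original concern.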
Most of *nix systems have one out of the box. wbr, Valentin V. Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From pchychi at gmail.com Wed Apr 5 01:55:19 2017 From: pchychi at gmail.com (Payam Chychi) Date: Wed, 05 Apr 2017 01:55:19 +0000 Subject: Limit number of connections to server In-Reply-To: <20170404221257.5709911.91575.25470@lazygranch.com> References: <3867014.Xybq91EcAu@vbart-workstation> <20170404221257.5709911.91575.25470@lazygranch.com> Message-ID: You can also use ulimit but simple iptable/ipfw/pf will do the job On Tue, Apr 4, 2017 at 3:13 PM wrote: > You would probably want to also limit the number of connections per IP > address, else one IP could lock up the entire site. > > > Original Message > From: Valentin V. Bartenev > Sent: Tuesday, April 4, 2017 1:58 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: Limit number of connections to server > > On Tuesday 04 April 2017 17:22:58 Kamil Gorlo wrote: > > Hi, > > > > is there a way to limit total number of open connections per listening > port > > in Nginx? I know that there is limit_conn module but as far as I > understand > > it only works on "request" layer, which means connections are counted > only > > when request headers have been already read. > > > > I have problem when number of SSL connections to my server is very high > > (CPU is 100% and server becomes unresponsive), and I would like to "cut" > > new connections after some defined threshold is exceeded. It would > possibly > > save some CPU cycles needed to handle SSL handshake, etc. > > > > Is it possible? > > > > You should use system firewall. Most of *nix systems have one out of the > box. > > wbr, Valentin V. 
Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
Payam Tarverdyan Chychi
Network Security Specialist / Network Engineer
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed Apr 5 05:02:15 2017
From: nginx-forum at forum.nginx.org (JohnCarne)
Date: Wed, 05 Apr 2017 01:02:15 -0400
Subject: Memory issue
In-Reply-To:
References:
Message-ID: <89360c715d40fb102ea16b8cd6b18340.NginxMailingListEnglish@forum.nginx.org>

Upgraded to the latest nginx version: memory still seems to increase.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273411#msg-273411

From nginx-forum at forum.nginx.org Wed Apr 5 06:27:04 2017
From: nginx-forum at forum.nginx.org (t.nishiyori)
Date: Wed, 05 Apr 2017 02:27:04 -0400
Subject: slice module got error when contents of upstream was updated
In-Reply-To: <20170403103652.GA97890@Romans-MacBook-Air.local>
References: <20170403103652.GA97890@Romans-MacBook-Air.local>
Message-ID: <214bd44475ec89f9d421bd37b1e74518.NginxMailingListEnglish@forum.nginx.org>

Hi Roman,

The slice module is awesome for caching my large content effectively.
But my contents are updated at arbitrary times. Given your answer, I
have decided to implement some external tooling for this error, such as
monitoring the error logs and purging the unnecessary cache entries.

Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273307,273412#msg-273412

From sca at andreasschulze.de Wed Apr 5 09:16:27 2017
From: sca at andreasschulze.de (A.
Schulze) Date: Wed, 05 Apr 2017 11:16:27 +0200 Subject: minor manpage fix Message-ID: <20170405111627.Horde.tDdUpZNOQyul9urGq94w0dw@andreasschulze.de> hello by buildsystem warn about a minor glitch in nginx.8 patch attached Andreas -------------- next part -------------- A non-text attachment was scrubbed... Name: hyphen-used-as-minus-sign.patch Type: text/x-diff Size: 662 bytes Desc: not available URL: From shahzaib.cb at gmail.com Wed Apr 5 11:13:58 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 5 Apr 2017 16:13:58 +0500 Subject: Content mismatch random error !! Message-ID: Hi, Sometimes we encounter Content mismatch error in browser on website & refreshing the page fix this issue. The full error is : http://prntscr.com/esp3jr Here is nginx.conf file : https://pastebin.com/VF5L1xXy Some people on different forums saying its due to proxy cache but we're not using any kind of cache except for File-Descriptors cache config in nginx. The setup is built on pure NGINX+Php-fpm. Thanks for the help in advance !! Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Apr 5 11:32:01 2017 From: nginx-forum at forum.nginx.org (IgorR) Date: Wed, 05 Apr 2017 07:32:01 -0400 Subject: proxy_cache_background_update after cache expiry Message-ID: Hello, I'm trying to configure nginx to use proxy_cache_background_update but it seems like after expiry it still waits for the full roundtrip to the backend, returning a MISS in X-Cache-Status. What am I MISSing? I'm using nginx 1.11.12 under ubuntu 14.04 running inside docker, but hopefully this is too much detail. 
location ~ ^/?(\d+/[^/]+)?/?$
{
  expires 20s;

  proxy_cache app_cache;
  proxy_cache_lock on;

  proxy_cache_bypass $http_upgrade;

  proxy_pass http://172.17.0.2:5000;
  proxy_http_version 1.1;
  error_log /nginxerror.log debug;

  add_header X-Cache-Status $upstream_cache_status;

  proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
  proxy_cache_background_update on;

  break;
}

NB: This is a duplicate of my SO question; I was kindly advised on the
nginx IRC to repost here for better chances. The original question is here:
http://stackoverflow.com/questions/43223993/nginx-proxy-cache-background-update-after-cache-expiry

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273417,273417#msg-273417

From ru at nginx.com Wed Apr 5 12:14:12 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 5 Apr 2017 15:14:12 +0300
Subject: minor manpage fix
In-Reply-To: <20170405111627.Horde.tDdUpZNOQyul9urGq94w0dw@andreasschulze.de>
References: <20170405111627.Horde.tDdUpZNOQyul9urGq94w0dw@andreasschulze.de>
Message-ID: <20170405121412.GA15734@lo0.su>

On Wed, Apr 05, 2017 at 11:16:27AM +0200, A. Schulze wrote:
> hello
>
> by buildsystem warn about a minor glitch in nginx.8
> patch attached

"\-" is a "minus sign in the current font", while "-" is a dash
character, which is the correct character for the utility arguments.
Applying this patch and generating PostScript output would generate an
example output that cannot be pasted back to the shell.

> Description: fix minor manpage errors
> Author: A.
Schulze > --- > This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ > Index: nginx-1.11.13/man/nginx.8 > =================================================================== > --- nginx-1.11.13.orig/man/nginx.8 > +++ nginx-1.11.13/man/nginx.8 > @@ -180,8 +180,8 @@ Test configuration file > .Pa ~/mynginx.conf > with global directives for PID and quantity of worker processes: > .Bd -literal -offset indent > -nginx -t -c ~/mynginx.conf \e > - -g "pid /var/run/mynginx.pid; worker_processes 2;" > +nginx \-t \-c ~/mynginx.conf \e > + \-g "pid /var/run/mynginx.pid; worker_processes 2;" > .Ed > .Sh SEE ALSO > .\"Xr nginx.conf 5 From hemelaar at desikkel.nl Wed Apr 5 14:05:42 2017 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Wed, 5 Apr 2017 16:05:42 +0200 Subject: proxy_cache_background_update after cache expiry In-Reply-To: References: Message-ID: Hi, I have a similar issue: http://mailman.nginx.org/pipermail/nginx/2017-March/053198.html I noticed (using tcpdump) that all data except the last package is send immediately. Can you verify it that's happening in your case as well? JP On Wed, Apr 5, 2017 at 1:32 PM, IgorR wrote: > Hello, > > I'm trying to configure nginx to use proxy_cache_background_update but it > seems like after expiry it still waits for the full roundtrip to the > backend, returning a MISS in X-Cache-Status. What am I MISSing? > > I'm using nginx 1.11.12 under ubuntu 14.04 running inside docker, but > hopefully this is too much detail. 
> > location ~ ^/?(\d+/[^/]+)?/?$ > { > expires 20s; > > proxy_cache app_cache; > proxy_cache_lock on; > > proxy_cache_bypass $http_upgrade; > > proxy_pass http://172.17.0.2:5000; > proxy_http_version 1.1; > error_log /nginxerror.log debug; > > add_header X-Cache-Status $upstream_cache_status; > > proxy_cache_use_stale error timeout updating http_500 http_502 http_503 > http_504; > proxy_cache_background_update on; > > break; > } > > NB: This is a duplicate of my so question, I was kindly advised on the > nginx > IRC to repost here for better chances, the original question is here: > http://stackoverflow.com/questions/43223993/nginx- > proxy-cache-background-update-after-cache-expiry > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,273417,273417#msg-273417 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 5 14:30:03 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Apr 2017 17:30:03 +0300 Subject: Memory issue In-Reply-To: <89360c715d40fb102ea16b8cd6b18340.NginxMailingListEnglish@forum.nginx.org> References: <89360c715d40fb102ea16b8cd6b18340.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170405143003.GO13617@mdounin.ru> Hello! On Wed, Apr 05, 2017 at 01:02:15AM -0400, JohnCarne wrote: > Uprgraded to last nginx version : > memory still increase it seems You may have better luck describing your issue: what you do, what you see as a result, and why you think this is an issue. Posting messages saying "I still have an issue" is unlikely to help. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 5 14:52:29 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Apr 2017 17:52:29 +0300 Subject: proxy_cache_background_update after cache expiry In-Reply-To: References: Message-ID: <20170405145228.GP13617@mdounin.ru> Hello! On Wed, Apr 05, 2017 at 07:32:01AM -0400, IgorR wrote: > Hello, > > I'm trying to configure nginx to use proxy_cache_background_update but it > seems like after expiry it still waits for the full roundtrip to the > backend, returning a MISS in X-Cache-Status. What am I MISSing? > > I'm using nginx 1.11.12 under ubuntu 14.04 running inside docker, but > hopefully this is not too much detail. > > location ~ ^/?(\d+/[^/]+)?/?$ > { > expires 20s; > > proxy_cache app_cache; > proxy_cache_lock on; > > proxy_cache_bypass $http_upgrade; > > proxy_pass http://172.17.0.2:5000; > proxy_http_version 1.1; > error_log /nginxerror.log debug; > > add_header X-Cache-Status $upstream_cache_status; > > proxy_cache_use_stale error timeout updating http_500 http_502 http_503 > http_504; > proxy_cache_background_update on; > > break; > } "MISS" in $upstream_cache_status indicates that the relevant resource is not present in the cache, and therefore there is no stale response to return. Most likely the problem is that the resource was never saved to the cache, because of the cacheability flags in the response. Start by testing whether the resource is saved to the cache at all; until it is, proxy_cache_background_update is certainly irrelevant. Note that cacheability of a resource is determined by nginx based on multiple factors. Cache-Control, Expires, X-Accel-Expires, Set-Cookie, and Vary response headers affect caching. If there are no response headers which explicitly allow caching and set an expiration time, only responses with an appropriate proxy_cache_valid are cached. For more information see http://nginx.org/r/proxy_cache_valid.
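For illustration, a minimal configuration in which responses carrying no upstream caching headers become cacheable solely because of proxy_cache_valid could look like the following sketch (the cache zone name, path, and upstream address here are hypothetical, not taken from the thread):

```nginx
# Hypothetical cache zone; adjust path and sizes to your setup.
proxy_cache_path /var/cache/nginx/app keys_zone=app_cache:10m max_size=1g;

server {
    listen 7800;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_cache app_cache;

        # With no Cache-Control/Expires/X-Accel-Expires from the upstream,
        # this line alone decides that 200 and 301 responses are cached:
        proxy_cache_valid 200 301 20s;

        # Once entries actually land in the cache, stale entries can be
        # served while a background refresh runs:
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;

        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

After the first request populates the cache, requests arriving past the 20s validity should show UPDATING or HIT rather than MISS in X-Cache-Status.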
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Apr 5 22:14:21 2017 From: nginx-forum at forum.nginx.org (phwaap) Date: Wed, 05 Apr 2017 18:14:21 -0400 Subject: Conditional expires in location Message-ID: Hi everyone, I have a generic location that serves many others via rewrite/proxy_pass. The generic location calls expires by including a file which disables caching. I have a new location with its own expiration logic that needs to bypass this. What is the best way to do this? # Generic service. location = /generic.js { fastcgi_pass 'unix:/tmp/generic.socket'; include fastcgi.conf; # only does: expires -1s; include expires.conf; } location = /new.js { set $new_upstream https://new.lskjdflsj.com/; rewrite ^/new.js /generic.js?type=new; proxy_pass $new_upstream; } I've read that if statements are undesirable in location blocks, although that seems to work. I've also tried setting an expires variable via a map, which also works. But doesn't this execute for every request? Zach Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273434,273434#msg-273434 From nginx-forum at forum.nginx.org Thu Apr 6 01:32:41 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Wed, 05 Apr 2017 21:32:41 -0400 Subject: Memory issue In-Reply-To: <20170405143003.GO13617@mdounin.ru> References: <20170405143003.GO13617@mdounin.ru> Message-ID: We described it properly when opening the ticket; let me reformulate: Usually, 1 nginx worker process consumes 1.16-2% of RAM maximum on this server, and it remains stable.
For some days after nginx upgrades, during cPanel's daily overnight stat generation, there are many nginx reloads caused by stat generation (= normal). But these reloads are now causing an ever-increasing use of RAM by the nginx worker processes: usage normally stays around 1-2% RAM, but we now see it accumulating after each stat generation run, growing by roughly 1-2% RAM each time, which after a few weeks will lead to a server saturated in terms of RAM if nginx is not restarted. When we first saw the issue, nginx was consuming 12% of server RAM, and we have 128 GB RAM on this shared hosting server. After the recent nginx upgrade: the increase is around 0.20% daily, instead of 1-2% RAM. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273436#msg-273436 From nginx-forum at forum.nginx.org Thu Apr 6 05:14:54 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Thu, 06 Apr 2017 01:14:54 -0400 Subject: Memory issue In-Reply-To: References: Message-ID: <0788a57b9fe5437be0165502235d25d6.NginxMailingListEnglish@forum.nginx.org> It looks like I don't speak English well enough to be understood; others will open a thread on this issue and maybe explain it better. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273437#msg-273437 From schwaderer at daz-services.de Thu Apr 6 05:59:07 2017 From: schwaderer at daz-services.de (Christian Schwaderer) Date: Thu, 6 Apr 2017 07:59:07 +0200 Subject: Websocket security Message-ID: Dear all, I run NodeJS as a kind of web application server serving an AngularJS frontend. They communicate solely over WebSockets, using the SailsJS implementation of Socket.IO.
Between frontend (client) and the NodeJS backend, sits nginx as a proxy, configured like so: |server { listen 1337 ssl; location /socket.io/ { proxy_pass https://localhost:1338; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } | So far, so good. I now want to monitor and secure the Websocket connection. In particular, I want to prevent XSS attacks and exclude IPs trying to brute force the login to my application. I'm pretty new to that stuff, but I've found out that there are tools working together with nginx which can fulfill my needs here. (In particular, fail2ban and nginx-naxsi) However, I did not find out till now, whether and how these tools would work with my design (proxied websocket). fail2ban works on log files. Right now, nginx does *not* log the websocket traffic. Is it possible to configure nginx so that it logs the proxied websocket traffic? I mean, the actual traffic, not the establishing of the socket connection, but what is actually being exchanged between client (browser) and server (NodeJS). That should appear in some nginx log file in order to make fail2ban work. Same goes for nginx-naxsi, I guess. Does nginx, in my configuration, even care about what browser and NodeJS are exchanging via websocket? How can I make nginx inspect the content of the websocket connection so that I can filter out malicious requests based on nginx-naxsi rules? Thanks in advance for any hints! Best, Christian (PS: Already had asked a similar question on serverfault, but not no avail.) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Apr 6 07:25:37 2017 From: nginx-forum at forum.nginx.org (IgorR) Date: Thu, 06 Apr 2017 03:25:37 -0400 Subject: proxy_cache_background_update after cache expiry In-Reply-To: References: Message-ID: Thank you Maxim, the resource was saved to cache but was quickly expiring in the browser, causing a Cache-Control: no-cache header to be sent. Sending something like Cache-Control: public,max-age=15,s-maxage=240;must-revalidate,stale-while-revalidate=240 together with Last-Modified/ETag from the upstream server seems to be getting me where I want. Thanks for pointing me in the right direction. @jeanpaul: I didn't have the chance to check your idea since taking a closer look at the headers really helped moving forward. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273417,273439#msg-273439 From nginx-forum at forum.nginx.org Thu Apr 6 07:40:56 2017 From: nginx-forum at forum.nginx.org (mex) Date: Thu, 06 Apr 2017 03:40:56 -0400 Subject: Websocket security In-Reply-To: References: Message-ID: <135f4402b4636408fd580adc36f41df4.NginxMailingListEnglish@forum.nginx.org> Hello Christian, naxsi contributor here. First the bad news: naxsi won't work on websockets. Any other security for websockets you have to implement yourself. A list of useful reads: - https://devcenter.heroku.com/articles/websocket-security - https://security.stackexchange.com/questions/48378/anti-dos-websockets-best-practices/ - https://gist.github.com/subudeepak/9897212 - https://kaazing.com/2012/02/28/html5-websocket-security-is-strong/ regards, mex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273438,273440#msg-273440 From shahzaib.cb at gmail.com Thu Apr 6 07:41:33 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 6 Apr 2017 12:41:33 +0500 Subject: Content mismatch random error !! In-Reply-To: References: Message-ID: Hi, Anyone ? Regards.
On Wed, Apr 5, 2017 at 4:13 PM, shahzaib mushtaq wrote: > Hi, > > Sometimes we encounter a Content mismatch error in the browser on the website & > refreshing the page fixes this issue. The full error is : > http://prntscr.com/esp3jr > > Here is the nginx.conf file : https://pastebin.com/VF5L1xXy > > Some people on different forums say it's due to proxy cache, but we're > not using any kind of cache except for the file-descriptor cache config in > nginx. The setup is built on pure NGINX+Php-fpm. > > Thanks for the help in advance !! > > Shahzaib > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Apr 6 07:50:01 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 6 Apr 2017 12:50:01 +0500 Subject: No referrer header on leacher's site !! In-Reply-To: <20170404205408.GG3428@daoine.org> References: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> <20170404205408.GG3428@daoine.org> Message-ID: >>With the controls sites have over the referrer header, it's not very effective as an access control mechanism. You can use something like http://nginx.org/en/docs/http/ngx_http_secure_link_module.html instead. We're also using the nginx secure link module based on a hash + expiry, but somehow this secure link is exploited by that website. The video link hash on his website exactly matches ours: no matter if a hash expires & a new one takes its place, that leecher also gets the new hash, & we're unable to find out how he exploited us. Though on digging more into this we found that he's using the following script to fetch video links from our website : https://github.com/XvBMC/repository.xvbmc/blob/master/plugin.video.saltsrd.lite/scrapers/dizibox_scraper.py His website name is also dizibox1. On Wed, Apr 5, 2017 at 1:54 AM, Francis Daly wrote: > On Tue, Apr 04, 2017 at 04:39:23PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > Thanks for quick response.
Well its reverse, he's putting our HTTPS video > > link on his HTTP website. Could that create issue as well? If yes, what's > > the fix of it. > > nginx does not know (or care) what the linking site does. All it can > see is the request made to it. > > The browser entirely controls what request headers the browser sends. > > If you want to deny all requests that have no Referer header, you can > do that. > > If you want to deny only some requests that have no Referer header, > you will need to tell nginx which requests to deny and which requests to > allow. But before you can do that, you will have to know how to identify > the requests in one of the sets. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Thu Apr 6 09:59:18 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 06 Apr 2017 11:59:18 +0200 Subject: Memory issue In-Reply-To: <0788a57b9fe5437be0165502235d25d6.NginxMailingListEnglish@forum.nginx.org> References: <0788a57b9fe5437be0165502235d25d6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <56f50c96672c622ef464f6ffb3970cd0@none.at> Am 06-04-2017 07:14, schrieb JohnCarne: > It looks like i don't speak english properly to be understood, others > will > open a thread on this issue, and may be explain better Well how about to remove the additionally modules and watch if the memory issue still exists. 
### --add-dynamic-module=ngx_pagespeed-release-1.11.33.4-beta --add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.1/gems/passenger-5.1.2/src/nginx_module --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=ModSecurity-nginx ### Maybe some of these modules are the reason for the memory issue. Regards Aleks From nginx-forum at forum.nginx.org Thu Apr 6 10:01:43 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Thu, 06 Apr 2017 06:01:43 -0400 Subject: Memory issue In-Reply-To: <56f50c96672c622ef464f6ffb3970cd0@none.at> References: <56f50c96672c622ef464f6ffb3970cd0@none.at> Message-ID: I'll let dev Anoop answer you... he has a clue about the issue : https://github.com/SpiderLabs/ModSecurity-nginx/issues/45 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273444#msg-273444 From anoopalias01 at gmail.com Thu Apr 6 10:12:42 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 6 Apr 2017 15:42:42 +0530 Subject: Memory issue In-Reply-To: References: <56f50c96672c622ef464f6ffb3970cd0@none.at> Message-ID: If a dynamically loadable module has an issue and we do not load the module, will it still cause the error? In the case above, ModSecurity-nginx was compiled as a dynamic module and not loaded. On Thu, Apr 6, 2017 at 3:31 PM, JohnCarne wrote: > I let dev Anoop answer to you... he has a clue about the issue : > > https://github.com/SpiderLabs/ModSecurity-nginx/issues/45 > > Posted at Nginx Forum: https://forum.nginx.org/read.
> php?2,273274,273444#msg-273444 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 6 10:45:23 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Thu, 06 Apr 2017 06:45:23 -0400 Subject: Checking multiple caches before forwarding request to upstream Message-ID: <71665a76fa9ffeb54de53e99e5d51032.NginxMailingListEnglish@forum.nginx.org> Hi, We want to define multiple caches based on certain request headers (time stamp) so that we can put files modified in the last 10 days on SSDs, files modified in the last 30 days on HDDs, and so on. I understand that we could use the map feature to pick a cache dynamically, which is good and works for us. But when serving a file, we want to check all the caches, because a file modified in the last 11 days could still be on the SSDs, so we don't want to unnecessarily pull it out from the backend and put it on the HDDs. So, net-net, we want to check multiple caches before pulling from the backend. Is there a way to do that? We can only define one proxy_cache in a location block, so I figured that if we somehow use try_files or the error_page directive to fail over between multiple blocks and check one cache in each block, we could check multiple caches.
following is my simplified config: server { listen 7800 ; server_name localhost; resolver 8.8.8.8 ipv6=off; location / { set $served_by "cache"; set $cache_key $request_uri; proxy_cache cache_recent; proxy_cache_key $cache_key; error_page 418 = @go-to-cloud-storage; return 418; #try_files /does-not-exist @go-to-cloud-storage; } location @go-to-cloud-storage { set $served_by "cloud"; proxy_cache cache_recent; proxy_cache_key $cache_key; proxy_set_header Host $host; proxy_pass https://$http_host$request_uri; } } But in the above config, @go-to-cloud-storage storage is always executed even when object is in cache for the / block. Thanks Sachin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273446,273446#msg-273446 From nginx-forum at forum.nginx.org Thu Apr 6 11:33:03 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 06 Apr 2017 07:33:03 -0400 Subject: No referrer header on leacher's site !! In-Reply-To: References: Message-ID: <908cc7ff647d594dba016bdc8c491f26.NginxMailingListEnglish@forum.nginx.org> Hello There, I had this same issue and fixed it by the following method. For example in HTML : That is what your media stream link would look like. But if you use JavaScript like the following example : And you insert your stream link into the page using JavaScript it unlocks the ability to make it hard for their python script to scrape/hotlink/content leech of your web pages. You can obfuscate JavaScript you can change the var names you can make it incredibly dynamic and difficult breaking their apps completely the more dynamic it is the harder and harder it is for them to obtain your stream links. Also you should blocked the following two user agents that those apps use. Kodi XBMC (I would suggest making them non case sensitive matches too) Where I posted in regards to this. 
https://forum.nginx.org/read.php?2,270705,270739#msg-270739 https://github.com/C0nw0nk/Nginx-Lua-Secure-Link-Anti-Hotlinking Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273405,273447#msg-273447 From amnesia.victim at gmail.com Thu Apr 6 11:50:18 2017 From: amnesia.victim at gmail.com (Dmitry S. Polyakov) Date: Thu, 06 Apr 2017 11:50:18 +0000 Subject: No referrer header on leacher's site !! In-Reply-To: References: <1491305526.712318.933610512.31DF5DC8@webmail.messagingengine.com> <20170404205408.GG3428@daoine.org> Message-ID: On Thu, Apr 6, 2017, 10:50 shahzaib mushtaq wrote: > >>With the controls sites have over the referrer header, it's not very > effective as an access control mechanism. You can use something like > http://nginx.org/en/docs/http/ngx_http_secure_link_module.html > instead. > > We're also using Nginx secure link module based on HASH + expiry but > somehow this secure link is exploited by that website. The video link hash > on his website is exactly matching with ours means no matter if hash get > expire & new takes it place that leacher is also getting the new hash & > we're unable to find how he exploited us. Though on digging more into this > we found that he's using following script to fetch video links from our > website : > > > https://github.com/XvBMC/repository.xvbmc/blob/master/plugin.video.saltsrd.lite/scrapers/dizibox_scraper.py > > His website name is also dizibox1. > IT happens because your secure links hash doesn't have any end user unique attributes like ip address If you'll include enduser ip to the secure link hash, secure link become unique for the end user. Any direct video link grabbed and shared by the enduser or some script become useless. > > On Wed, Apr 5, 2017 at 1:54 AM, Francis Daly wrote: > > On Tue, Apr 04, 2017 at 04:39:23PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > Thanks for quick response. Well its reverse, he's putting our HTTPS video > > link on his HTTP website. 
Could that create issue as well? If yes, what's > > the fix of it. > > nginx does not know (or care) what the linking site does. All it can > see is the request made to it. > > The browser entirely controls what request headers the browser sends. > > If you want to deny all requests that have no Referer header, you can > do that. > > If you want to deny only some requests that have no Referer header, > you will need to tell nginx which requests to deny and which requests to > allow. But before you can do that, you will have to know how to identify > the requests in one of the sets. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 6 12:03:19 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 06 Apr 2017 08:03:19 -0400 Subject: No referrer header on leacher's site !! In-Reply-To: References: Message-ID: <68d306e00b322c80fda3a225fbf35fc7.NginxMailingListEnglish@forum.nginx.org> Dmitry S. Polyakov Wrote: ------------------------------------------------------- > On Thu, Apr 6, 2017, 10:50 shahzaib mushtaq > wrote: > > > >>With the controls sites have over the referrer header, it's not > very > > effective as an access control mechanism. You can use something like > > http://nginx.org/en/docs/http/ngx_http_secure_link_module.html > > instead. > > > > We're also using Nginx secure link module based on HASH + expiry but > > somehow this secure link is exploited by that website. 
The video > link hash > > on his website is exactly matching with ours means no matter if hash > get > > expire & new takes it place that leacher is also getting the new > hash & > > we're unable to find how he exploited us. Though on digging more > into this > > we found that he's using following script to fetch video links from > our > > website : > > > > > > > https://github.com/XvBMC/repository.xvbmc/blob/master/plugin.video.sal > tsrd.lite/scrapers/dizibox_scraper.py > > > > His website name is also dizibox1. > > > IT happens because your secure links hash doesn't have any end user > unique > attributes like ip address > If you'll include enduser ip to the secure link hash, secure link > become > unique for the end user. Any direct video link grabbed and shared by > the > enduser or some script become useless. You would think that, but with Kodi/XBMC that is not the case: their app grabs and sends an HTML request on a per-user basis. So each and every request comes from a user's Kodi box or app on their phone etc., and when the page generates the HTML response for that user, it also generates the response for their IP address. It is like real web traffic. I prevented them as I explained here https://forum.nginx.org/read.php?2,273405,273447#msg-273447 Also if you browse pornhub, pornsocket, youtube or whatever streaming sites, you will see they now hide and obfuscate their stream links in JavaScript to break these Kodi box users, as I explained in the link above. Here is proof :
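(The inline HTML/JavaScript examples in this message were scrubbed by the archive. A rough sketch of the idea being described — assembling the stream URL at runtime instead of embedding it in static HTML — might look like the following; every name, URL, and token here is hypothetical.)

```javascript
// Instead of a static <video src="..."> that a scraper can read straight
// out of the HTML, the page builds the stream URL only when it runs.
function buildStreamUrl(host, path, token) {
  // Split the URL into pieces and join them at runtime; real deployments
  // also obfuscate/rename these variables and vary the logic per release.
  var parts = ['https://', host, '/', path, '?st=', token];
  return parts.join('');
}

// Guarded so the snippet also runs outside a browser (e.g. under node).
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    var player = document.getElementById('player');
    player.src = buildStreamUrl('media.example.com', 'videos/57583.mp4', 'abc123');
  });
}
```

A scraper that only fetches the raw HTML never sees the finished URL; it has to execute the script, and every obfuscation change breaks its parser again.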
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273405,273449#msg-273449 From mdounin at mdounin.ru Thu Apr 6 14:53:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Apr 2017 17:53:57 +0300 Subject: Checking multiple caches before forwarding request to upstream In-Reply-To: <71665a76fa9ffeb54de53e99e5d51032.NginxMailingListEnglish@forum.nginx.org> References: <71665a76fa9ffeb54de53e99e5d51032.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170406145357.GT13617@mdounin.ru> Hello! On Thu, Apr 06, 2017 at 06:45:23AM -0400, sachin.shetty at gmail.com wrote: > Hi, > > We want to define multiple caches based on certain request headers (time > stamp) so that we can put files modified in last 10 days on SSDs, last 30 > days on HDDs and so on. I understand that we could use map feature to pick a > cache dynamically which is good and works for us. > > But when serving a file, we want to check in all the caches because file > modified in last 11 days could still be on SSDs, so we don't wan't to > unnecessarily pull it out from backend and put it on HDDs. > > so net, net we want to check multiple caches before pulling from the > backend. Is there a way to do that? We can only define one proxy_cache in a > location block, so I figured if we some how use try files or error page > attribute to failover multiple blocks and check one cache in each block, we > could check multiple caches. 
> > following is my simplified config: > > server { > listen 7800 ; > server_name localhost; > resolver 8.8.8.8 ipv6=off; > > location / { > > set $served_by "cache"; > set $cache_key $request_uri; > proxy_cache cache_recent; > proxy_cache_key $cache_key; > > error_page 418 = @go-to-cloud-storage; return 418; > #try_files /does-not-exist @go-to-cloud-storage; > > } > > location @go-to-cloud-storage { > set $served_by "cloud"; > > proxy_cache cache_recent; > proxy_cache_key $cache_key; > > proxy_set_header Host $host; > proxy_pass https://$http_host$request_uri; > } > > } > > But in the above config, @go-to-cloud-storage storage is always executed > even when object is in cache for the / block. That's because you unconditionally return 418 before any other processing. Rewrite directives, including "return", are executed when nginx searches for a configuration to use, see details here: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html Moreover, proxy_cache only works when you proxy somewhere, and there is no proxy_pass in your "location /". To check multiple caches consider actual proxying instead, this will allow you to naturally use multiple caching layers. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Apr 6 15:04:30 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Thu, 06 Apr 2017 11:04:30 -0400 Subject: Checking multiple caches before forwarding request to upstream In-Reply-To: <20170406145357.GT13617@mdounin.ru> References: <20170406145357.GT13617@mdounin.ru> Message-ID: <8ffb3bfb1376ff8ed6c38d84244921e8.NginxMailingListEnglish@forum.nginx.org> Thanks, I guess actual proxying is the only way out. I will try it out. 
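A minimal sketch of the layered setup Maxim describes — two chained server blocks, each with its own cache, so a miss in the first layer is looked up in the second before hitting the backend (all paths, ports, zone names, and the backend host below are hypothetical):

```nginx
proxy_cache_path /ssd/cache keys_zone=ssd_cache:10m  max_size=50g;
proxy_cache_path /hdd/cache keys_zone=hdd_cache:10m  max_size=500g;

# Layer 1: SSD cache; on a miss it proxies to layer 2.
server {
    listen 7800;

    location / {
        proxy_cache ssd_cache;
        proxy_cache_key $request_uri;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:7801;
    }
}

# Layer 2: HDD cache, consulted only when layer 1 misses;
# on a miss here the request finally goes to the backend.
server {
    listen 127.0.0.1:7801;

    location / {
        proxy_cache hdd_cache;
        proxy_cache_key $request_uri;
        proxy_set_header Host $host;
        proxy_pass https://backend.example.com;
    }
}
```

Note that a response served from the HDD layer will also be stored by the SSD layer on its way back to the client, so cache sizes and validity settings need to account for that duplication.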
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273446,273454#msg-273454 From mdounin at mdounin.ru Thu Apr 6 15:23:20 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Apr 2017 18:23:20 +0300 Subject: Memory issue In-Reply-To: References: <20170405143003.GO13617@mdounin.ru> Message-ID: <20170406152320.GU13617@mdounin.ru> Hello! On Wed, Apr 05, 2017 at 09:32:41PM -0400, JohnCarne wrote: > We described it properly when opening ticket, I reformulate : > > Usually, 1 nginx worker process consumes 1.16-2% of RAM maximum on this > server, and it remain stable. > For some days after nginx upgrades, every overnight, during daily stat > generation process of cpanel which happens on overnight like set, there is > many nginx reloads due to stat generation (= normal), but this is now > causing an ever increasing memory use of RAM by nginx worker process, > usually it stays around 1-2% RAM, we now see it cumulating after stat > generation process increasing itself at begin with 1-2% RAM each time, which > will lead after some weaks to a saturated server in term of RAM if nginx is > not started. When we saw the issue first time, Nginx was consuming 12% of > server RAM considering we have 128 GB RAM on this shared hosting server. > > After recent nginx upgrade : > The increase is around 0.20% daily, instead of 1-2% RAM So, you observe one nginx worker process consuming about 12% of your server RAM, that is, more than 10GB of memory, correct? You may want to provide something like "ps alx | grep nginx" output to illustrate the problem. You may start with the following basic steps: - Check your "nginx -V" output and nginx configuration; disable 3rd party modules if there are any, and check if the problem persists. In many cases various obscure problems are introduced by bugs in 3rd party modules. - Make sure you are talking about a single worker process memory consumption, and not overral memory consumption of all nginx worker processes. 
Multiple configuration reloads can leave multiple nginx worker processes in the "shutting down..." state for a long time, which depends on the particular workload, and it is not a surprise you need memory if you do lots of configuration reloads. - Check your nginx configuration to see if there are natural reasons to consume memory - multiple connections and large buffers configured, thousands of complex location configurations, large shared memory zones, and so on. - Try to find out what exactly causes increased memory consumption. The "stats generation process" you write about is not something nginx does by itself, and it is completely unknown what it means for anyone except you. If the above won't be enough for you to identify the problem, consider providing additional information about the observed problem, including "ps alx" output which demonstrates the problem, "nginx -V" output, and full nginx configuration (shown with "nginx -T"). -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Apr 6 15:32:45 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Thu, 06 Apr 2017 11:32:45 -0400 Subject: Memory issue In-Reply-To: <20170406152320.GU13617@mdounin.ru> References: <20170406152320.GU13617@mdounin.ru> Message-ID: cPanel stat generation causes nginx to do a lot of reloads to grab new file descriptors... no issue there. The issue is nginx. I'll show you the situation now, with 2.58% used for one worker, which is the same value as for the others; globally, nginx now uses 2.58%, and this number is increasing slowly at the rhythm of the requested nginx reloads... [root at web1 ~]# ps alx | grep nginx 5 99 711213 913692 20 0 3917936 3397132 ep_pol S ? 0:22 nginx: worker process 5 99 711224 913692 20 0 3918128 3397300 ep_pol S ? 0:24 nginx: worker process 5 99 711229 913692 20 0 3918392 3397456 ep_pol S ? 0:26 nginx: worker process 5 99 711238 913692 20 0 3918128 3397228 ep_pol S ? 0:20 nginx: worker process 5 99 711245 913692 20 0 3917936 3397144 ep_pol S ?
0:23 nginx: worker process 5 99 711248 913692 20 0 3918096 3397296 ep_pol S ? 0:18 nginx: worker process 5 99 711252 913692 20 0 3918392 3397392 ep_pol S ? 0:21 nginx: worker process 5 99 711255 913692 20 0 3918128 3397132 - R ? 0:19 nginx: worker process 5 99 711257 913692 20 0 3917580 3394632 ep_pol S ? 0:00 nginx: cache manager process 0 0 767011 766950 20 0 112652 956 pipe_w S+ pts/2 0:00 grep --color=auto nginx 5 0 913692 1 20 0 3917576 3396176 sigsus Ss ? 74:26 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf Output of nginx -T is taking 100's of pages, i can't pu it here... [root at web1 ~]# nginx -V nginx version: nginx/1.11.13 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with LibreSSL 2.5.2 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.40 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.2 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --with-http_geoip_module=dynamic 
--add-dynamic-module=ngx_pagespeed-release-1.11.33.4-beta --add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.1/gems/passenger-5.1.2/src/nginx_module --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E [root at web1 ~]# Anoop is the dev of the Xtendweb stack using the nginx core. He is investigating, and it seems his solution is to use a pre-approved nginx core : https://openresty.org/en/ which could solve many issues with the 3rd party modules we absolutely need to use... Indeed, we are all tired of doing 3 upgrades/month. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273456#msg-273456 From vbart at nginx.com Thu Apr 6 15:37:40 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 06 Apr 2017 18:37:40 +0300 Subject: Memory issue In-Reply-To: References: <20170406152320.GU13617@mdounin.ru> Message-ID: <15539872.b5U4JGvMUM@vbart-workstation> On Thursday 06 April 2017 11:32:45 JohnCarne wrote: > cpanel stat generation cause thet nginx makes a lot of reload to grab new > file descriptor... no issue on that > [..] JFYI, reloading nginx isn't required to reopen log files. See for details: http://nginx.org/en/docs/control.html#logs wbr, Valentin V.
Bartenev

From nginx-forum at forum.nginx.org Thu Apr 6 15:39:23 2017
From: nginx-forum at forum.nginx.org (JohnCarne)
Date: Thu, 06 Apr 2017 11:39:23 -0400
Subject: Memory issue
In-Reply-To: <15539872.b5U4JGvMUM@vbart-workstation>
References: <15539872.b5U4JGvMUM@vbart-workstation>
Message-ID: 

Thanks for the info; Anoop will read this, he is subscribed.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273458#msg-273458

From lucas at lucasrolff.com Thu Apr 6 16:36:05 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Thu, 6 Apr 2017 16:36:05 +0000
Subject: Memory issue
In-Reply-To: 
References: <20170406152320.GU13617@mdounin.ru>
Message-ID: <0D7F678E-2678-4ADE-A592-446962269F48@lucasrolff.com>

> cpanel stat generation cause that nginx makes a lot of reload to grab new file descriptor... no issue on that

Even though this is off-topic - if you issue a lot of reloads during cPanel stat generation, your hooks are configured wrong, since Apache in cPanel only reloads *once* during the whole process.

> Issue is nginx, I show you the situation now with 2.58% used for 1 worker, which is the same value for the others; globally, nginx now uses 2.58%, and this number is increasing slowly at the rhythm of the nginx reloads asked..
I happen to use nginx (mainline version) myself on a cPanel server - no custom modules; there's no memory leak in the latest versions of nginx - I stay happily at 0.2% memory (32-gigabyte server).

> Anoop is the dev of the Xtendweb stack using the nginx core, he is investigating, and it seems his solution is to use a pre-approved nginx core: https://openresty.org/en/

Using OpenResty wouldn't solve your issues - you can use the exact same modules as OpenResty does in a normal nginx build (be aware that some modules, such as the lua module and the echo-nginx module, aren't yet building correctly against nginx 1.11.11+).

In the end, include only the extra modules you require - because as far as I can tell, in the mainline version of nginx, nothing (at least for me) has caused memory issues with workers, even when reloading a bunch of times. It might very well be that one of the 3rd-party modules has not been fully tested to work with 1.11.13.

On 06/04/2017, 17.32, "nginx on behalf of JohnCarne" wrote:

>cpanel stat generation cause that nginx makes a lot of reload to grab new
>file descriptor... no issue on that
>
>Issue is nginx, I show you the situation now with 2.58% used for 1 worker, which
>is the same value for the others; globally, nginx now uses 2.58%, and this number is
>increasing slowly at the rhythm of the nginx reloads asked...
>
>
>[root at web1 ~]# ps alx | grep nginx
>5 99 711213 913692 20 0 3917936 3397132 ep_pol S ? 0:22
>nginx: worker process
>5 99 711224 913692 20 0 3918128 3397300 ep_pol S ? 0:24
>nginx: worker process
>5 99 711229 913692 20 0 3918392 3397456 ep_pol S ? 0:26
>nginx: worker process
>5 99 711238 913692 20 0 3918128 3397228 ep_pol S ? 0:20
>nginx: worker process
>5 99 711245 913692 20 0 3917936 3397144 ep_pol S ? 0:23
>nginx: worker process
>5 99 711248 913692 20 0 3918096 3397296 ep_pol S ? 0:18
>nginx: worker process
>5 99 711252 913692 20 0 3918392 3397392 ep_pol S ? 0:21
>nginx: worker process
>5 99 711255 913692 20 0 3918128 3397132 - R ?
0:19 >nginx: worker process >5 99 711257 913692 20 0 3917580 3394632 ep_pol S ? 0:00 >nginx: cache manager process >0 0 767011 766950 20 0 112652 956 pipe_w S+ pts/2 0:00 >grep --color=auto nginx >5 0 913692 1 20 0 3917576 3396176 sigsus Ss ? 74:26 >nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf > > >Output of nginx -T is taking 100's of pages, i can't pu it here... > >[root at web1 ~]# nginx -V >nginx version: nginx/1.11.13 >built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) >built with LibreSSL 2.5.2 >TLS SNI support enabled >configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >--modules-path=/etc/nginx/modules --with-pcre=./pcre-8.40 --with-pcre-jit >--with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.2 >--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log >--http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid >--lock-path=/var/run/nginx.lock >--http-client-body-temp-path=/var/cache/nginx/client_temp >--http-proxy-temp-path=/var/cache/nginx/proxy_temp >--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody >--group=nobody --with-http_ssl_module --with-http_realip_module >--with-http_addition_module --with-http_sub_module --with-http_dav_module >--with-http_flv_module --with-http_mp4_module --with-http_gunzip_module >--with-http_gzip_static_module --with-http_random_index_module >--with-http_secure_link_module --with-http_stub_status_module >--with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src >--with-file-aio --with-threads --with-stream --with-stream_ssl_module >--with-http_slice_module --with-compat --with-http_v2_module >--with-http_geoip_module=dynamic >--add-dynamic-module=ngx_pagespeed-release-1.11.33.4-beta >--add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.1/gems/passenger-5.1.2/src/nginx_module >--add-dynamic-module=ngx_brotli 
--add-dynamic-module=echo-nginx-module-0.60 >--add-dynamic-module=headers-more-nginx-module-0.32 >--add-dynamic-module=ngx_http_redis-0.3.8 >--add-dynamic-module=redis2-nginx-module >--add-dynamic-module=srcache-nginx-module-0.31 >--add-dynamic-module=ngx_devel_kit-0.3.0 >--add-dynamic-module=set-misc-nginx-module-0.31 >--add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall >-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong >--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' >--with-ld-opt=-Wl,-E >[root at web1 ~]# > > >Anoop is dev of Xtendweb stack using nginx core, he is investigating, and it >seems his solution if to use a pre-approved nginx core : >https://openresty.org/en/ >and that it could solve many issues with 3rd party modules which we need to >use absolutely... > >Indeed, we are all tired to do 3 upgrades /month > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273456#msg-273456 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Apr 6 17:05:31 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Apr 2017 20:05:31 +0300 Subject: Memory issue In-Reply-To: References: <20170406152320.GU13617@mdounin.ru> Message-ID: <20170406170531.GA13617@mdounin.ru> Hello! On Thu, Apr 06, 2017 at 11:32:45AM -0400, JohnCarne wrote: > cpanel stat generation cause thet nginx makes a lot of reload to grab new > file descriptor... no issue on that > > Issue is nginx, I show you situation now with 2.58% used for 1 work, which > is same value for others, but gloably, nginx uses now 2.58%, this number is > increasing slowly at the rythm of nginx reloads asked... > > > [root at web1 ~]# ps alx | grep nginx > 5 99 711213 913692 20 0 3917936 3397132 ep_pol S ? 0:22 > nginx: worker process > 5 99 711224 913692 20 0 3918128 3397300 ep_pol S ? 
0:24
> nginx: worker process
> 5 99 711229 913692 20 0 3918392 3397456 ep_pol S ? 0:26
> nginx: worker process
> 5 99 711238 913692 20 0 3918128 3397228 ep_pol S ? 0:20
> nginx: worker process
> 5 99 711245 913692 20 0 3917936 3397144 ep_pol S ? 0:23
> nginx: worker process
> 5 99 711248 913692 20 0 3918096 3397296 ep_pol S ? 0:18
> nginx: worker process
> 5 99 711252 913692 20 0 3918392 3397392 ep_pol S ? 0:21
> nginx: worker process
> 5 99 711255 913692 20 0 3918128 3397132 - R ? 0:19
> nginx: worker process
> 5 99 711257 913692 20 0 3917580 3394632 ep_pol S ? 0:00
> nginx: cache manager process
> 0 0 767011 766950 20 0 112652 956 pipe_w S+ pts/2 0:00
> grep --color=auto nginx
> 5 0 913692 1 20 0 3917576 3396176 sigsus Ss ? 74:26
> nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf

Ok, so the nginx master process has grown to 3+ GB, and all workers are mostly identical. So the memory is certainly consumed by the master process. This may or may not be normal, depending on the configuration used.

If you think that memory is consumed on configuration reloads, you may want to provide "ps alx | grep nginx" after restarting nginx (it is expected to be small), after one configuration reload (it is expected to be larger due to another copy of the configuration being allocated), and after multiple configuration reloads (this will demonstrate increased memory consumption, if any).

Note well: leaking memory on configuration reloads is a more or less typical problem with various 3rd-party modules. IIRC, various versions of modsecurity were previously reported to do this. As previously suggested, try without 3rd-party modules to see if it helps.

> The output of nginx -T runs to hundreds of pages; I can't put it here...

Well, not a problem. It simply limits the ability of others to help you. Anyway, I suspect your problem is 3rd-party modules, and I don't think there is room to investigate further before testing whether the problem persists without them.

[...]
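The measurement procedure Maxim describes can be scripted. A rough sketch (assumptions: the default pid file path, and that resident set size is a good-enough proxy for the leak; the master PID is stable across reloads, so one PID suffices):

```shell
# Compare master-process RSS (kB) after a fresh start vs. after many reloads.
pid=$(cat /var/run/nginx.pid)
echo "baseline RSS: $(ps -o rss= -p "$pid") kB"
for i in $(seq 1 100); do
    nginx -s reload
    sleep 1            # give old workers time to shut down
done
echo "RSS after 100 reloads: $(ps -o rss= -p "$pid") kB"
```

If the second number keeps climbing run after run, something allocated per reload is not being freed; repeating the experiment with 3rd-party modules removed one at a time narrows down the culprit.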
> Indeed, we are all tired of doing 3 upgrades a month

Note that the way you are asking others to help you is not likely
to attract many volunteers.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Thu Apr 6 17:20:20 2017
From: nginx-forum at forum.nginx.org (JohnCarne)
Date: Thu, 06 Apr 2017 13:20:20 -0400
Subject: Memory issue
In-Reply-To: <20170406170531.GA13617@mdounin.ru>
References: <20170406170531.GA13617@mdounin.ru>
Message-ID: <9d21bf97a209072d5842d385fe63d3d8.NginxMailingListEnglish@forum.nginx.org>

Thanks for your feedback. I seriously do my best to communicate what I can!

We only see this issue on a very busy server; it would be hard to remove all the modules on such a busy server and reconfigure everything, but not impossible. Anoop is on the case on smaller servers and has managed to reproduce the issue at small scale; he will troubleshoot it sometime...

Yes, one of the 3rd-party modules is leaking memory for sure, no doubt about that for me.

"ps alx | grep nginx" after restarting nginx gives 1.16% as usual;
after 1 more reload, we get around 2.24%;
after 100's more reloads, we gradually get to 2.58%.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273461#msg-273461

From nginx-forum at forum.nginx.org Thu Apr 6 18:46:40 2017
From: nginx-forum at forum.nginx.org (shivramg94)
Date: Thu, 06 Apr 2017 14:46:40 -0400
Subject: Nginx upstream server certificate verification
In-Reply-To: 
References: 
Message-ID: 

Thanks, Sergey, for your response. I have one more question. If I have multiple upstream server host names in the upstream block, how can I specify, in the proxy_ssl_name directive, the particular upstream host name the request is being proxied to?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273295,273462#msg-273462

From reallfqq-nginx at yahoo.fr Thu Apr 6 19:49:03 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Thu, 6 Apr 2017 21:49:03 +0200
Subject: Memory issue
In-Reply-To: <9d21bf97a209072d5842d385fe63d3d8.NginxMailingListEnglish@forum.nginx.org>
References: <20170406170531.GA13617@mdounin.ru> <9d21bf97a209072d5842d385fe63d3d8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Idea coming right out of the blue: have you given any thought to compiling nginx (then gradually adding modules) with valgrind? You should know pretty quickly if something is wrong.

Note the slowdown, though. It might not be a good idea in production, unless you can offload traffic somewhere else if it becomes messy.

---
*B. R.*

On Thu, Apr 6, 2017 at 7:20 PM, JohnCarne wrote:

> Thanks for your feedback. I seriously do my best to communicate what I can!
>
> We only see this issue on a very busy server; it would be hard to remove all
> the modules on such a busy server and reconfigure everything, but not
> impossible. Anoop is on the case on smaller servers and has managed to
> reproduce the issue at small scale; he will troubleshoot it sometime...
>
> Yes, one of the 3rd-party modules is leaking memory for sure, no doubt
> about that for me.
>
> "ps alx | grep nginx" after restarting nginx gives 1.16% as usual;
> after 1 more reload, we get around 2.24%;
> after 100's more reloads, we gradually get to 2.58%.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273461#msg-273461
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From ceecko at gmail.com Thu Apr 6 21:18:02 2017
From: ceecko at gmail.com (Michal Kralik)
Date: Thu, 6 Apr 2017 23:18:02 +0200
Subject: 99.999% my config works - then default_server is used
Message-ID: 

Hi,

We're facing super strange issues with nginx 1.10.3 (CentOS 7, 3.10.0-327.36.3.el7.x86_64).
Our config works ok and we get a lot of traffic, but every day a couple of requests (5-10) don't get processed correctly by the server directive with the defined server_name, but rather by the server directive with listen's "default_server".

Here's the config: https://paste.ngx.cc/29345124359e0447

Most of the time a request to 01.app.example.com gets processed correctly, but every day a couple of requests get processed by the server directive from file conf.d/01-default.conf. There's nothing in the nginx system logs around the time when this issue happens.

The access.log from the default_server shows this (/data/logs/nginx/access.log):
1.1.1.1 - - [06/Apr/2017:12:20:57 +0000] "01.app.example.com" "GET /robots.txt HTTP/1.1" 302 161 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; + http://www.google.com/bot.html)" "-"

It's clearly a redirect processed by the default_server, but the request should rather be processed by the server directive in conf.d/02-apps.conf, because the Host header clearly states "01.app.example.com".

Any ideas what's going on?

From ru at nginx.com Thu Apr 6 21:50:56 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Fri, 7 Apr 2017 00:50:56 +0300
Subject: 99.999% my config works - then default_server is used
In-Reply-To: 
References: 
Message-ID: <20170406215056.GH79252@lo0.su>

On Thu, Apr 06, 2017 at 11:18:02PM +0200, Michal Kralik wrote:
> Hi,
>
> We're facing super strange issues with nginx 1.10.3 (CentOS 7,
> 3.10.0-327.36.3.el7.x86_64). Our config works ok and we get a lot of traffic,
> but every day a couple of requests (5-10) don't get processed correctly by
> a server directive with the defined server_name, but rather by a server
Here's the config > https://paste.ngx.cc/29345124359e0447 > > Most of the time a request to 01.app.example.com gets processed correctly, > but every day a couple of requests get processed by the server directive > from file conf.d/01-default.conf. There's nothing in the nginx system logs > around the time when this issue happens. > > The access.log from the default_server shows this > (/data/logs/nginx/access.log): > 1.1.1.1 - - [06/Apr/2017:12:20:57 +0000] "01.app.example.com" "GET > /robots.txt HTTP/1.1" 302 161 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; + > http://www.google.com/bot.html)" "-" > > It's clearly a redirect processed by the default_server, but the request > should be rather processed by server directive in conf.d/02-apps.conf > because the host header clearly states "01.app.example.com". > > Any ideas what's going on? Given the provided configuration snippet, this should happen with all HTTPS requests to 01.app.example.com. From al-nginx at none.at Thu Apr 6 22:51:15 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 07 Apr 2017 00:51:15 +0200 Subject: Memory issue In-Reply-To: <20170406170531.GA13617@mdounin.ru> References: <20170406152320.GU13617@mdounin.ru> <20170406170531.GA13617@mdounin.ru> Message-ID: <6789749d680998a41f82d644c1935d9c@none.at> Am 06-04-2017 19:05, schrieb Maxim Dounin: > Hello! > > On Thu, Apr 06, 2017 at 11:32:45AM -0400, JohnCarne wrote: > [...] > [...] > >> Indeed, we are all tired to do 3 upgrades /month > > Note that the way you are asking others to help you is not likely > to attract many volunteers. 
+1

From nginx-forum at forum.nginx.org Fri Apr 7 03:03:48 2017
From: nginx-forum at forum.nginx.org (JohnCarne)
Date: Thu, 06 Apr 2017 23:03:48 -0400
Subject: Memory issue
In-Reply-To: 
References: 
Message-ID: <8b79275036348545e9f14362f561d8d0.NginxMailingListEnglish@forum.nginx.org>

@ B.R. via nginx:
> Idea coming right out of the blue: have you given any thought to compiling
> nginx (then gradually adding modules) with valgrind? You should know pretty
> quickly if something is wrong.

Thanks for this idea, which can really improve the engineering process... Brotli seems to be the main suspect.

Anoop just told me: the reload is changed to USR1 for stats. I will upgrade soon; this should reduce the issue to nearly zero...

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273467#msg-273467

From nginx-forum at forum.nginx.org Fri Apr 7 03:34:05 2017
From: nginx-forum at forum.nginx.org (JohnCarne)
Date: Thu, 06 Apr 2017 23:34:05 -0400
Subject: Memory issue
In-Reply-To: <20170406170531.GA13617@mdounin.ru>
References: <20170406170531.GA13617@mdounin.ru>
Message-ID: <3fd2571a3c37398b2faed20bdc5d1062.NginxMailingListEnglish@forum.nginx.org>

I could not paste the output of nginx -T; even after truncating to 2,500 lines instead of 158,000, I get: "Please shorten your messages, the body is too large."
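For reference, the valgrind approach suggested earlier in the thread usually looks something like the sketch below on a test box (assumed default binary and pid paths; the 10x-or-worse slowdown makes it unsuitable for a busy production server):

```shell
# Run the nginx master in the foreground under valgrind, trigger a few
# configuration reloads, then stop it; per-reload leaks show up in the report.
valgrind --leak-check=full --trace-children=yes \
         --log-file=/tmp/nginx-valgrind.%p.log \
         /usr/sbin/nginx -g 'daemon off;' &
sleep 2
for i in 1 2 3; do kill -HUP "$(cat /var/run/nginx.pid)"; sleep 2; done
kill -QUIT "$(cat /var/run/nginx.pid)"
grep "definitely lost" /tmp/nginx-valgrind.*.log
```

Repeating this with one 3rd-party module enabled at a time is a slow but systematic way to attribute a per-reload leak to a specific module.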
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273468#msg-273468 From nginx-forum at forum.nginx.org Fri Apr 7 03:57:38 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Thu, 06 Apr 2017 23:57:38 -0400 Subject: Memory issue In-Reply-To: <3fd2571a3c37398b2faed20bdc5d1062.NginxMailingListEnglish@forum.nginx.org> References: <20170406170531.GA13617@mdounin.ru> <3fd2571a3c37398b2faed20bdc5d1062.NginxMailingListEnglish@forum.nginx.org> Message-ID: another attempt : # configuration file /etc/nginx/nginx.conf: #Core Functionality user nobody; worker_processes 8; pid /var/run/nginx.pid; pcre_jit on; error_log /var/log/nginx/error_log; #error_log /home/abackup/debug.log debug; worker_rlimit_nofile 300000; #Load Dynamic Modules include /etc/nginx/modules.d/*.load; events { worker_connections 8192; use epoll; multi_accept on; accept_mutex off; } #Settings For other core modules like for example the stream module include /etc/nginx/conf.d/main_custom_include.conf; #Settings for the http core module include /etc/nginx/conf.d/http_settings_custom.conf; # configuration file /etc/nginx/modules.d/brotli.load: load_module "/etc/nginx/modules/ngx_http_brotli_filter_module.so"; load_module "/etc/nginx/modules/ngx_http_brotli_static_module.so"; # configuration file /etc/nginx/modules.d/geoip.load: load_module "/etc/nginx/modules/ngx_http_geoip_module.so"; # configuration file /etc/nginx/modules.d/headers_more_filter.load: load_module "/etc/nginx/modules/ngx_http_headers_more_filter_module.so"; # configuration file /etc/nginx/modules.d/ndk.load: load_module "/etc/nginx/modules/ndk_http_module.so"; # configuration file /etc/nginx/conf.d/main_custom_include.conf: # configuration file /etc/nginx/conf.d/http_settings_custom.conf: http { #Set server identifier to XtendWeb-nginx more_set_headers 'Server: YOORshop'; sendfile off; sendfile_max_chunk 1M; tcp_nodelay on; #tcp_nopush on; # Slowloris mitigation client_body_timeout 10s; client_header_timeout 10s; 
keepalive_timeout 30s; send_timeout 20s; reset_timedout_connection on; keepalive_requests 512; keepalive_disable msie6 safari; types_hash_max_size 2048; server_names_hash_max_size 256000; server_names_hash_bucket_size 4096; server_tokens off; client_max_body_size 32m; client_body_buffer_size 256k; map_hash_bucket_size 256; map_hash_max_size 4096; connection_pool_size 512; client_header_buffer_size 32k; large_client_header_buffers 4 256k; request_pool_size 32k; output_buffers 4 256k; postpone_output 1460; #FastCGI fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; # the below options depend on theoretical maximum of your PHP script run-time fastcgi_read_timeout 300; fastcgi_send_timeout 300; include /etc/nginx/mime.types; default_type application/octet-stream; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; # Open File Cache open_file_cache max=10000 inactive=5m; open_file_cache_valid 2m; open_file_cache_min_uses 2; open_file_cache_errors on; # Logging Settings open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2; log_format bytes_log "$sec $bytes_sent ."; log_not_found off; access_log off; #Default maps include /etc/nginx/conf.d/maps.conf; include /etc/nginx/conf.d/maps-custom.conf; #Limit Request Zone conf include /etc/nginx/conf.d/limit_request_custom.conf; #Include File where you can add any custom settings include /etc/nginx/conf.d/custom_include.conf; #RealIP conf for CDN like CloudFlare,Incapsula etc include /etc/nginx/conf.d/cdn_realip.conf; real_ip_header X-Forwarded-For; # FastCGI and PROXY cache config include /etc/nginx/conf.d/nginx_cache_custom.conf; # Uncomment following to enable DOS mitigation server wide # include /etc/nginx/conf.d/dos_mitigate.conf; # Include All config files in /etc/nginx/conf.auto/ include /etc/nginx/conf.auto/*.conf; # Virtual Host Configs #include /etc/nginx/conf.d/default_server.conf; # Auto Generated at nDeploy install time #include /etc/nginx/sites-enabled/*.conf; # Auto Generated by nDeploy } # configuration file 
/etc/nginx/mime.types: types { text/html html htm shtml; text/css css; text/xml xml; image/gif gif; image/jpeg jpeg jpg; application/javascript js; application/atom+xml atom; application/rss+xml rss; text/mathml mml; text/plain txt; text/vnd.sun.j2me.app-descriptor jad; text/vnd.wap.wml wml; text/x-component htc; image/png png; image/tiff tif tiff; image/vnd.wap.wbmp wbmp; image/x-icon ico; image/x-jng jng; image/x-ms-bmp bmp; image/svg+xml svg svgz; image/webp webp; application/font-woff woff; application/java-archive jar war ear; application/json json; application/mac-binhex40 hqx; application/msword doc; application/pdf pdf; application/postscript ps eps ai; application/rtf rtf; application/vnd.apple.mpegurl m3u8; application/vnd.ms-excel xls; application/vnd.ms-fontobject eot; application/vnd.ms-powerpoint ppt; application/vnd.wap.wmlc wmlc; application/vnd.google-earth.kml+xml kml; application/vnd.google-earth.kmz kmz; application/x-7z-compressed 7z; application/x-cocoa cco; application/x-java-archive-diff jardiff; application/x-java-jnlp-file jnlp; application/x-makeself run; application/x-perl pl pm; application/x-pilot prc pdb; application/x-rar-compressed rar; application/x-redhat-package-manager rpm; application/x-sea sea; application/x-shockwave-flash swf; application/x-stuffit sit; application/x-tcl tcl tk; application/x-x509-ca-cert der pem crt; application/x-xpinstall xpi; application/xhtml+xml xhtml; application/xspf+xml xspf; application/zip zip; application/octet-stream bin exe dll; application/octet-stream deb; application/octet-stream dmg; application/octet-stream iso img; application/octet-stream msi msp msm; application/vnd.openxmlformats-officedocument.wordprocessingml.document docx; application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx; application/vnd.openxmlformats-officedocument.presentationml.presentation pptx; audio/midi mid midi kar; audio/mpeg mp3; audio/ogg ogg; audio/x-m4a m4a; audio/x-realaudio ra; video/3gpp 3gpp 
3gp; video/mp2t ts; video/mp4 mp4; video/mpeg mpeg mpg; video/quicktime mov; video/webm webm; video/x-flv flv; video/x-m4v m4v; video/x-mng mng; video/x-ms-asf asx asf; video/x-ms-wmv wmv; video/x-msvideo avi; } # configuration file /etc/nginx/conf.d/maps.conf: #Mapping upstream httpd ports map $scheme $cpport { http 9999; https 4430; } #Mapping $msec to $sec so that we dont break cPanel bandwidth calculator map $msec $sec { ~^(?P.+)\. $secres; } #Maps to be used with various cache templates #################################################################### map $request_method $requestnocache { default ""; POST 1; } map $http_cookie $wpcookienocache { default ""; SESS 1; PHPSESSID 1; ~*wordpress_[a-f0-9]+ 1; comment_author 1; wp-postpass 1; wordpress_no_cache 1; woocommerce_items_in_cart 1; resetpass 1; wordpress_logged_in 1; } map $http_cookie $drupalcookienocache { default ""; ~*SESS 1; } map $request_uri $wpurinocache { default ""; ~*^\/wp-admin\/.* 1; ~*^\/wp-[a-zA-Z0-9-]+\.php$ 1; ~*^\/feed\/.* 1; ~*^\/administrator\/.* 1; ~*^\/xmlrpc.php$ 1; ~*^\/index.php$ 1; ~*^\/wp-links-opml.php$ 1; ~*^\/wp-locations.php$ 1; ~*^\/sitemap(_index)?.xml 1; ~*^\/[a-z0-9_-]+-sitemap([0-9]+)?.xml 1; ~*^\/cart\/.* 1; ~*^\/my-account\/.* 1; ~*^\/wp-api\/.* 1; ~*^\/resetpass\/.* 1; } map $request_uri $drupalurinocache { default ""; ~*\/status\.php$ 1; ~*\/update\.php$ 1; ~*\/admin$ 1; ~*\/admin\/.*$ 1; ~*\/user$ 1; ~*\/user\/.* 1; ~*\/flag\/.* 1; ~*.*\/ajax\/.* 1; ~*.*\/ahah\/.* 1; ~*\/admin\/content\/backup_migrate\/export 1; } #Map for mobile user agent map $http_user_agent $ua_device { default 'desktop'; ~*(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge\ |maemo|midp|mmp|mobile.+firefox|netfront|opera\ m(ob|in)i|palm(\ os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows\ ce|xda|xiino/i 'mobile'; ~*android|ipad|playbook|silk/i 'tablet'; } 
#################################################################### # configuration file /etc/nginx/conf.d/maps-custom.conf: map $request_method $not_allowed_method { default 1; GET 0; HEAD 0; POST 0; } # GeoIP geoip_country /usr/share/GeoIP/GeoLiteCountry.dat; geoip_city /usr/share/GeoIP/GeoLiteCity.dat; map $geoip_country_code $allowed_country { default yes; RU no; CN no; UA no; } # configuration file /etc/nginx/conf.d/limit_request_custom.conf: limit_req_zone $binary_remote_addr zone=FLOODPROTECT:10m rate=10r/s; limit_req_zone $server_name zone=FLOODVHOST:20m rate=10r/s; limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m; limit_req_zone $binary_remote_addr zone=two:10m rate=2r/s; limit_req_zone $binary_remote_addr zone=three:10m rate=3r/s; limit_conn_zone $binary_remote_addr zone=PERIP:10m; limit_conn_zone $server_name zone=PERSERVER:10m; limit_conn_zone $server_name zone=PERSERVERLOGIN:10m; limit_conn_zone $server_name zone=PERSERVERSEARCH:10m; # configuration file /etc/nginx/conf.d/custom_include.conf: #Referrer Spam Map include /etc/nginx/conf.d/spam_protection.conf; ## #IP blocks include /etc/nginx/conf.d/ip_blocks.conf; ## #IP blocks include /etc/nginx/conf.d/ip_blocks_layer7.conf; # Include netdata include /etc/nginx/conf.d/netdata.conf; # configuration file /etc/nginx/conf.d/spam_protection.conf: map $http_user_agent $bad_bot { default 0; ~*^Lynx 0; # Let Lynx go through ~*UptimeRobot/2.0 0; # Let UptimeRobot ~*bingbot/2.0 0; # Let bingbot ~*checkgzipcompression.com 0; # Let check gzip ~*ocsp.comodoca.com 0; # SSL comodo libwww-perl 1; ~*(?i)(\$x0E|\%0A|\%0D|\%27|\%3C|\%00|\@\$x|\!susie|\_irc|\_works|3gse|^4all|^4anything|^Buzzbot|a\_browser|^Yooplaabot|^ltx71|^python-requests|NerdyBot|^Vegi|^VegeBot) 1; } map $http_user_agent $scanners { default 0; "~*LinkedInBot" 0; "~*Discovery" 0; "~*Bloglovin" 1; "~*Jakarta" 1; "~*toCrawl/UrlDispatcher" 1; } map $http_referer $bad_referer { default 0; "~*pastebin.com" 1; "~*torrent" 1; "~*webxtrakt" 1; } map 
$remote_addr $denied { default 0; poneytelecom.eu 1; 185.62.189.113 1; 155.94.172.27 1; 1.54.43.166 1; 104.144.28.20 1; } # configuration file /etc/nginx/conf.d/ip_blocks.conf: deny 123.125.71.56/29; deny 1.233.43.75; deny 104.194.26.1; deny 104.194.26.128/26; # configuration file /etc/nginx/conf.d/ip_blocks_layer7.conf: deny 103.194.193.1/32; deny 103.194.193.2/31; deny 103.194.193.4/30; # configuration file /etc/nginx/conf.d/cdn_realip.conf: #CloudFlare set_real_ip_from 103.21.244.0/22; set_real_ip_from 103.22.200.0/22; set_real_ip_from 103.31.4.0/22; set_real_ip_from 104.16.0.0/12; set_real_ip_from 108.162.192.0/18; set_real_ip_from 131.0.72.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 162.158.0.0/15; set_real_ip_from 172.64.0.0/13; set_real_ip_from 173.245.48.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 190.93.240.0/20; set_real_ip_from 197.234.240.0/22; set_real_ip_from 198.41.128.0/17; set_real_ip_from 199.27.128.0/21; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2405:8100::/32; set_real_ip_from 2405:b500::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; set_real_ip_from 2c0f:f248::/32; set_real_ip_from 2a06:98c0::/29; #Incapsula set_real_ip_from 199.83.128.0/21; set_real_ip_from 198.143.32.0/19; set_real_ip_from 149.126.72.0/21; set_real_ip_from 103.28.248.0/22; set_real_ip_from 45.64.64.0/22; set_real_ip_from 185.11.124.0/22; set_real_ip_from 192.230.64.0/18; set_real_ip_from 107.154.126.0/24; set_real_ip_from 2a02:e980::/29; # configuration file /etc/nginx/conf.d/nginx_cache_custom.conf: # PROXY Micro-caching proxy_cache_path /tmpcachenginx levels=1:2 keys_zone=micro:300m inactive=240m max_size=5000m; #PROXY CACHE proxy_cache_path /var/cache/nginx/proxycache levels=1:2 keys_zone=PROXYCACHE:32m inactive=360m max_size=1000m; proxy_cache_key "$scheme$request_method$host$request_uri"; ################################# #FASTCGICACHE fastcgi_cache_path /var/cache/nginx/fastcgicache levels=1:2 
keys_zone=FASTCGICACHE:32m inactive=60m max_size=512m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# configuration file /etc/nginx/conf.auto/geoip.conf:
# GeoIP
# Add Following to /etc/nginx/conf.d/custom_include.conf to preserve in rpm upgrade.
#geoip_country /usr/share/GeoIP/GeoLiteCountry.dat;
#geoip_city /usr/share/GeoIP/GeoLiteCity.dat;

1 vhost as example

# Redirects if any
# The server blocks
server {
    listen 11.0.5.21:80;
    server_name 22b-pit.com mail.22b-pit.com www.22b-pit.com;
    access_log /usr/local/apache/domlogs/22b-pit.com-bytes_log bytes_log buffer=32k flush=5m;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-XSS-Protection "1; mode=block";
    include /etc/nginx/conf.d/gzip.conf;

    # Allow "Well-Known URIs" as per RFC 5785
    location ~* ^/.well-known/ {
        allow all;
    }
    # Allow "Well-Known URIs" as per RFC 5785

    # Include NAXSI settings
    location ^~ /NaxsiRequestDenied {
        return 418;
    }
    # End Include NAXSI settings

    # Include any applications in subdirectory
    # End Include any applications in subdirectory
    include /etc/nginx/sites-enabled/22b-pit.com.manualconfig*;
    include /etc/nginx/sites-enabled/22b-pit.com.include;
}

server {
    listen 11.0.5.21:443 ssl http2;
    ssl_certificate /etc/nginx/ssl/22b-pit.com.crt;
    ssl_certificate_key /var/cpanel/ssl/installed/keys/cbb12_c8e7d_dd90f364f8f3c643df9fc97d3413d866.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_session_cache shared:SSL:10m;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_session_timeout 5m;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /var/cpanel/ssl/installed/cabundles/cPanel_Inc__681917bfb43af6b642178607e0b36ccc_1747526399.cabundle;
    resolver 213.186.33.99 80.20.9.50 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    server_name 22b-pit.com mail.22b-pit.com www.22b-pit.com;
    access_log /usr/local/apache/domlogs/22b-pit.com-bytes_log bytes_log buffer=32k flush=5m;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-XSS-Protection "1; mode=block";
    include /etc/nginx/conf.d/brotli.conf;

    # Allow "Well-Known URIs" as per RFC 5785
    location ~* ^/.well-known/ {
        allow all;
    }
    # Allow "Well-Known URIs" as per RFC 5785

    # Include NAXSI settings
    location ^~ /NaxsiRequestDenied {
        return 418;
    }
    # End Include NAXSI settings

    # Include any applications in subdirectory
    # End Include any applications in subdirectory
    include /etc/nginx/sites-enabled/22b-pit.com.manualconfig*;
    include /etc/nginx/sites-enabled/22b-pit.com.include;
}

#Proxy to cPanel Apache httpd service
root /home/fit3b/public_html;
access_log off;

location / {
    if ($bad_referer = 1) { rewrite ^(.*) https://www.filters.com/banspam/spam_traffic.html permanent; }
    if ($bad_bot = 1) { rewrite ^(.*) https://www.filters.com/banspam/badbot.html permanent; }
    if ($denied) { rewrite ^(.*) https://www.filters.com/banspam/denied.html permanent; }
    if ($scanners = 1) { rewrite ^(.*) https://www.filters.com/banspam/scanners.html permanent; }
    if ($allowed_country = no) { rewrite ^(.*) https://www.filters.com/banspam/country.html permanent; }
    if ($not_allowed_method) { rewrite ^(.*) https://www.filters.com/banspam/not_allowed.html permanent; }
    limit_conn PERIP 250;
    limit_conn PERSERVER 1000;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_connect_timeout 300s;
    proxy_pass $scheme://11.0.5.21:$cpport;
    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Proxy "";
    proxy_redirect off;
}

location ~ ^/(\.?!well-known|error_log|up\.php|CONTRIBUTING\.md|README\.md|LICENSES|readme\.html|readme\.txt|license\.txt|license\.html|wp-config\.php|xmlrpc\.php|config\.php|configure\.php|configuration\.php|testproxy\.php|sql|mySqlDumper|msd|jmx-console|jenkins|sys_cpanel|phpMyAdmin|sqlite|mysql|SQlite|sqlitemanager|SQLiteManager) {
    deny all;
    return 444;
}

location = /wp-login.php {
    if ($denied) { return 444; }
    if ($bad_referer = 1) { return 410; }
    if ($bad_bot = 1) { return 444; }
    if ($scanners = 1) { return 444; }
    if ($allowed_country = no) { return 444; }
    if ($http_user_agent = "") { rewrite ^(.*) https://www.filters.com/banspam/badbot.html permanent; }
    if ($not_allowed_method) { return 405; }
    limit_req zone=one burst=1 nodelay;
    limit_req_status 429;
    limit_conn PERIP 3;
    limit_conn PERSERVER 5;
    limit_conn_status 444;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_connect_timeout 300s;
    proxy_pass $scheme://11.0.5.21:$cpport;
    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Proxy "";
    proxy_redirect off;
}

location ~ ^/(robots\.txt|sitemap\.xml) {
    if ($denied) { return 444; }
    if ($bad_referer = 1) { return 410; }
    if ($bad_bot = 1) { return 444; }
    if ($scanners = 1) { return 444; }
    if ($allowed_country = no) { return 444; }
    if ($not_allowed_method) { return 405; }
    limit_req zone=two burst=2;
    limit_conn PERIP 4;
    limit_conn PERSERVER 100;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_connect_timeout 300s;
    proxy_pass $scheme://11.0.5.21:$cpport;
    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Scheme $scheme;
    proxy_redirect off;
    proxy_cache micro;
    proxy_cache_lock on;
    proxy_cache_min_uses 2;
    proxy_cache_valid 200 5m;
    proxy_cache_use_stale updating;
    proxy_set_header Proxy "";
    proxy_set_header Accept-Encoding "";
}

location /modules/sendtoafriend/ {
    deny all;
    return 444;
}

location ~ ^/(search) {
    if ($denied) { return 444; }
    if ($bad_referer = 1) { return 410; }
    if ($bad_bot = 1) { return 444; }
    if ($scanners = 1) { return 444; }
    if ($allowed_country = no) { return 444; }
    if ($http_user_agent = "") { rewrite ^(.*) https://www.filters.com/banspam/badbot.html permanent; }
    if ($not_allowed_method) { return 405; }
    limit_conn PERIP 35;
    limit_conn PERSERVER 100;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_connect_timeout 300s;
    proxy_pass $scheme://11.0.5.21:$cpport;
    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Proxy "";
    proxy_redirect off;
}

location ~ ^/(login|order) {
    if ($denied) { return 444; }
    if ($bad_referer = 1) { return 410; }
    if ($bad_bot = 1) { return 444; }
    if ($scanners = 1) { return 444; }
    if ($allowed_country = no) { return 444; }
    if ($http_user_agent = "") { rewrite ^(.*) https://www.filters.com/banspam/badbot.html permanent; }
    if ($not_allowed_method) { return 405; }
    limit_req zone=two burst=5;
    limit_conn PERIP 12;
    limit_conn PERSERVERLOGIN 25;
    proxy_send_timeout 900;
    proxy_read_timeout 900;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_temp_file_write_size 256k;
    proxy_connect_timeout 300s;
    proxy_pass $scheme://11.0.5.21:$cpport;
    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Proxy "";
    proxy_redirect off;
}

## Block SQL injections
set $block_sql_injections 0;
if ($query_string ~ "union.*select.*\(") { set $block_sql_injections 1; }
if ($query_string ~ "union.*all.*select.*") { set $block_sql_injections 1; }
if ($query_string ~ "concat.*\(") { set $block_sql_injections 1; }
if ($block_sql_injections = 1) { return 520; }

## Block file injections
set $block_file_injections 0;
if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") { set $block_file_injections 1; }
if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") { set $block_file_injections 1; }
if ($block_file_injections = 1) { return 520; }

## Block common exploits
set $block_common_exploits 0;
if ($query_string ~ "(<|%3C).*script.*(>|%3E)") { set $block_common_exploits 1; }
if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") { set $block_common_exploits 1; }
if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") { set $block_common_exploits 1; }
if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") { set $block_common_exploits 1; }
if ($query_string ~ "base64_(en|de)code\(.*\)") { set $block_common_exploits 1; }
if ($block_common_exploits = 1) { return 520; }

###############################################
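The first pasted line above arrives mid-directive: the `keys_zone=FASTCGICACHE:32m ...` fragment is presumably the tail of a `fastcgi_cache_path` declaration whose beginning was cut off. A minimal sketch of the usual full form follows; the cache directory, `levels=`, PHP-FPM socket, and `server_name` below are placeholders, not taken from this configuration:

```nginx
# Sketch only -- paths, socket, and server_name are hypothetical.
# A keys_zone with inactive= and max_size= normally appears inside a
# fastcgi_cache_path declaration at http{} level, paired with a cache
# key and per-location fastcgi_cache directives.
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2
                   keys_zone=FASTCGICACHE:32m inactive=60m max_size=512m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;   # placeholder upstream socket
        fastcgi_cache FASTCGICACHE;            # use the zone declared above
        fastcgi_cache_valid 200 301 10m;
    }
}
```

The zone name in `fastcgi_cache` must match the `keys_zone=` name exactly, which is why the declaration and its users are shown together here.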
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273469#msg-273469 From ajaygargnsit at gmail.com Fri Apr 7 13:45:51 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Fri, 7 Apr 2017 19:15:51 +0530 Subject: Login-Credentials based redirection? Message-ID: Hi All. We have set up multiple ssh-reverse tunnels, and our server will be listening to ports from 9000 to 10000. The server will have a public IP, and we DO NOT want just anyone to look into any of the ports by trying :: http://1.2.3.4:9000 http://1.2.3.4:9001 and so on ... So, we are wondering if we can do something like this :: 1. User types in a URL, let's say https://1.2.3.4/index.html 2. The user gets presented with a login/password page. 3. Depending upon the credentials passed, the user gets "proxied" to the appropriate end-url. So, a user with credentials for port 9000 will ONLY be able to see http://1.2.3.4:9000 A user with credentials for port 9001 will ONLY be able to see http://1.2.3.4:9001, and so on .. An important point to note is that the URLs http://1.2.3.4:9000, http://1.2.3.4:9001 must not be directly accessible, else it defeats the purpose of authentication in the first place. Is the above approach feasible logically and technically via nginx? Will be grateful for pointers. Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ceecko at gmail.com Fri Apr 7 15:07:58 2017 From: ceecko at gmail.com (Michal Kralik) Date: Fri, 7 Apr 2017 17:07:58 +0200 Subject: 99.999% my config works - then default_server is used In-Reply-To: <20170406215056.GH79252@lo0.su> References: <20170406215056.GH79252@lo0.su> Message-ID: Hi Ruslan, Thank you for your answer. This makes complete sense and is the reason why this was happening. It didn't occur to me before asking the question - thank you for your help!
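Ruslan's point, quoted below, can be sketched as follows: a server{} that only has `listen 80` can never be chosen for an HTTPS connection, so every request arriving on port 443 falls through to whichever server is the default for that port, regardless of the Host header. The hostnames below come from the thread; the fix shown (giving the named server its own 443 listener) is an inference from the quoted answer, not the poster's actual configuration:

```nginx
# The only server listening on 443 wins every HTTPS request,
# whatever Host header the client sends:
server {
    listen 443 ssl default_server;
    server_name _;
    # ssl_certificate / ssl_certificate_key omitted in this sketch
    return 302 https://example.com$request_uri;   # catch-all redirect
}

# This server matches "01.app.example.com" on port 80 only.
# Adding its own 443 listener (second listen line) is what would
# keep HTTPS requests out of the default server above:
server {
    listen 80;
    listen 443 ssl;
    server_name 01.app.example.com;
    root /var/www/app;   # placeholder
}
```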
On Thu, Apr 6, 2017 at 11:50 PM, Ruslan Ermilov wrote: > On Thu, Apr 06, 2017 at 11:18:02PM +0200, Michal Kralik wrote: > > Hi, > > > > We're facing super strange issues with nginx 1.10.3 (CentOS 7, > > 3.10.0-327.36.3.el7.x86_64). Our config works ok, we get a lot of > traffic, > > but every day a couple of requests (5-10) don't get processes correctly > by > > a server directive with the defined server_name, but rather by a server > > directive with listen's "default_server". Here's the config > > https://paste.ngx.cc/29345124359e0447 > > > > Most of the time a request to 01.app.example.com gets processed > correctly, > > but every day a couple of requests get processed by the server directive > > from file conf.d/01-default.conf. There's nothing in the nginx system > logs > > around the time when this issue happens. > > > > The access.log from the default_server shows this > > (/data/logs/nginx/access.log): > > 1.1.1.1 - - [06/Apr/2017:12:20:57 +0000] "01.app.example.com" "GET > > /robots.txt HTTP/1.1" 302 161 "-" "Mozilla/5.0 (compatible; > Googlebot/2.1; + > > http://www.google.com/bot.html)" "-" > > > > It's clearly a redirect processed by the default_server, but the request > > should be rather processed by server directive in conf.d/02-apps.conf > > because the host header clearly states "01.app.example.com". > > > > Any ideas what's going on? > > Given the provided configuration snippet, this should happen > with all HTTPS requests to 01.app.example.com. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Fri Apr 7 15:41:56 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Fri, 7 Apr 2017 21:11:56 +0530 Subject: Login-Credentials based redirection? 
In-Reply-To: References: Message-ID: I have been searching if it is possible to do a one-to-one mapping, something like ::

location /9000
{
    auth_basic
    auth_value username9000:password9000
    proxy_pass http://localhost:9000
}

location /9001
{
    auth_basic
    auth_value username9002:password9002
    proxy_pass http://localhost:9001
}

Also, ports 9000, 9001 will be firewalled from the public. Above will ensure that someone wanting to get access to say port 9000, will have to come via http://1.2.3.4/9000. and must know the credentials username9000:password9000 Do I make sense? If yes, can someone please point me out as to how to specify username:password on the URL basis. Thanks and Regards, Ajay On Fri, Apr 7, 2017 at 7:15 PM, Ajay Garg wrote: > Hi All. > > We have setup multiple ssh-reverse tunnels, and our server will be > listening to ports from 9000 to 10000. > The server will have a public-IP, and we DO NOT want just anyone to look > into any of the ports by trying :: > > http://1.2.3.4:9000 > http://1.2.3.4:9001 and so on ... > > > So, we are wondering if we can do something like this :: > > > 1. User types in a URL, let's say https://1.2.3.4/index.html > 2. The user gets presented with a login/password page. > 3. Depending upon the credentials passed, the user gets "proxied" to the > appropriate end-url. > > So, a user with credentials for port 9000 will ONLY be able to see > http://1.2.3.4:9000 > A user with credentials for port 9001 will ONLY be able to see > http://1.2.3.4:9001, and so on .. > > An important point to note that the URLs http://1.2.3.4:9000, > http://1.2.3.4:9001 must be not be directly accessible, else it defeats > the purpose of authentication at first place. > > Is the above approach feasible logically and technically-via-nginx? > > > Will be grateful for pointers. > > > Thanks and Regards, > Ajay > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Apr 7 15:53:39 2017 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Fri, 07 Apr 2017 11:53:39 -0400 Subject: Checking multiple caches before forwarding request to upstream In-Reply-To: <20170406145357.GT13617@mdounin.ru> References: <20170406145357.GT13617@mdounin.ru> Message-ID: <3be4e7d923b1ca773a68e14739df31a3.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, I found one way to make this work using lua to set the cache name. It seems to be working ok, all my tests passed.

""" Lua Script
local resty_md5 = require "resty.md5"
local str = require "resty.string"
local md5 = resty_md5:new();
local posix = require("posix")

local days_30 = 1000 * 60 * 60 * 24 * 30
local days_90 = days_30 * 3

file_exists = function(path)
    file_stat = posix.stat(path);
    if file_stat == nil or file_stat.type ~= 'regular' then
        ngx.log(ngx.ERR, "file does not exists: ", path)
        return false
    end
    ngx.log(ngx.ERR, "file exists: ", path)
    return true;
end

pick_cache = function(last_modified)
    local now = ngx.now()
    if now - last_modified < days_30 then
        return "cache_recent"
    elseif now - last_modified < days_90 then
        return "cache_midterm"
    else
        return "cache_longterm"
    end
end

md5:update(ngx.var.request_uri)
local digest = md5:final()
local md5_str = str.to_hex(digest)
ngx.log(ngx.ERR, "md5: ", md5_str)

local cache_dirs = { "../cache_recent", "../cache_midterm", "../cache_longterm", }

local file_name = string.sub(md5_str, -1) .. "/" .. string.sub(md5_str, -3, -2) .. "/" .. md5_str

for cache_count = 1, #cache_dirs do
    if file_exists(cache_dirs[cache_count] .. '/' .. file_name) then
        cache_name = string.gsub(cache_dirs[cache_count], ".*/", "")
        return cache_name
    end
end

--return pick_cache(ngx.now() - 10) -- from request header
--return pick_cache(ngx.now() - days_30) -- from request header
return pick_cache(ngx.now() - days_90) -- from request header
"""

Nginx conf:

set_by_lua_file $cache_name conf/cache_picker.lua;
proxy_cache $cache_name;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273446,273473#msg-273473 From reallfqq-nginx at yahoo.fr Fri Apr 7 17:33:37 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 7 Apr 2017 19:33:37 +0200 Subject: Login-Credentials based redirection? In-Reply-To: References: Message-ID: You could use a map to match proxy_pass URI to user names, and then use a single password file for the auth_basic module. This removes the need of having specific location URI for each user, although you could still keep doing it if they are part of your requirements. --- *B. R.* On Fri, Apr 7, 2017 at 5:41 PM, Ajay Garg wrote: > I have been searching if it is possible to do a one-to-one mapping, > something like :: > > location /9000 > { > auth_basic > auth_value username9000:password9000 > proxy_pass http://localhost:9000 > } > > location /9001 > { > auth_basic > auth_value username9002:password9002 > proxy_pass http://localhost:9001 > } > > Also, ports 9000, 9001 will be firewalled from the public. > > > Above will ensure that someone wanting to get access to say port 9000, > will have to come via http://1.2.3.4/9000. and must know the credentials > username9000:password9000 > > Do I make sense? > If yes, can someone please point me out as to how to specify > username:password on the URL basis. > > > Thanks and Regards, > Ajay > > On Fri, Apr 7, 2017 at 7:15 PM, Ajay Garg wrote: > >> Hi All. >> >> We have setup multiple ssh-reverse tunnels, and our server will be >> listening to ports from 9000 to 10000.
>> The server will have a public-IP, and we DO NOT want just anyone to look >> into any of the ports by trying :: >> >> http://1.2.3.4:9000 >> http://1.2.3.4:9001 and so on ... >> >> >> So, we are wondering if we can do something like this :: >> >> >> 1. User types in a URL, let's say https://1.2.3.4/index.html >> 2. The user gets presented with a login/password page. >> 3. Depending upon the credentials passed, the user gets "proxied" to the >> appropriate end-url. >> >> So, a user with credentials for port 9000 will ONLY be able to see >> http://1.2.3.4:9000 >> A user with credentials for port 9001 will ONLY be able to see >> http://1.2.3.4:9001, and so on .. >> >> An important point to note that the URLs http://1.2.3.4:9000, >> http://1.2.3.4:9001 must be not be directly accessible, else it defeats >> the purpose of authentication at first place. >> >> Is the above approach feasible logically and technically-via-nginx? >> >> >> Will be grateful for pointers. >> >> >> Thanks and Regards, >> Ajay >> > > > > -- > Regards, > Ajay > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry.martell at gmail.com Fri Apr 7 18:53:13 2017 From: larry.martell at gmail.com (Larry Martell) Date: Fri, 7 Apr 2017 14:53:13 -0400 Subject: threads don't run after request returns? Message-ID: I have a django app that I serve with nginx. Some requests that the app receives start python threads that are not complete when the request returns a response to the client. When I run with the django devel server the threads continue to run to completion. But when I run with nginx it seems that the threads terminate when the response is sent. Would that be expected or is there something else going on? If that is expected, is there a way I can get around this? 
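A frequent cause of exactly this symptom with Django served through uWSGI behind nginx — an assumption here, since the thread never shows the uWSGI configuration — is that uWSGI does not enable the Python threading machinery in its workers unless explicitly told to, so threads spawned during a request cannot keep running after the response is sent. A sketch of the relevant uwsgi.ini options follows; the module name is a placeholder:

```ini
; Sketch, not the poster's config. "mysite.wsgi" is a placeholder module.
[uwsgi]
module = mysite.wsgi:application
master = true
processes = 4
; Without enable-threads, uWSGI does not initialize Python threading
; in workers, and application-spawned threads cannot run.
enable-threads = true
```

If threads are enabled and the symptom persists, a task queue outside the request/response cycle is the usual longer-term design, since WSGI workers may be recycled at any time.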
From larry.martell at gmail.com Fri Apr 7 19:17:08 2017 From: larry.martell at gmail.com (Larry Martell) Date: Fri, 7 Apr 2017 15:17:08 -0400 Subject: threads don't run after request returns? In-Reply-To: References: Message-ID: On Fri, Apr 7, 2017 at 2:53 PM, Larry Martell wrote: > I have a django app that I serve with nginx. Some requests that the > app receives start python threads that are not complete when the > request returns a response to the client. When I run with the django > devel server the threads continue to run to completion. But when I run > with nginx it seems that the threads terminate when the response is > sent. Would that be expected or is there something else going on? If > that is expected, is there a way I can get around this? I have a bit more info on this ... the server was in the state where it seemed the threads were no longer running and then I restarted uwsgi and then the threads ran to completion! So now I am very confused - they clearly were still around, but seeming in some blocked state and they were able to resume when uwsgi restarted. Can anyone shed any light on what is going on? Or should I post on the uwsgi list? From al-nginx at none.at Fri Apr 7 23:42:05 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 08 Apr 2017 01:42:05 +0200 Subject: Configuring a subnet in an upstream server In-Reply-To: References: Message-ID: Am 03-04-2017 18:09, schrieb Dynastic Space: > I used a poor example. > The functionality I was interested in was adding a range of application > servers, all part of the same domain. I think you have the following options. .) or list every app server in the upstream block http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream .) DNS rr with resolver http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver .) 
in the commercial version is also a service=name and resolve possible http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server You can also use a script which creates from 192.168.0.0/16 a server line for the upstream and include it into the nginx.conf You can add in the log format the upstream address to see to which app server the request was forwarded. http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables Regards Aleks > D > > On Mon, Apr 3, 2017 at 11:04 AM, B.R. via nginx > wrote: > > What would be the meaning of that? > > How do you route traffic to 192.168.0.0? Do you really want to send > requests to 192.168.255.255? > How would you handle requests sent to some servers (but not all) if > some are not responsive? > > I suspect what you want to use is dynamic IP addresses for your > backends. Good news: you can use domain names. > > --- > B. R. > > On Sun, Apr 2, 2017 at 8:42 AM, Dynastic Space > wrote: > > Is it possible to configure a collection of servers using subnet > notation in the upstream server knob? e.g. server 192.168.0.0/16. > > Thanks, > > Dynastic _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From fam6837 at gmail.com Sat Apr 8 07:46:57 2017 From: fam6837 at gmail.com (Musta Fa) Date: Sat, 8 Apr 2017 08:46:57 +0100 Subject: custom ip address on port 80 not listening Message-ID: nginx/1.11.13 i have 4 server IPs, all registered in network interface and many domains and apps, also on different ports. 
all other ports are ok:
custom_ip1:443, custom_ip1:5000, custom_ip1:81
custom_ip2:443, custom_ip2:5000, custom_ip2:81
custom_ip3:443, custom_ip3:5000, custom_ip3:81

but i can not make it listen on custom_ip:80
port 80 is always 0.0.0.0:80

cd /etc/nginx
grep listen -r *
only "listen custom_ip:80;"

why is that??
-------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Apr 8 08:37:24 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 8 Apr 2017 09:37:24 +0100 Subject: custom ip address on port 80 not listening In-Reply-To: References: Message-ID: <20170408083724.GI3428@daoine.org> On Sat, Apr 08, 2017 at 08:46:57AM +0100, Musta Fa wrote: Hi there, > but i can not make it listen on custom_ip:80 > port 80 is always 0.0.0.0:80 > > cd /etc/nginx > grep listen -r * > only "listen custom_ip:80;" > > why is that?? Any server{} without an explicit "listen" has an implicit "listen *:80" when you run as root. Does

nginx -T | grep -n 'server\|listen'

show any obvious candidates? f -- Francis Daly francis at daoine.org From ajaygargnsit at gmail.com Sat Apr 8 13:09:59 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sat, 8 Apr 2017 18:39:59 +0530 Subject: URL-Rewriting not working Message-ID: Hi All. When I set up the following, the authentication+proxying works perfectly, with the url changing from http://1.2.3.4:2001 to http://1.2.3.4:2001/cgi-bin/webproc, and the proxied server opening up perfectly.

############################################################################
server {
    listen 2001;
    location / {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        proxy_pass http://127.0.0.1:2000;
    }
}
#############################################################################

However, I am not able to do the proxying if I perform url-rewriting. Nothing of the following works ::

a)
############################################################################
server {
    listen 2001;
    location /78 {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        proxy_pass http://127.0.0.1:2000;
    }
}
############################################################################

No URL change happens, and 404 (illegal-file-access) is obtained.

b)
############################################################################
server {
    listen 2001;
    location /78 {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        proxy_pass http://127.0.0.1:2000/;
    }
}
############################################################################

No URL change happens, and 404 (illegal-file-access) is obtained.

c)
############################################################################
server {
    listen 2001;
    location /78/ {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        proxy_pass http://127.0.0.1:2000/;
    }
}
############################################################################

The URL does change from http://1.2.3.4:2001/78 to http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained.

d)
############################################################################
server {
    listen 2001;
    location /78/ {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        proxy_pass http://127.0.0.1:2000;
    }
}
############################################################################

No URL change happens, and 404 (illegal-file-access) is obtained.

So, I guess c) is the closest to doing a url-rewrite, but I wonder why I am getting a 404, even though the URL-change is perfect.

Any ideas please?

Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anoopalias01 at gmail.com Sat Apr 8 13:19:41 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 8 Apr 2017 18:49:41 +0530 Subject: URL-Rewriting not working In-Reply-To: References: Message-ID: I think you are confusing between url-rewrite and location On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg wrote: > Hi All. > > When I setup the following, the authentication+proxying works perfect, > with the url changing from http://1.2.3.4:2001 to > http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up > perfectly. > > ############################################################ > ################ > server { > listen 2001; > location / { > > auth_basic 'Restricted'; > auth_basic_user_file /home/ > 2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000; > } > } > ############################################################ > ################# > > > > However, I am not able to do the proxying if I perform url-rewriting. > Nothing of the following works :: > > a) > ############################################################ > ################ > server { > listen 2001; > location /78 { > > auth_basic 'Restricted'; > auth_basic_user_file /home/ > 2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000; > } > } > ############################################################ > ################ > > No URL change happens, and 404 (illegal-file-access) is obtained. > > > b) > ############################################################ > ################ > server { > listen 2001; > location /78 { > > auth_basic 'Restricted'; > auth_basic_user_file /home/ > 2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000/; > } > } > ############################################################ > ################ > > No URL change happens, and 404 (illegal-file-access) is obtained. 
> > > c) > ############################################################ > ################ > server { > listen 2001; > location /78/ { > > auth_basic 'Restricted'; > auth_basic_user_file /home/ > 2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000/; > } > } > ############################################################ > ################ > > The URL does changes from http://1.2.3.4:2001/78 to > http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained. > > > d) > ############################################################ > ################ > server { > listen 2001; > location /78/ { > > auth_basic 'Restricted'; > auth_basic_user_file /home/ > 2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000; > } > } > ############################################################ > ################ > > No URL change happens, and 404 (illegal-file-access) is obtained. > > > So, I guess c) is the closest to doing a url-rewrite, but I wonder why am > I getting a 404, even though the URL-change is perfect. > > > Any ideas please? > > > Thanks and Regards, > Ajay > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Sat Apr 8 13:24:16 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sat, 8 Apr 2017 18:54:16 +0530 Subject: URL-Rewriting not working In-Reply-To: References: Message-ID: Hi Anoop. As per http://serverfault.com/questions/379675/nginx-reverse-proxy-url-rewrite, the rewrite should be automatic. But it does not work for me :( On Sat, Apr 8, 2017 at 6:49 PM, Anoop Alias wrote: > I think you are confusing between url-rewrite and location > > On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg wrote: > >> Hi All. 
>> >> When I setup the following, the authentication+proxying works perfect, >> with the url changing from http://1.2.3.4:2001 to >> http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up >> perfectly. >> >> ############################################################ >> ################ >> server { >> listen 2001; >> location / { >> >> auth_basic 'Restricted'; >> auth_basic_user_file >> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >> proxy_pass http://127.0.0.1:2000; >> } >> } >> ############################################################ >> ################# >> >> >> >> However, I am not able to do the proxying if I perform url-rewriting. >> Nothing of the following works :: >> >> a) >> ############################################################ >> ################ >> server { >> listen 2001; >> location /78 { >> >> auth_basic 'Restricted'; >> auth_basic_user_file >> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >> proxy_pass http://127.0.0.1:2000; >> } >> } >> ############################################################ >> ################ >> >> No URL change happens, and 404 (illegal-file-access) is obtained. >> >> >> b) >> ############################################################ >> ################ >> server { >> listen 2001; >> location /78 { >> >> auth_basic 'Restricted'; >> auth_basic_user_file >> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >> proxy_pass http://127.0.0.1:2000/; >> } >> } >> ############################################################ >> ################ >> >> No URL change happens, and 404 (illegal-file-access) is obtained. 
>> >> >> c) >> ############################################################ >> ################ >> server { >> listen 2001; >> location /78/ { >> >> auth_basic 'Restricted'; >> auth_basic_user_file >> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >> proxy_pass http://127.0.0.1:2000/; >> } >> } >> ############################################################ >> ################ >> >> The URL does changes from http://1.2.3.4:2001/78 to >> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained. >> >> >> d) >> ############################################################ >> ################ >> server { >> listen 2001; >> location /78/ { >> >> auth_basic 'Restricted'; >> auth_basic_user_file >> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >> proxy_pass http://127.0.0.1:2000; >> } >> } >> ############################################################ >> ################ >> >> No URL change happens, and 404 (illegal-file-access) is obtained. >> >> >> So, I guess c) is the closest to doing a url-rewrite, but I wonder why am >> I getting a 404, even though the URL-change is perfect. >> >> >> Any ideas please? >> >> >> Thanks and Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > *Anoop P Alias* > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Sat Apr 8 13:40:33 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sat, 8 Apr 2017 19:10:33 +0530 Subject: URL-Rewriting not working In-Reply-To: References: Message-ID: The 404 is thrown by whatever is working on port 2000 ;so you can check its access log and see On Sat, Apr 8, 2017 at 6:54 PM, Ajay Garg wrote: > Hi Anoop. 
> > As per http://serverfault.com/questions/379675/nginx- > reverse-proxy-url-rewrite, the rewrite should be automatic. > But it does not work for me :( > > On Sat, Apr 8, 2017 at 6:49 PM, Anoop Alias > wrote: > >> I think you are confusing between url-rewrite and location >> >> On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg wrote: >> >>> Hi All. >>> >>> When I setup the following, the authentication+proxying works perfect, >>> with the url changing from http://1.2.3.4:2001 to >>> http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up >>> perfectly. >>> >>> ############################################################ >>> ################ >>> server { >>> listen 2001; >>> location / { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>> proxy_pass http://127.0.0.1:2000; >>> } >>> } >>> ############################################################ >>> ################# >>> >>> >>> >>> However, I am not able to do the proxying if I perform url-rewriting. >>> Nothing of the following works :: >>> >>> a) >>> ############################################################ >>> ################ >>> server { >>> listen 2001; >>> location /78 { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>> proxy_pass http://127.0.0.1:2000; >>> } >>> } >>> ############################################################ >>> ################ >>> >>> No URL change happens, and 404 (illegal-file-access) is obtained. 
>>> >>> >>> b) >>> ############################################################ >>> ################ >>> server { >>> listen 2001; >>> location /78 { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>> proxy_pass http://127.0.0.1:2000/; >>> } >>> } >>> ############################################################ >>> ################ >>> >>> No URL change happens, and 404 (illegal-file-access) is obtained. >>> >>> >>> c) >>> ############################################################ >>> ################ >>> server { >>> listen 2001; >>> location /78/ { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>> proxy_pass http://127.0.0.1:2000/; >>> } >>> } >>> ############################################################ >>> ################ >>> >>> The URL does changes from http://1.2.3.4:2001/78 to >>> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained. >>> >>> >>> d) >>> ############################################################ >>> ################ >>> server { >>> listen 2001; >>> location /78/ { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>> proxy_pass http://127.0.0.1:2000; >>> } >>> } >>> ############################################################ >>> ################ >>> >>> No URL change happens, and 404 (illegal-file-access) is obtained. >>> >>> >>> So, I guess c) is the closest to doing a url-rewrite, but I wonder why >>> am I getting a 404, even though the URL-change is perfect. >>> >>> >>> Any ideas please? 
>>> >>> >>> Thanks and Regards, >>> Ajay >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> *Anoop P Alias* >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Regards, > Ajay > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Sat Apr 8 13:44:33 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sat, 8 Apr 2017 19:14:33 +0530 Subject: URL-Rewriting not working In-Reply-To: References: Message-ID: The same thing works if I put "/" in the location. The URL-change is the same, and things work seamlessly. It is definitely something that I am missing while specifying a location other than "/", that is causing incomplete proxying under the hood. On Sat, Apr 8, 2017 at 7:10 PM, Anoop Alias wrote: > The 404 is thrown by whatever is working on port 2000 ;so you can check > its access log and see > > > On Sat, Apr 8, 2017 at 6:54 PM, Ajay Garg wrote: > >> Hi Anoop. >> >> As per http://serverfault.com/questions/379675/nginx-reverse-proxy- >> url-rewrite, the rewrite should be automatic. >> But it does not work for me :( >> >> On Sat, Apr 8, 2017 at 6:49 PM, Anoop Alias >> wrote: >> >>> I think you are confusing between url-rewrite and location >>> >>> On Sat, Apr 8, 2017 at 6:39 PM, Ajay Garg >>> wrote: >>> >>>> Hi All. >>>> >>>> When I setup the following, the authentication+proxying works perfect, >>>> with the url changing from http://1.2.3.4:2001 to >>>> http://1.2.3.4:2001/cgi-bin/webproc, and the proxied0server opening up >>>> perfectly. 
>>>> >>>> ############################################################ >>>> ################ >>>> server { >>>> listen 2001; >>>> location / { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file >>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>>> proxy_pass http://127.0.0.1:2000; >>>> } >>>> } >>>> ############################################################ >>>> ################# >>>> >>>> >>>> >>>> However, I am not able to do the proxying if I perform url-rewriting. >>>> Nothing of the following works :: >>>> >>>> a) >>>> ############################################################ >>>> ################ >>>> server { >>>> listen 2001; >>>> location /78 { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file >>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>>> proxy_pass http://127.0.0.1:2000; >>>> } >>>> } >>>> ############################################################ >>>> ################ >>>> >>>> No URL change happens, and 404 (illegal-file-access) is obtained. >>>> >>>> >>>> b) >>>> ############################################################ >>>> ################ >>>> server { >>>> listen 2001; >>>> location /78 { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file >>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>>> proxy_pass http://127.0.0.1:2000/; >>>> } >>>> } >>>> ############################################################ >>>> ################ >>>> >>>> No URL change happens, and 404 (illegal-file-access) is obtained. 
>>>> >>>> >>>> c) >>>> ############################################################ >>>> ################ >>>> server { >>>> listen 2001; >>>> location /78/ { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file >>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>>> proxy_pass http://127.0.0.1:2000/; >>>> } >>>> } >>>> ############################################################ >>>> ################ >>>> >>>> The URL does changes from http://1.2.3.4:2001/78 to >>>> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained. >>>> >>>> >>>> d) >>>> ############################################################ >>>> ################ >>>> server { >>>> listen 2001; >>>> location /78/ { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file >>>> /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; >>>> proxy_pass http://127.0.0.1:2000; >>>> } >>>> } >>>> ############################################################ >>>> ################ >>>> >>>> No URL change happens, and 404 (illegal-file-access) is obtained. >>>> >>>> >>>> So, I guess c) is the closest to doing a url-rewrite, but I wonder why >>>> am I getting a 404, even though the URL-change is perfect. >>>> >>>> >>>> Any ideas please? 
>>>> >>>> >>>> Thanks and Regards, >>>> Ajay >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> >>> -- >>> *Anoop P Alias* >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > *Anoop P Alias* > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Sat Apr 8 13:48:37 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sat, 8 Apr 2017 19:18:37 +0530 Subject: Any harms in opening up multiple ports in listening-state in the context of Nginx? Message-ID: Hi All. I have been trying to do url-based redirections, but as per http://mailman.nginx.org/pipermail/nginx/2017-April/053443.html, there are issues cropping up. If the issues exist, then I will have to follow the one port for each proxy-url (constant "/" location) approach, instead of one port for all proxy-urls, using multiple locations to do the mapping approach. So, if I indeed need to open multiple ports to public in listening-state, is there any harm? Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Sat Apr 8 19:26:01 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sat, 8 Apr 2017 19:26:01 +0000 Subject: Ticket #196 followup: disallow spaces in uri by default Message-ID: Hello list, in Ticket #196 [1], Maxim Dounin suggested that spaces in URI's could be disallowed by default. 
As far as I can tell, current code still does not "disallow" those requests (not by default and not via specific configuration either), is that correct? Could this be improved, as per the suggestion in the ticket? Nginx' behavior looks weird and inconsistent in case the HTTP request contains a unescaped "space followed by a uppercase H" and troubleshooting is more complicated because of it, take a look at this for example: https://github.com/peeringdb/peeringdb/issues/132 cheers, lukas [1] https://trac.nginx.org/nginx/ticket/196 From francis at daoine.org Sun Apr 9 10:49:31 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Apr 2017 11:49:31 +0100 Subject: URL-Rewriting not working In-Reply-To: References: Message-ID: <20170409104931.GJ3428@daoine.org> On Sat, Apr 08, 2017 at 06:39:59PM +0530, Ajay Garg wrote: Hi there, > However, I am not able to do the proxying if I perform url-rewriting. > Nothing of the following works :: Note that if you want to reverse-proxy a back-end web service at a different part of the url hierarchy to where it believes it is installed, in general you need the web service to help. That is, if you want the back-end / to correspond to the front-end /x/, then if the back-end ever links to something like /a, you will need that to become translated to /x/a before it leaves the front-end. In general, the front-end cannot do that translation. So you may find it easier to configure the back-end to be (or to act as if it is) installed below /x/ directly. Otherwise things can go wrong. What that means is... > a) > server { > listen 2001; > location /78 { > > auth_basic 'Restricted'; > auth_basic_user_file > /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; > proxy_pass http://127.0.0.1:2000; > } > } > ############################################################################ > > No URL change happens, and 404 (illegal-file-access) is obtained. 
If you request http://1.2.3.4:2001/78, nginx should request
http://127.0.0.1:2000/78, and I guess that the back-end said 404.

What do the back-end logs say?

Can you show a specific "curl" command, with "-v" or "-i", that you can
use to show this error case?

> b)
> ############################################################################
> server {
>     listen 2001;
>     location /78 {
>
>         auth_basic 'Restricted';
>         auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>         proxy_pass http://127.0.0.1:2000/;
>     }
> }
> ############################################################################
>
> No URL change happens, and 404 (illegal-file-access) is obtained.

If you request http://1.2.3.4:2001/78, nginx should request
http://127.0.0.1:2000/. Does the 404 come from nginx or the back-end?

What do the back-end logs say?

(Did you request http://1.2.3.4:2001/78, or http://1.2.3.4:2001/78/ --
because the two urls are different.)

> c)
> ############################################################################
> server {
>     listen 2001;
>     location /78/ {
>
>         auth_basic 'Restricted';
>         auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>         proxy_pass http://127.0.0.1:2000/;
>     }
> }
> ############################################################################
>
> The URL does change from http://1.2.3.4:2001/78 to
> http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained.

If you request http://1.2.3.4:2001/78, nginx should return 301
redirecting you to http://1.2.3.4:2001/78/. If you then request
http://1.2.3.4:2001/78/, nginx should request http://127.0.0.1:2000/. I
guess that the back-end then returns 301 redirecting you to
/cgi-bin/webproc. If you request http://1.2.3.4:2001/cgi-bin/webproc,
then nginx should return 404 (because /cgi-bin/webproc does not start
with /78/).

Can you see all of those requests and responses, especially the ones
involving the back-end?
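If that back-end 301 to /cgi-bin/webproc is the only thing escaping
/78/, a proxy_redirect line added to configuration c) may be enough to
keep the client inside /78/. This is an untested sketch based on the
configs quoted above:

```nginx
location /78/ {
    auth_basic 'Restricted';
    auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
    proxy_pass http://127.0.0.1:2000/;
    # Rewrite Location: headers sent by the back-end, so that its
    # redirect to /cgi-bin/webproc leaves nginx as /78/cgi-bin/webproc.
    proxy_redirect / /78/;
}
```

Note that proxy_redirect rewrites only the Location (and Refresh)
response headers; links inside the returned HTML are not translated,
which is the general reverse-proxying problem described above.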
> d)
> ############################################################################
> server {
>     listen 2001;
>     location /78/ {
>
>         auth_basic 'Restricted';
>         auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
>         proxy_pass http://127.0.0.1:2000;
>     }
> }
> ############################################################################
>
> No URL change happens, and 404 (illegal-file-access) is obtained.

Similar to a)

> So, I guess c) is the closest to doing a url-rewrite, but I wonder why am I
> getting a 404, even though the URL-change is perfect.

You have multiple possible configurations there. And you have not shown
the details of the requests and responses.

Can you show some requests that you want the client to make of nginx,
and then show the matching requests that you want nginx to make of
the back-end?

You can use "curl" on the nginx machine to make similar requests of the
back-end yourself, to see the actual response details. That might give
a hint as to what, if any, proxy_redirect directives are needed.

> Any ideas please?

Can you configure the web service on port 2000 to believe that all of
its useful urls are below /78/ ? If so, use configuration d).

f
--
Francis Daly        francis at daoine.org

From ajaygargnsit at gmail.com  Sun Apr  9 11:57:31 2017
From: ajaygargnsit at gmail.com (Ajay Garg)
Date: Sun, 9 Apr 2017 17:27:31 +0530
Subject: URL-Rewriting not working
In-Reply-To: <20170409104931.GJ3428@daoine.org>
References:
Message-ID:

Hi Francis.

Thanks for your detailed analysis.

Unfortunately, our back-end service(s) (on port 2000 in the example) are
ssh reverse-tunnels, with two layers of machines behind them. The
terminating node for sure cannot be changed.

Looking at your explanations, I guess we will then have to open a port
for every service. So, for example, port 2001 for proxying to the
service running on the ssh tunnel at 2000, port 2003 for proxying to the
service running on the ssh tunnel at 2002, and so on.
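For illustration, that port-per-service layout needs no URL rewriting at
all, since each public port maps "/" straight onto its own tunnel. A
sketch, reusing the auth file and ports from the examples above:

```nginx
server {
    listen 2001;
    location / {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        # Tunnel for the first service terminates on local port 2000.
        proxy_pass http://127.0.0.1:2000;
    }
}

server {
    listen 2003;
    location / {
        auth_basic 'Restricted';
        auth_basic_user_file /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd;
        # Tunnel for the second service terminates on local port 2002.
        proxy_pass http://127.0.0.1:2002;
    }
}
```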
That brings me to my last question as per http://mailman.nginx.org/pipermail/nginx/2017-April/053448.html. If there isn't an issue with opening multiple nginx-listening-ports to the public, then I guess we are done. Would love to hear back your thoughts. Thanks and Regards, Ajay On Sun, Apr 9, 2017 at 4:19 PM, Francis Daly wrote: > On Sat, Apr 08, 2017 at 06:39:59PM +0530, Ajay Garg wrote: > > Hi there, > > > However, I am not able to do the proxying if I perform url-rewriting. > > Nothing of the following works :: > > Note that if you want to reverse-proxy a back-end web service at a > different part of the url hierarchy to where it believes it is installed, > in general you need the web service to help. > > That is, if you want the back-end / to correspond to the front-end /x/, > then if the back-end ever links to something like /a, you will need that > to become translated to /x/a before it leaves the front-end. In general, > the front-end cannot do that translation. > > So you may find it easier to configure the back-end to be (or to act as > if it is) installed below /x/ directly. > > Otherwise things can go wrong. > > What that means is... > > > a) > > server { > > listen 2001; > > location /78 { > > > > auth_basic 'Restricted'; > > auth_basic_user_file > > /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; > > proxy_pass http://127.0.0.1:2000; > > } > > } > > ############################################################ > ################ > > > > No URL change happens, and 404 (illegal-file-access) is obtained. > > If you request http://1.2.3.4:2001/78, nginx should request > http://127.0.0.1:2000/78, and I guess that the back-end said 404. > > What do the back-end logs say? > > Can you show a specific "curl" command, with "-v" or "-i", that you can > use to show this error case? 
> > > b) > > ############################################################ > ################ > > server { > > listen 2001; > > location /78 { > > > > auth_basic 'Restricted'; > > auth_basic_user_file > > /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; > > proxy_pass http://127.0.0.1:2000/; > > } > > } > > ############################################################ > ################ > > > > No URL change happens, and 404 (illegal-file-access) is obtained. > > If you request http://1.2.3.4:2001/78, nginx should request > http://127.0.0.1:2000/. Does the 404 come from nginx or the back-end? > > What do the back-end logs say? > > (Did you request http://1.2.3.4:2001/78, or http://1.2.3.4:2001/78/ -- > because the two urls arl different.) > > > c) > > ############################################################ > ################ > > server { > > listen 2001; > > location /78/ { > > > > auth_basic 'Restricted'; > > auth_basic_user_file > > /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; > > proxy_pass http://127.0.0.1:2000/; > > } > > } > > ############################################################ > ################ > > > > The URL does changes from http://1.2.3.4:2001/78 to > > http://1.2.3.4:2001/cgi-bin/webproc, but a 404 is obtained. > > If you request http://1.2.3.4:2001/78, nginx should return 301 > redirecting you to http://1.2.3.4:2001/78/. If you then request > http://1.2.3.4:2001/78/, nginx should request http://127.0.0.1:2000/. I > guess that the back-end then returns 301 redirecting you to > /cgi-bin/webproc. If you request http://1.2.3.4:2001/cgi-bin/webproc, > then nginx should return 404 (because /cgi-bin/webproc does not start > with /78/). > > Can you see all of those requests and responses, especially the ones > involving the back-end? 
> > > d) > > ############################################################ > ################ > > server { > > listen 2001; > > location /78/ { > > > > auth_basic 'Restricted'; > > auth_basic_user_file > > /home/2819163155b64c4c81f8608aa23c9faa/.htpasswd; > > proxy_pass http://127.0.0.1:2000; > > } > > } > > ############################################################ > ################ > > > > No URL change happens, and 404 (illegal-file-access) is obtained. > > Similar to a) > > > So, I guess c) is the closest to doing a url-rewrite, but I wonder why > am I > > getting a 404, even though the URL-change is perfect. > > You have multiple possible configurations there. And you have not shown > the details of the requests and responses. > > Can you show some requests that you want the client to make of nginx, > and then show the matching requests that you want nginx to make of > the back-end? > > You can use "curl" on the nginx machine to make similar requests of the > back-end yourself, to see that actual response details. That might give > a hint as to what, if any, proxy_redirect directives are needed. > > > Any ideas please? > > Can you configure the web service on port 2000 to believe that all of > its useful urls are below /78/ ? If so, use configuration d). > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Apr 9 12:28:54 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Apr 2017 13:28:54 +0100 Subject: URL-Rewriting not working In-Reply-To: References: <20170409104931.GJ3428@daoine.org> Message-ID: <20170409122854.GK3428@daoine.org> On Sun, Apr 09, 2017 at 05:27:31PM +0530, Ajay Garg wrote: Hi there, > Unfortunately, our backen-service(s) (on port 2000 in the example) is > ssh-reverse-tunnel, having two layers of machines behind them. The > terminating-node for sure cannot be changed. "In general", reverse-proxying to a different part of the url hierarchy needs back-end support. In the specific case of your system, maybe it does not need anything special. Only you can tell. > Looking at your explanations, I guess then we will have to open a port for > every service. > So, for example, port 2001 for proxying to service running on ssh-tunnel at > 2000, You could. Or, you could have nginx listening on public:2000 and proxy_pass'ing to local:2000, so you don't have to remember the public/private port mappings. Or you could have nginx listening on one port, and have multiple server{} blocks, so that userA connects to A.example.com which proxy_pass'es to local:2000; and userB connects to B.example.com which proxy_pass'es to local:2002. Or, you could (potentially) have nginx listening on one port, with one server{} block, where anyone who authenticates as userA is proxy_pass'ed to local:2000 and anyone who authenticates as userB is proxy_pass'ed to local:2002. In each of those cases, the reverse-proxying is not to a different part of the url hierarchy, so the original concern does not apply. Each case has its own costs and benefits, regarding future maintenance within nginx and external to nginx. All can work. Only you can decide which suits you best. > That brings me to my last question as per > http://mailman.nginx.org/pipermail/nginx/2017-April/053448.html. 
If there > isn't an issue with opening multiple nginx-listening-ports to the public, > then I guess we are done. Until you exhaust resources on your system, nginx does not care how many listening ports it opens. Good luck with it, f -- Francis Daly francis at daoine.org From ajaygargnsit at gmail.com Sun Apr 9 12:46:37 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sun, 9 Apr 2017 18:16:37 +0530 Subject: URL-Rewriting not working In-Reply-To: <20170409122854.GK3428@daoine.org> References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> Message-ID: Hi Francis. Thanks a ton for your suggestions. On Sun, Apr 9, 2017 at 5:58 PM, Francis Daly wrote: > On Sun, Apr 09, 2017 at 05:27:31PM +0530, Ajay Garg wrote: > > Hi there, > > > Unfortunately, our backen-service(s) (on port 2000 in the example) is > > ssh-reverse-tunnel, having two layers of machines behind them. The > > terminating-node for sure cannot be changed. > > "In general", reverse-proxying to a different part of the url hierarchy > needs back-end support. In the specific case of your system, maybe it > does not need anything special. Only you can tell. > > > Looking at your explanations, I guess then we will have to open a port > for > > every service. > > So, for example, port 2001 for proxying to service running on ssh-tunnel > at > > 2000, > > You could. > > Or, you could have nginx listening on public:2000 and proxy_pass'ing > to local:2000, so you don't have to remember the public/private port > mappings. > > Or you could have nginx listening on one port, and have multiple server{} > blocks, so that userA connects to A.example.com which proxy_pass'es > to local:2000; and userB connects to B.example.com which proxy_pass'es > to local:2002. > I doubt I would be allowed to do this, since we would be using a fixed IP (instead of the costly multiple DNS-addresses). 
> > Or, you could (potentially) have nginx listening on one port, with one > server{} block, where anyone who authenticates as userA is proxy_pass'ed > to local:2000 and anyone who authenticates as userB is proxy_pass'ed > to local:2002. > I would be very much interested if this case is possible. Kindly let know how to do the proxy-routing based upon credentials. This will really solve our last core issue (opening multiple ports), while preserving all the feature-sets. So, will be grateful to hear back from you on how to implement this :) Once again, thanks a ton for the speedy, detailed responses !! Thanks and Regards, Ajay > > In each of those cases, the reverse-proxying is not to a different part > of the url hierarchy, so the original concern does not apply. > > Each case has its own costs and benefits, regarding future maintenance > within nginx and external to nginx. All can work. Only you can decide > which suits you best. > > > That brings me to my last question as per > > http://mailman.nginx.org/pipermail/nginx/2017-April/053448.html. If > there > > isn't an issue with opening multiple nginx-listening-ports to the public, > > then I guess we are done. > > Until you exhaust resources on your system, nginx does not care how many > listening ports it opens. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Sun Apr 9 13:06:51 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sun, 9 Apr 2017 18:36:51 +0530 Subject: URL-Rewriting not working In-Reply-To: References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> Message-ID: Got it Francis !! 
server { listen 2001 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; location / { auth_basic 'Restricted'; auth_basic_user_file /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; if ($remote_user = "20da689b45c84f2b80bc84d651ed573f") { proxy_pass https://127.0.0.1:2000; } } } Love you !!! :P :P Thanks and Regards, Ajay On Sun, Apr 9, 2017 at 6:16 PM, Ajay Garg wrote: > Hi Francis. > > Thanks a ton for your suggestions. > > On Sun, Apr 9, 2017 at 5:58 PM, Francis Daly wrote: > >> On Sun, Apr 09, 2017 at 05:27:31PM +0530, Ajay Garg wrote: >> >> Hi there, >> >> > Unfortunately, our backen-service(s) (on port 2000 in the example) is >> > ssh-reverse-tunnel, having two layers of machines behind them. The >> > terminating-node for sure cannot be changed. >> >> "In general", reverse-proxying to a different part of the url hierarchy >> needs back-end support. In the specific case of your system, maybe it >> does not need anything special. Only you can tell. >> >> > Looking at your explanations, I guess then we will have to open a port >> for >> > every service. >> > So, for example, port 2001 for proxying to service running on >> ssh-tunnel at >> > 2000, >> >> You could. >> >> Or, you could have nginx listening on public:2000 and proxy_pass'ing >> to local:2000, so you don't have to remember the public/private port >> mappings. >> >> Or you could have nginx listening on one port, and have multiple server{} >> blocks, so that userA connects to A.example.com which proxy_pass'es >> to local:2000; and userB connects to B.example.com which proxy_pass'es >> to local:2002. >> > > I doubt I would be allowed to do this, since we would be using a fixed IP > (instead of the costly multiple DNS-addresses). 
> > > >> >> Or, you could (potentially) have nginx listening on one port, with one >> server{} block, where anyone who authenticates as userA is proxy_pass'ed >> to local:2000 and anyone who authenticates as userB is proxy_pass'ed >> to local:2002. >> > > I would be very much interested if this case is possible. > Kindly let know how to do the proxy-routing based upon credentials. > > This will really solve our last core issue (opening multiple ports), while > preserving all the feature-sets. > > So, will be grateful to hear back from you on how to implement this :) > > > Once again, thanks a ton for the speedy, detailed responses !! > > > Thanks and Regards, > Ajay > > >> >> In each of those cases, the reverse-proxying is not to a different part >> of the url hierarchy, so the original concern does not apply. >> >> Each case has its own costs and benefits, regarding future maintenance >> within nginx and external to nginx. All can work. Only you can decide >> which suits you best. >> >> > That brings me to my last question as per >> > http://mailman.nginx.org/pipermail/nginx/2017-April/053448.html. If >> there >> > isn't an issue with opening multiple nginx-listening-ports to the >> public, >> > then I guess we are done. >> >> Until you exhaust resources on your system, nginx does not care how many >> listening ports it opens. >> >> Good luck with it, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Regards, > Ajay > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Sun Apr 9 13:55:56 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sun, 9 Apr 2017 19:25:56 +0530 Subject: Mechanism to avoid restarting nginx upon every change Message-ID: Hi All. 
We want to implement a solution wherein the user gets proxied to the
appropriate local URL, depending upon the credentials.
The following architecture works like a charm (thanks a ton to
francis at daoine.org, without whom I would not have been able to reach
here) ::

####################################################
server {
    listen 2000 ssl;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        auth_basic 'Restricted';
        auth_basic_user_file /etc/nginx/ssl/.htpasswd;

        if ($remote_user = "user1") {
            proxy_pass https://127.0.0.1:2001;
        }

        if ($remote_user = "user2") {
            proxy_pass https://127.0.0.1:2002;
        }

        # and so on ....
    }
}
####################################################

Things are good, except that adding any new user information requires
reloading/restarting the nginx server, causing (however small) downtime.

Can this be avoided?
Can the above be implemented using some sort of database, so that nginx
itself does not have to be down, and the "remote_user <=> proxy_pass"
mapping can be retrieved from a database instead?

Will be grateful for pointers.


Thanks and Regards,
Ajay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lucas at lucasrolff.com  Sun Apr  9 13:59:05 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sun, 9 Apr 2017 13:59:05 +0000
Subject: Mechanism to avoid restarting nginx upon every change
In-Reply-To:
References:
Message-ID: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com>

Hi Ajay,

If you generate the configuration, and issue an nginx reload, it won't
cause any downtime. The master process will reread the configuration,
start new workers, and gracefully shut down the old ones.
There's absolutely no downtime involved in this process.

From: nginx on behalf of Ajay Garg
Reply-To: "nginx at nginx.org"
Date: Sunday, 9 April 2017 at 15.55
To: "nginx at nginx.org"
Subject: Mechanism to avoid restarting nginx upon every change

Hi All.
We want to implement a solution wherein the user gets proxied to the
appropriate local URL, depending upon the credentials.
The following architecture works like a charm (thanks a ton to
francis at daoine.org, without whom I would not have been able to reach
here) ::

####################################################
server {
    listen 2000 ssl;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        auth_basic 'Restricted';
        auth_basic_user_file /etc/nginx/ssl/.htpasswd;

        if ($remote_user = "user1") {
            proxy_pass https://127.0.0.1:2001;
        }

        if ($remote_user = "user2") {
            proxy_pass https://127.0.0.1:2002;
        }

        # and so on ....
    }
}
####################################################

Things are good, except that adding any new user information requires
reloading/restarting the nginx server, causing (however small) downtime.

Can this be avoided?
Can the above be implemented using some sort of database, so that nginx
itself does not have to be down, and the "remote_user <=> proxy_pass"
mapping can be retrieved from a database instead?

Will be grateful for pointers.


Thanks and Regards,
Ajay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ajaygargnsit at gmail.com  Sun Apr  9 14:25:38 2017
From: ajaygargnsit at gmail.com (Ajay Garg)
Date: Sun, 9 Apr 2017 19:55:38 +0530
Subject: Mechanism to avoid restarting nginx upon every change
In-Reply-To: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com>
References: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com>
Message-ID:

Thanks a ton Lucas.

Just checked reloading, and the previous proxy-session was intact !!

Thanks a ton again. And sorry I missed your name in the credits; you too
had helped a great deal yesterday, and today too !!

Thanks a ton again !!!

Thanks and Regards,
Ajay

On Sun, Apr 9, 2017 at 7:29 PM, Lucas Rolff wrote:

> Hi Ajay,
>
> If you generate the configuration, and issue an nginx reload, it won't
> cause any downtime.
The master process will reread the configuration, start > new workers, and gracefully shut down the old ones. > There's absolutely no downtime involved in this process. > > > From: nginx on behalf of Ajay Garg < > ajaygargnsit at gmail.com> > Reply-To: "nginx at nginx.org" > Date: Sunday, 9 April 2017 at 15.55 > To: "nginx at nginx.org" > Subject: Mechanism to avoid restarting nginx upon every change > > Hi All. > > We are wanting to implement a solution, wherein the user gets proxied to > the appropriate local-url, depending upon the credentials. > Following architecture works like a charm (thanks a ton to > francis at daoine.org, without whom I would not have been able to reach > here) :: > > #################################################### > server { > listen 2000 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /etc/nginx/ssl/.htpasswd; > > if ($remote_user = "user1") { > proxy_pass > https://127.0.0.1:2001 ; > } > > if ($remote_user = "user2") { > proxy_pass > https://127.0.0.1:2002 ; > } > > # and so on .... > > } > } > #################################################### > > > Things are good, except that adding any new user information requires > reloading/restarting the nginx server, causing (however small) downtime. > > Can this be avoided? > Can the above be implemented using some sort of database, so that the > nginx itself does not have to be down, and the "remote_user <=> proxy_pass" > mapping can be retrieved from a database instead? > > Will be grateful for pointers. > > > Thanks and Regards, > Ajay > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Apr 9 15:17:21 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 9 Apr 2017 16:17:21 +0100 Subject: URL-Rewriting not working In-Reply-To: References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> Message-ID: <20170409151721.GL3428@daoine.org> On Sun, Apr 09, 2017 at 06:36:51PM +0530, Ajay Garg wrote: Hi there, > Got it Francis !! Good news. > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; > > if ($remote_user = > "20da689b45c84f2b80bc84d651ed573f") { > proxy_pass > https://127.0.0.1:2000; > } > > } When you come to add the second user, you will see that you want one file with all the user/pass details. You will probably also see that it will be good to use a map (http://nginx.org/r/map) to set a variable for the port to connect to, based on $remote_user. Then your main config becomes just "proxy_pass http://127.0.0.1:$per_user_port;". Note that I have not tested that, and expect that there may be some more subtleties involved, such as perhaps requiring an explicit proxy_redirect directive. Note also that you will probably want to set a default value for $per_user_port, and make sure that something sensible happens when that value is used -- probably a response along the lines of "something isn't fully set up on the server yet; please wait or let us know", so the user is not confused. Good luck with it, f -- Francis Daly francis at daoine.org From ajaygargnsit at gmail.com Sun Apr 9 15:37:02 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sun, 9 Apr 2017 21:07:02 +0530 Subject: URL-Rewriting not working In-Reply-To: <20170409151721.GL3428@daoine.org> References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> <20170409151721.GL3428@daoine.org> Message-ID: Hi Francis. 
On Sun, Apr 9, 2017 at 8:47 PM, Francis Daly wrote: > On Sun, Apr 09, 2017 at 06:36:51PM +0530, Ajay Garg wrote: > > Hi there, > > > Got it Francis !! > > Good news. > > > location / { > > auth_basic 'Restricted'; > > auth_basic_user_file > > /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; > > > > if ($remote_user = > > "20da689b45c84f2b80bc84d651ed573f") { > > proxy_pass > > https://127.0.0.1:2000; > > } > > > > } > > When you come to add the second user, you will see that you want one > file with all the user/pass details. > Yes, I have already changed it to use just one file. Upon that, would not just multiple sections of "if" checks for $remote_user suffice, something like :: ######################################################################### server { listen 2000 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; location / { auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/ssl/.htpasswd; if ($remote_user = "user1") { proxy_pass https://127.0.0.1:2001 ; } if ($remote_user = "user2") { proxy_pass https://127.0.0.1:2002 ; } # and so on .... } } ######################################################################### Looking forward to hearing back from you. Thanks and Regards, Ajay > > You will probably also see that it will be good to use a map > (http://nginx.org/r/map) to set a variable for the port to connect to, > based on $remote_user. Then your main config becomes just "proxy_pass > http://127.0.0.1:$per_user_port;". > > Note that I have not tested that, and expect that there may be some more > subtleties involved, such as perhaps requiring an explicit proxy_redirect > directive. > > Note also that you will probably want to set a default value for > $per_user_port, and make sure that something sensible happens when that > value is used -- probably a response along the lines of "something isn't > fully set up on the server yet; please wait or let us know", so the user > is not confused. 
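Francis's map suggestion above could be sketched concretely like this — untested, and the user names and ports are hypothetical placeholders:

```nginx
# http-level: map each authenticated user to a backend port.
# "0" marks users with no mapping yet.
map $remote_user $per_user_port {
    default 0;
    user1   2001;
    user2   2002;
}

server {
    listen 2000 ssl;

    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        auth_basic           'Restricted';
        auth_basic_user_file /etc/nginx/ssl/.htpasswd;

        # Fail closed for users that have no port assigned yet.
        if ($per_user_port = 0) {
            return 503;
        }

        proxy_pass https://127.0.0.1:$per_user_port;
    }
}
```

With this layout, adding a user means adding one map entry and one .htpasswd entry followed by a reload; the map block itself could be generated from a database by an external script.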
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Sun Apr 9 15:47:07 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 9 Apr 2017 15:47:07 +0000 Subject: URL-Rewriting not working In-Reply-To: References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> <20170409151721.GL3428@daoine.org> Message-ID: <314BBB68-62C9-4593-BAC5-DD7587CFE8B1@lucasrolff.com> In general try to avoid using the if directive too much. https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ For what you're trying to do, using a map would be the cleanest (and nicest) way I believe ? someone can correct me if they want :-D From: nginx > on behalf of Ajay Garg > Reply-To: "nginx at nginx.org" > Date: Sunday, 9 April 2017 at 17.37 To: "nginx at nginx.org" > Subject: Re: URL-Rewriting not working Hi Francis. On Sun, Apr 9, 2017 at 8:47 PM, Francis Daly > wrote: On Sun, Apr 09, 2017 at 06:36:51PM +0530, Ajay Garg wrote: Hi there, > Got it Francis !! Good news. > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; > > if ($remote_user = > "20da689b45c84f2b80bc84d651ed573f") { > proxy_pass > https://127.0.0.1:2000; > } > > } When you come to add the second user, you will see that you want one file with all the user/pass details. Yes, I have already changed it to use just one file. 
Upon that, would not just multiple sections of "if" checks for $remote_user suffice, something like :: ######################################################################### server { listen 2000 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; location / { auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/ssl/.htpasswd; if ($remote_user = "user1") { proxy_pass https://127.0.0.1:2001; } if ($remote_user = "user2") { proxy_pass https://127.0.0.1:2002; } # and so on .... } } ######################################################################### Looking forward to hearing back from you. Thanks and Regards, Ajay You will probably also see that it will be good to use a map (http://nginx.org/r/map) to set a variable for the port to connect to, based on $remote_user. Then your main config becomes just "proxy_pass http://127.0.0.1:$per_user_port;". Note that I have not tested that, and expect that there may be some more subtleties involved, such as perhaps requiring an explicit proxy_redirect directive. Note also that you will probably want to set a default value for $per_user_port, and make sure that something sensible happens when that value is used -- probably a response along the lines of "something isn't fully set up on the server yet; please wait or let us know", so the user is not confused. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From unsay.mono at yahoo.com Sun Apr 9 22:42:23 2017
From: unsay.mono at yahoo.com (Unsay Mono)
Date: Sun, 9 Apr 2017 22:42:23 +0000 (UTC)
Subject: nginScript and accessing cookies
References: <1273096375.2849034.1491777743020.ref@mail.yahoo.com>
Message-ID: <1273096375.2849034.1491777743020@mail.yahoo.com>

Hey everyone

In this nginx news article the author states:

> With nginScript you can route traffic based on any data in the request,
> including cookies, headers, arguments, or any keywords in the request body.

So far I've been unable to find any documentation on how to read and write
to cookies from nginScript.

Has anyone got an example they could share?

Kind regards
Andreas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From unsay.mono at yahoo.com Sun Apr 9 22:45:05 2017
From: unsay.mono at yahoo.com (Unsay Mono)
Date: Sun, 9 Apr 2017 22:45:05 +0000 (UTC)
Subject: nginx ngx_http_js_module debug
References: <2104030780.2833377.1491777905222.ref@mail.yahoo.com>
Message-ID: <2104030780.2833377.1491777905222@mail.yahoo.com>

I've noticed there is a second ngx_http_js_module which, based on the name,
is used for debugging. Can someone shine some light on what exactly the
debug version of the module does differently and how it's used?

Kind regards
Andreas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeff.dyke at gmail.com Mon Apr 10 02:00:33 2017
From: jeff.dyke at gmail.com (Jeff Dyke)
Date: Sun, 9 Apr 2017 22:00:33 -0400
Subject: nginScript and accessing cookies
In-Reply-To: <1273096375.2849034.1491777743020@mail.yahoo.com>
References: <1273096375.2849034.1491777743020.ref@mail.yahoo.com> <1273096375.2849034.1491777743020@mail.yahoo.com>
Message-ID: 

At first glance I thought this might be dead, but perhaps you should look
here: https://www.nginx.com/blog/introduction-nginscript/, which supports
both Plus and OSS versions.
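For comparison, routing on a cookie value can also be done with stock nginx (no nginScript), using the built-in $cookie_NAME variables together with a map — an untested sketch, where the cookie name "variant" and the upstream names/ports are made up:

```nginx
# http-level: pick an upstream group name based on the "variant" cookie.
map $cookie_variant $route_backend {
    default backend_stable;
    beta    backend_beta;
}

upstream backend_stable { server 127.0.0.1:8001; }
upstream backend_beta   { server 127.0.0.1:8002; }

server {
    listen 80;

    location / {
        # A variable in proxy_pass that matches an upstream
        # group name is resolved against that group.
        proxy_pass http://$route_backend;
    }
}
```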
I've been working with the lua module via nginx-extras on ubuntu, they suit my needs, but that page may help you. Jeff On Sun, Apr 9, 2017 at 6:42 PM, Unsay Mono via nginx wrote: > Hey everyone > > In this > nginx > news article the author states: > > > With nginScript you can route traffic based on any data in the request, > including *cookies*, headers, arguments, or any keywords in the request > body. > > So far I've been unable to find any documentation on how to *read *and *write > *to cookies from nginScript. > > Has anyone got an example they could share? > > Kind regards > > Andreas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Mon Apr 10 04:53:42 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Mon, 10 Apr 2017 10:23:42 +0530 Subject: URL-Rewriting not working In-Reply-To: <314BBB68-62C9-4593-BAC5-DD7587CFE8B1@lucasrolff.com> References: <20170409104931.GJ3428@daoine.org> <20170409122854.GK3428@daoine.org> <20170409151721.GL3428@daoine.org> <314BBB68-62C9-4593-BAC5-DD7587CFE8B1@lucasrolff.com> Message-ID: Thanks a ton Lucas .. moved to map :) Thanks a ton again !!! Thanks and Regards, Ajay On Sun, Apr 9, 2017 at 9:17 PM, Lucas Rolff wrote: > In general try to avoid using the if directive too much. > https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ > > For what you're trying to do, using a map would be the cleanest (and > nicest) way I believe ? someone can correct me if they want :-D > > From: nginx on behalf of Ajay Garg < > ajaygargnsit at gmail.com> > Reply-To: "nginx at nginx.org" > Date: Sunday, 9 April 2017 at 17.37 > To: "nginx at nginx.org" > Subject: Re: URL-Rewriting not working > > Hi Francis. 
> > On Sun, Apr 9, 2017 at 8:47 PM, Francis Daly wrote: > >> On Sun, Apr 09, 2017 at 06:36:51PM +0530, Ajay Garg wrote: >> >> Hi there, >> >> > Got it Francis !! >> >> Good news. >> >> > location / { >> > auth_basic 'Restricted'; >> > auth_basic_user_file >> > /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; >> > >> > if ($remote_user = >> > "20da689b45c84f2b80bc84d651ed573f") { >> > proxy_pass >> > https://127.0.0.1:2000; >> > } >> > >> > } >> >> When you come to add the second user, you will see that you want one >> file with all the user/pass details. >> > > > Yes, I have already changed it to use just one file. > Upon that, would not just multiple sections of "if" checks for > $remote_user suffice, something like :: > > ######################################################################### > server { > listen 2000 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /etc/nginx/ssl/.htpasswd; > > if ($remote_user = "user1") { > proxy_pass > https://127.0.0.1:2001 ; > } > > if ($remote_user = "user2") { > proxy_pass > https://127.0.0.1:2002 ; > } > > # and so on .... > > } > } > ######################################################################### > > Looking forward to hearing back from you. > > > Thanks and Regards, > Ajay > > > > >> >> You will probably also see that it will be good to use a map >> (http://nginx.org/r/map) to set a variable for the port to connect to, >> based on $remote_user. Then your main config becomes just "proxy_pass >> http://127.0.0.1:$per_user_port;". >> >> Note that I have not tested that, and expect that there may be some more >> subtleties involved, such as perhaps requiring an explicit proxy_redirect >> directive. 
>> >> Note also that you will probably want to set a default value for >> $per_user_port, and make sure that something sensible happens when that >> value is used -- probably a response along the lines of "something isn't >> fully set up on the server yet; please wait or let us know", so the user >> is not confused. >> >> Good luck with it, >> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Regards, > Ajay > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Apr 10 07:34:13 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 10 Apr 2017 09:34:13 +0200 Subject: Mechanism to avoid restarting nginx upon every change In-Reply-To: References: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com> Message-ID: You could have got your answer yourself by Reading The... Fine? Manual: https://nginx.org/en/docs/control.html There are tons of interesting pieces of informations there, by the nature of said docs... ?I suggest you take a look at everything: https://nginx.org/en/docs/? --- *B. R.* On Sun, Apr 9, 2017 at 4:25 PM, Ajay Garg wrote: > Thanks a ton Lucas. > > Just checked reloading, and the previous proxy-session was intact !! > Thanks a ton again. > > And sorry I missed your name in the credits, you too had helped a greate > deal yesterday, and today too !! > Thanks a ton again !!! > > > Thanks and Regards, > Ajay > > On Sun, Apr 9, 2017 at 7:29 PM, Lucas Rolff wrote: > >> Hi Ajay, >> >> If you generate the configuration, and issue a nginx reload ? it won't >> cause any downtime. 
The master process will reread the configuration, start >> new workers, and gracefully shut down the old ones. >> There's absolutely no downtime involved in this process. >> >> >> From: nginx on behalf of Ajay Garg < >> ajaygargnsit at gmail.com> >> Reply-To: "nginx at nginx.org" >> Date: Sunday, 9 April 2017 at 15.55 >> To: "nginx at nginx.org" >> Subject: Mechanism to avoid restarting nginx upon every change >> >> Hi All. >> >> We are wanting to implement a solution, wherein the user gets proxied to >> the appropriate local-url, depending upon the credentials. >> Following architecture works like a charm (thanks a ton to >> francis at daoine.org, without whom I would not have been able to reach >> here) :: >> >> #################################################### >> server { >> listen 2000 ssl; >> >> ssl_certificate /etc/nginx/ssl/nginx.crt; >> ssl_certificate_key /etc/nginx/ssl/nginx.key; >> >> location / { >> auth_basic 'Restricted'; >> auth_basic_user_file >> /etc/nginx/ssl/.htpasswd; >> >> if ($remote_user = "user1") { >> proxy_pass >> https://127.0.0.1:2001 ; >> } >> >> if ($remote_user = "user2") { >> proxy_pass >> https://127.0.0.1:2002 ; >> } >> >> # and so on .... >> >> } >> } >> #################################################### >> >> >> Things are good, except that adding any new user information requires >> reloading/restarting the nginx server, causing (however small) downtime. >> >> Can this be avoided? >> Can the above be implemented using some sort of database, so that the >> nginx itself does not have to be down, and the "remote_user <=> proxy_pass" >> mapping can be retrieved from a database instead? >> >> Will be grateful for pointers. 
>> >> >> Thanks and Regards, >> Ajay >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Regards, > Ajay > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Apr 10 08:42:25 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Mon, 10 Apr 2017 04:42:25 -0400 Subject: Nginx - API Gateway is not forwarding the request to Auth Service Message-ID: <0a3932e2b3be214dd172c5ffec78b969.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to implement the NGINX API gateway in nginx 1.10.3 community version. I am facing the issue that NGINX is not forwarding the request to authentication service. nginx configuration is pasted at the end of this thread. I have written authentication service which is listening for login requests on /login. My protected application has no login page and responds with 401 status if its tried to be accessed without login in authentication service. Now according to the nginx auth_request module, if the protected applicaiton throws 401 status then NGINX forwards the request to authentication service for login and after successful login the request is forwarded back to the backend server. BUT, this is not happening for my configuration. When I try to access my nginx application it is not forwarding the request to auth service. I have verified this by looking at the response headers which has the custom sessionid name of protected application only. 
I have checked my installation and it shows that
--with-http_auth_request_module is compiled in.

NGINX Configuration:

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    # proxy_intercept_errors on;

    server {
        listen 8180;

        location / {
            auth_request /login;
            proxy_pass http://adi-backend;
        }

        location /api {
            auth_request /login;
            proxy_pass http://adi-backend;
        }

        location = /login {
            proxy_pass http://localhost:8080/login;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    upstream adi-backend {
        server localhost:8280;
    }

    upstream authserv {
        server localhost:8380;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273515,273515#msg-273515

From mdounin at mdounin.ru Mon Apr 10 12:12:37 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 10 Apr 2017 15:12:37 +0300
Subject: Nginx - API Gateway is not forwarding the request to Auth Service
In-Reply-To: <0a3932e2b3be214dd172c5ffec78b969.NginxMailingListEnglish@forum.nginx.org>
References: <0a3932e2b3be214dd172c5ffec78b969.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170410121237.GE13617@mdounin.ru>

Hello!

On Mon, Apr 10, 2017 at 04:42:25AM -0400, zaidahmd wrote:

> I am trying to implement the NGINX API gateway in nginx 1.10.3 community
> version. I am facing the issue that NGINX is not forwarding the request to
> the authentication service. The nginx configuration is pasted at the end of
> this thread.
>
> I have written an authentication service which listens for login requests
> on /login.
> My protected application has no login page and responds with a 401 status
> if it is accessed without a login in the authentication service.
> > Now according to the nginx auth_request module, if the protected applicaiton > throws 401 status then NGINX forwards the request to authentication service > for login and after successful login the request is forwarded back to the > backend server. You misunderstood what auth_request does. Instead, it issues a subrequest for every incoming request, and allows further processing of the request if and only if the subrequest returns 200. No attempts are made to look into the response returned for the original request, that is, "protected application". Quoting the documentation, http://nginx.org/en/docs/http/ngx_http_auth_request_module.html: : The ngx_http_auth_request_module module (1.5.4+) implements : client authorization based on the result of a subrequest. : If the subrequest returns a 2xx response code, the access is : allowed. If it returns 401 or 403, the access is denied with the : corresponding error code. Any other response code returned by the : subrequest is considered an error. That is, the only thing which is expected to happen in your configuration is a subrequest to "/login" for every request. If this subrequest returns 200, access will be allowed for the original request. If it returns anything else, access will be denied. -- Maxim Dounin http://nginx.org/ From pluknet at nginx.com Mon Apr 10 17:16:54 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 10 Apr 2017 20:16:54 +0300 Subject: Nginx upstream server certificate verification In-Reply-To: References: Message-ID: > On 6 Apr 2017, at 21:46, shivramg94 wrote: > > Thank Sergey, for you response. > > I have one more question. If I have multiple upstream server host names in > the upstream server block, then how can I specify the specific upstream > server host name to which the request is being proxied, in the > proxy_ssl_name directive? 
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273295,273462#msg-273462 You could try to construct proxy_ssl_name based on upstream address, e.g.: map $upstream_addr $name { ~\Q192.0.2.1:8000\E$ first; ~\Q192.0.2.2:8000\E$ second; } proxy_ssl_name $name; Note well that $upstream_addr may contain multiple addresses, use it with a special care. See for details: http://nginx.org/r/$upstream_addr -- Sergey Kandaurov From alex at samad.com.au Mon Apr 10 21:31:05 2017 From: alex at samad.com.au (Alex Samad) Date: Mon, 10 Apr 2017 21:31:05 +0000 Subject: Mechanism to avoid restarting nginx upon every change In-Reply-To: References: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com> Message-ID: But long live sessions are closed and I've had lua session information persist with a reload. Needed a restart A On Sun, 9 Apr 2017 at 21:35, B.R. via nginx wrote: > You could have got your answer yourself by Reading The... Fine? Manual: > https://nginx.org/en/docs/control.html > > There are tons of interesting pieces of informations there, by the nature > of said docs... > ?I suggest you take a look at everything: https://nginx.org/en/docs/? > --- > *B. R.* > > On Sun, Apr 9, 2017 at 4:25 PM, Ajay Garg wrote: > > Thanks a ton Lucas. > > Just checked reloading, and the previous proxy-session was intact !! > Thanks a ton again. > > And sorry I missed your name in the credits, you too had helped a greate > deal yesterday, and today too !! > Thanks a ton again !!! > > > Thanks and Regards, > Ajay > > On Sun, Apr 9, 2017 at 7:29 PM, Lucas Rolff wrote: > > Hi Ajay, > > If you generate the configuration, and issue a nginx reload ? it won't > cause any downtime. The master process will reread the configuration, start > new workers, and gracefully shut down the old ones. > There's absolutely no downtime involved in this process. 
> > > From: nginx on behalf of Ajay Garg < > ajaygargnsit at gmail.com> > Reply-To: "nginx at nginx.org" > Date: Sunday, 9 April 2017 at 15.55 > To: "nginx at nginx.org" > Subject: Mechanism to avoid restarting nginx upon every change > > Hi All. > > We are wanting to implement a solution, wherein the user gets proxied to > the appropriate local-url, depending upon the credentials. > Following architecture works like a charm (thanks a ton to > francis at daoine.org, without whom I would not have been able to reach > here) :: > > #################################################### > server { > listen 2000 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /etc/nginx/ssl/.htpasswd; > > if ($remote_user = "user1") { > proxy_pass > https://127.0.0.1:2001 ; > } > > if ($remote_user = "user2") { > proxy_pass > https://127.0.0.1:2002 ; > } > > # and so on .... > > } > } > #################################################### > > > Things are good, except that adding any new user information requires > reloading/restarting the nginx server, causing (however small) downtime. > > Can this be avoided? > Can the above be implemented using some sort of database, so that the > nginx itself does not have to be down, and the "remote_user <=> proxy_pass" > mapping can be retrieved from a database instead? > > Will be grateful for pointers. 
> > Thanks and Regards,
> > Ajay
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Tue Apr 11 06:04:32 2017
From: nginx-forum at forum.nginx.org (zaidahmd)
Date: Tue, 11 Apr 2017 02:04:32 -0400
Subject: Nginx - API Gateway is not forwarding the request to Auth Service
In-Reply-To: <20170410121237.GE13617@mdounin.ru>
References: <20170410121237.GE13617@mdounin.ru>
Message-ID: <8813de41ce25c3c7744f12067e51f9c8.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim,

Thanks for the response. As you said below,

> Instead, it issues a
> subrequest for every incoming request, and allows further
> processing of the request if and only if the subrequest returns
> 200.

This means my authentication service will be getting a subrequest to /login
every time a request reaches nginx, and if the subrequest returns 401 then
it means the user needs to log in. So how can I show a login page when the
subrequest returns 401? My authentication service sends a redirect in the
response along with the 401 status.

When I access my authentication service directly from a browser without a
logged-in session id, my browser gets a 401 with a redirect to /login,
which is the login page. I want this same behaviour in my NGINX API
gateway, i.e. NGINX checks each request; if the request needs a login, the
login page should be shown; otherwise, let the request access the resource.
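One common way to get that behaviour, consistent with Maxim's description of auth_request, is to let the auth subrequest return 401 and intercept that status with error_page, redirecting the browser to the login page. An untested sketch — the /auth endpoint, ports, and upstream name are hypothetical, and the auth endpoint must tolerate the body-less subrequest:

```nginx
location /api {
    # Every request triggers a subrequest to /auth;
    # 2xx allows it, 401/403 denies it.
    auth_request /auth;

    # Turn the 401 from the auth subrequest into a redirect.
    error_page 401 = @need_login;

    proxy_pass http://adi-backend;
}

location @need_login {
    # Send unauthenticated browsers to the login page.
    return 302 /login;
}

location = /auth {
    internal;
    proxy_pass http://localhost:8080/auth;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

The key point is that /auth is a pure yes/no check endpoint, separate from the /login page that users actually see.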
And I followed the nginx auth_request user guide to configure this, as
shown in the config in my first post.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273515,273523#msg-273523

From nginx-forum at forum.nginx.org Tue Apr 11 07:51:37 2017
From: nginx-forum at forum.nginx.org (zaidahmd)
Date: Tue, 11 Apr 2017 03:51:37 -0400
Subject: Nginx - API Gateway is not forwarding the request to Auth Service
In-Reply-To: <8813de41ce25c3c7744f12067e51f9c8.NginxMailingListEnglish@forum.nginx.org>
References: <20170410121237.GE13617@mdounin.ru> <8813de41ce25c3c7744f12067e51f9c8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi Maxim,

After implementing your valuable inputs I came across the following error
in my application:

Error: "Request 'GET /login' doesn't match 'POST /login'"

This means that nginx is sending GET requests to the authentication
service. The flow is as follows, in line with your feedback:

1. I typed the nginx URL in the browser: "http://localhost:8180/api"
2. NGINX forwarded the request to "http://localhost:8080/login"
3. I got the above error in my auth service, i.e. "Request 'GET /login'
doesn't match 'POST /login'"

This forwarded request to the auth service should be a POST to /login.

Alternatively, if I type "http://localhost:8180/login", which is also an
NGINX location, I am able to see the login page of my auth service. I want
the same for the other nginx locations, i.e. if the request is not
authenticated, show the login page of the auth service, get the user
authenticated, and then forward the requests to the upstream server.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273515,273524#msg-273524

From nginx-forum at forum.nginx.org Tue Apr 11 08:28:54 2017
From: nginx-forum at forum.nginx.org (comput3rz)
Date: Tue, 11 Apr 2017 04:28:54 -0400
Subject: [ Module development - Socket ]
Message-ID: 

Hi,

I need to develop a module which makes a connection with an external
server.
I tried to use the simple TCP sockets available in the "sys/socket.h"
(Linux) library, but it appears that was not a good idea, as nothing was
working correctly.

Then I searched the source code for keywords like "connection, socket" and
found the "ngx_connection.h" file, which, I suppose, is the API to be used
if I want to make an external connection. Looking on the web for
tutorials/resources using this API, I found nothing more than unanswered
stackoverflow threads...

I'm pretty sure that including "sys/socket.h" into nginx is not the right
solution, as it makes for ugly code and bypasses the event system that
makes nginx so fast. Could you please help me find a solution?

Regards,
XXX

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273525,273525#msg-273525

From reallfqq-nginx at yahoo.fr Tue Apr 11 11:15:20 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 11 Apr 2017 13:15:20 +0200
Subject: Mechanism to avoid restarting nginx upon every change
In-Reply-To: References: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com>
Message-ID: 

I do not know anything about third-party modules. I'll let experts on the
lua one answer that one.

The baseline is: you should not need to.
---
*B. R.*

On Mon, Apr 10, 2017 at 11:31 PM, Alex Samad wrote:

> But long live sessions are closed and I've had lua session information
> persist with a reload. Needed a restart
>
> A
> On Sun, 9 Apr 2017 at 21:35, B.R. via nginx wrote:
>> You could have got your answer yourself by Reading The... Fine? Manual:
>> https://nginx.org/en/docs/control.html
>>
>> There are tons of interesting pieces of informations there, by the nature
>> of said docs...
>> I suggest you take a look at everything: https://nginx.org/en/docs/
>> ---
>> *B. R.*
>>
>> On Sun, Apr 9, 2017 at 4:25 PM, Ajay Garg wrote:
>>
>> Thanks a ton Lucas.
>>
>> Just checked reloading, and the previous proxy-session was intact !!
>> >> And sorry I missed your name in the credits, you too had helped a greate >> deal yesterday, and today too !! >> Thanks a ton again !!! >> >> >> Thanks and Regards, >> Ajay >> >> On Sun, Apr 9, 2017 at 7:29 PM, Lucas Rolff wrote: >> >> Hi Ajay, >> >> If you generate the configuration, and issue a nginx reload ? it won't >> cause any downtime. The master process will reread the configuration, start >> new workers, and gracefully shut down the old ones. >> There's absolutely no downtime involved in this process. >> >> >> From: nginx on behalf of Ajay Garg < >> ajaygargnsit at gmail.com> >> Reply-To: "nginx at nginx.org" >> Date: Sunday, 9 April 2017 at 15.55 >> To: "nginx at nginx.org" >> Subject: Mechanism to avoid restarting nginx upon every change >> >> Hi All. >> >> We are wanting to implement a solution, wherein the user gets proxied to >> the appropriate local-url, depending upon the credentials. >> Following architecture works like a charm (thanks a ton to >> francis at daoine.org, without whom I would not have been able to reach >> here) :: >> >> #################################################### >> server { >> listen 2000 ssl; >> >> ssl_certificate /etc/nginx/ssl/nginx.crt; >> ssl_certificate_key /etc/nginx/ssl/nginx.key; >> >> location / { >> auth_basic 'Restricted'; >> auth_basic_user_file >> /etc/nginx/ssl/.htpasswd; >> >> if ($remote_user = "user1") { >> proxy_pass >> https://127.0.0.1:2001 ; >> } >> >> if ($remote_user = "user2") { >> proxy_pass >> https://127.0.0.1:2002 ; >> } >> >> # and so on .... >> >> } >> } >> #################################################### >> >> >> Things are good, except that adding any new user information requires >> reloading/restarting the nginx server, causing (however small) downtime. >> >> Can this be avoided? 
>> Can the above be implemented using some sort of database, so that the >> nginx itself does not have to be down, and the "remote_user <=> proxy_pass" >> mapping can be retrieved from a database instead? >> >> Will be grateful for pointers. >> >> >> Thanks and Regards, >> Ajay >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> -- >> Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Apr 11 11:48:26 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Apr 2017 12:48:26 +0100 Subject: Nginx - API Gateway is not forwarding the request to Auth Service In-Reply-To: <8813de41ce25c3c7744f12067e51f9c8.NginxMailingListEnglish@forum.nginx.org> References: <20170410121237.GE13617@mdounin.ru> <8813de41ce25c3c7744f12067e51f9c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170411114826.GN3428@daoine.org> On Tue, Apr 11, 2017 at 02:04:32AM -0400, zaidahmd wrote: Hi there, > This means my authentication service will be getting a subrequest to /login > everytime a request reaches nginx. And if the subrequest returns 401 then it > means the user needs to login. I'm not sure what your system design is, but it sounds like it may not match the nginx auth_request design. If your application does its own authentication, then you possibly do not need nginx auth_request at all. Do things work for you if the nginx side is just "proxy_pass"? And, out of interest, which nginx auth_request user guide did you use for inspiration? 
It may be that that document can be improved for the next person. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Apr 11 14:10:16 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Apr 2017 17:10:16 +0300 Subject: [ Module development - Socket ] In-Reply-To: References: Message-ID: <20170411141016.GK13617@mdounin.ru> Hello! On Tue, Apr 11, 2017 at 04:28:54AM -0400, comput3rz wrote: > Hi, > > I need to develop a module which make a connexion with an external server. I > tried to use simple TCP sockets available > > in the "sys/socket.h" (Linux) library but it appears that it wasn't a good > idea as nothing was working correctly. > > Then, I searched in the source code keywords like "connection, socket" and I > found the "ngx_connection.h" file which I > > suppose, is the API to be used if I want to do an external connection. > Looking on the web for tutorials/resources using > > this API I found nothing more than unanswered stackoverflow threads... > > I'm pretty sure that including "sys/socket.h" into nginx is not the good > solution as it makes ugly code and broke the pipes > > system which makes nginx so fast, could you please help me to find a > solution ? Try looking into src/http/ngx_http_upstream.c for the upstream module implementation, the base module used by proxy and other modules which talk to backend servers. Basically, you are expected to use the ngx_event_connect_peer() function to connect to an external server, and then use appropriate event handling. Additional and probably simplier examples can be found in the mail module (src/mail/ngx_mail_auth_http_module.c, src/mail/ngx_mail_proxy_module.c), the stream module (src/stream/ngx_stream_proxy_module.c), as well as in OCSP stapling code (src/event/ngx_event_openssl_stapling.c). 
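For orientation, the connect-side pattern used in those files can be compressed into a sketch like the following. This is an illustrative, pseudocode-style fragment against nginx's internal API, not a compilable unit: it depends on the nginx source tree, the `ngx_my_ctx_t` type and the two handler names are hypothetical, and field details vary slightly across versions. The real reference is the code listed above.

```c
/* Sketch only: connecting to an external server from inside a module,
 * modelled loosely on src/mail/ngx_mail_auth_http_module.c. */
static ngx_int_t
my_module_connect(ngx_my_ctx_t *ctx, ngx_addr_t *peer_addr, ngx_log_t *log)
{
    ngx_int_t  rc;

    ctx->peer.sockaddr = peer_addr->sockaddr;   /* where to connect */
    ctx->peer.socklen = peer_addr->socklen;
    ctx->peer.name = &peer_addr->name;
    ctx->peer.get = ngx_event_get_peer;         /* trivial "get peer" callback */
    ctx->peer.log = log;
    ctx->peer.log_error = NGX_ERROR_ERR;

    rc = ngx_event_connect_peer(&ctx->peer);    /* non-blocking connect() */

    if (rc == NGX_ERROR || rc == NGX_BUSY || rc == NGX_DECLINED) {
        return NGX_ERROR;
    }

    /* NGX_OK or NGX_AGAIN: the socket is already registered with the
     * event loop; install handlers instead of blocking on it. */
    ctx->peer.connection->data = ctx;
    ctx->peer.connection->read->handler = my_module_read_handler;
    ctx->peer.connection->write->handler = my_module_write_handler;

    return NGX_OK;
}
```

The point of the pattern is that the connection is created through nginx's own machinery, so readiness is delivered via the event handlers rather than via raw `sys/socket.h` calls that would block a worker.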
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Apr 11 14:19:34 2017 From: nginx-forum at forum.nginx.org (comput3rz) Date: Tue, 11 Apr 2017 10:19:34 -0400 Subject: [ Module development - Socket ] In-Reply-To: <20170411141016.GK13617@mdounin.ru> References: <20170411141016.GK13617@mdounin.ru> Message-ID: <5dcba18270215da85cbf10f641ade268.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim! =) I found what I was looking for. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273525,273532#msg-273532 From mdounin at mdounin.ru Tue Apr 11 15:31:10 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Apr 2017 18:31:10 +0300 Subject: Ticket #196 followup: disallow spaces in uri by default In-Reply-To: References: Message-ID: <20170411153110.GL13617@mdounin.ru> Hello! On Sat, Apr 08, 2017 at 07:26:01PM +0000, Lukas Tribus wrote: > in Ticket #196 [1], Maxim Dounin suggested that spaces in URIs > could be disallowed by default. > > As far as I can tell, the current code still does not "disallow" > those requests (not by default, and not via specific > configuration either), is that correct? Yes. There have been no changes in this area. > Could this be improved, as per the suggestion in the ticket? I think it is something to be considered in the 1.13.x timeframe, as we have some plans to look into the HTTP parser anyway. I think the main question here is: is it OK to just drop support for spaces, or do we have to introduce some option to preserve the old behaviour?
My opinion: I think we will need the configuration knob, so there is time to fix the problem, as a client bug is not always immediately fixable. Either that, or we do it like Apache (returning file abc when the request is GET /abc xyz HTTP/1.1), but that is still inconsistent and I don't like it personally. Thanks, Lukas From shirley at nginx.com Tue Apr 11 16:26:19 2017 From: shirley at nginx.com (Shirley Bailes) Date: Tue, 11 Apr 2017 09:26:19 -0700 Subject: nginx.conf 2017: Call For Papers Now Open - Submit a Talk! Message-ID: Hello NGINX community, We're excited to announce that the call for proposals for the fourth NGINX conference, nginx.conf 2017, is open. Please submit a talk, and share the CFP with those you know who have good NGINX stories to share. *Deadline to submit: 11:59PM PT, May 8, 2017.* Our goal each year is to help attendees learn about NGINX use cases, insights, and best practices from real-world experts like you. This year, our theme is 'Architect the Future'. Tell us how you envision using NGINX with tomorrow's applications, or check out the suggested topics below for inspiration on the types of talks we're looking for: - Architecture & Development - High-Performance Web - Operations & Deployment - Case Studies *Conference details: * - nginx.conf 2017 will be held in Portland, OR - Venue: The Nines Hotel - Dates: September 6-8 - Twitter: #nginxconf Let us know if you have any questions. Otherwise, we look forward to reading the stories you have to share. *s Shirley Bailes Director, Event Marketing Mobile: 707.569.4888 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ajaygargnsit at gmail.com Wed Apr 12 04:08:01 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 09:38:01 +0530 Subject: Mechanism to avoid restarting nginx upon every change In-Reply-To: References: <3E44F131-8C69-410F-AE08-BFF9E778693C@lucasrolff.com> Message-ID: On Mon, Apr 10, 2017 at 1:04 PM, B.R. via nginx wrote: > You could have got your answer yourself by Reading The... Fine? Manual: > https://nginx.org/en/docs/control.html > > There are tons of interesting pieces of informations there, by the nature > of said docs... > ?I suggest you take a look at everything: https://nginx.org/en/docs/? > Thanks B.R., I surely will !! > --- > *B. R.* > > On Sun, Apr 9, 2017 at 4:25 PM, Ajay Garg wrote: > >> Thanks a ton Lucas. >> >> Just checked reloading, and the previous proxy-session was intact !! >> Thanks a ton again. >> >> And sorry I missed your name in the credits, you too had helped a greate >> deal yesterday, and today too !! >> Thanks a ton again !!! >> >> >> Thanks and Regards, >> Ajay >> >> On Sun, Apr 9, 2017 at 7:29 PM, Lucas Rolff wrote: >> >>> Hi Ajay, >>> >>> If you generate the configuration, and issue a nginx reload ? it won't >>> cause any downtime. The master process will reread the configuration, start >>> new workers, and gracefully shut down the old ones. >>> There's absolutely no downtime involved in this process. >>> >>> >>> From: nginx on behalf of Ajay Garg < >>> ajaygargnsit at gmail.com> >>> Reply-To: "nginx at nginx.org" >>> Date: Sunday, 9 April 2017 at 15.55 >>> To: "nginx at nginx.org" >>> Subject: Mechanism to avoid restarting nginx upon every change >>> >>> Hi All. >>> >>> We are wanting to implement a solution, wherein the user gets proxied to >>> the appropriate local-url, depending upon the credentials. 
>>> Following architecture works like a charm (thanks a ton to >>> francis at daoine.org, without whom I would not have been able to reach >>> here) :: >>> >>> #################################################### >>> server { >>> listen 2000 ssl; >>> >>> ssl_certificate /etc/nginx/ssl/nginx.crt; >>> ssl_certificate_key /etc/nginx/ssl/nginx.key; >>> >>> location / { >>> auth_basic 'Restricted'; >>> auth_basic_user_file >>> /etc/nginx/ssl/.htpasswd; >>> >>> if ($remote_user = "user1") { >>> proxy_pass >>> https://127.0.0.1:2001 ; >>> } >>> >>> if ($remote_user = "user2") { >>> proxy_pass >>> https://127.0.0.1:2002 ; >>> } >>> >>> # and so on .... >>> >>> } >>> } >>> #################################################### >>> >>> >>> Things are good, except that adding any new user information requires >>> reloading/restarting the nginx server, causing (however small) downtime. >>> >>> Can this be avoided? >>> Can the above be implemented using some sort of database, so that the >>> nginx itself does not have to be down, and the "remote_user <=> proxy_pass" >>> mapping can be retrieved from a database instead? >>> >>> Will be grateful for pointers. >>> >>> >>> Thanks and Regards, >>> Ajay >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... 
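The per-user `if` blocks in the configuration above can also be expressed with a `map` whose entries live in a separate include file; a script can regenerate that file when users are added, and `nginx -s reload` then picks it up gracefully, as Lucas describes. A minimal sketch under assumed names (the `/etc/nginx/conf.d/users.map` path and the ports are illustrative, not from the thread):

```nginx
# http-level context: map $remote_user to a backend URL.
# The include file contains lines such as:
#   user1 https://127.0.0.1:2001;
#   user2 https://127.0.0.1:2002;
map $remote_user $user_backend {
    default  "";
    include  /etc/nginx/conf.d/users.map;
}

server {
    listen 2000 ssl;

    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        auth_basic           'Restricted';
        auth_basic_user_file /etc/nginx/ssl/.htpasswd;

        # Unknown users fall through to the empty default.
        if ($user_backend = "") { return 403; }

        proxy_pass $user_backend;
    }
}
```

This does not remove the need for a reload when the map file changes, but it keeps the server block static and reduces the change to regenerating one data file.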
URL: From igal at lucee.org Wed Apr 12 04:37:46 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 11 Apr 2017 21:37:46 -0700 Subject: Windows 1024 Connections Limit Message-ID: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> Is there a technical reason for the 1,024 Connections Limit on Windows? http://nginx.org/en/docs/windows.html#known_issues Surely the OS can handle many more connections than that. This is not much of an issue with regular requests, but when you use WebSockets you can run out of connections very fast. Thanks, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Wed Apr 12 07:08:19 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 12:38:19 +0530 Subject: Multiple "channels" on forwarded port (with a ssh-reverse-tunnel behind) Message-ID: Hi All. Let's say, we have a server-block like ######################################################################## server { listen 2001 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; location / { auth_basic 'Restricted'; auth_basic_user_file /home/ 20da689b45c84f2b80bc84d651ed573f/.htpasswd; if ($remote_user = " 20da689b45c84f2b80bc84d651ed573f") { proxy_pass https://127.0.0.1:2000; } } } ######################################################################## and when a user opens the browser window. she authenticates, and is appropriately forwarded to port 2000 on the server. This port (2000) is in a LISTENING state on the server, created via a ssh-reverse-tunnel, through the command sshpass -p password ssh -N -R 0.0.0.0:2000:192.168.1.1:443 user at 1.2.3.4 from the remote-machine. Things work fine if only one user is forwarded to port 2000. However, I observe that if a second user logs into the server and provides the same auth-credentials, a 502-Bad-Gateway error is observed 99% of the times. Is this expected? 
Does the forwarding over a ssh-reverse-tunnelled-port work reliably only if one user is forwarded to the port? I am sorry if I am posting to the wrong list, not sure if this is a question related to nginx or ssh-reverse-tunnelling in general. Will be great to hear thoughts/experiences from the experts. Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Apr 12 09:50:56 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Wed, 12 Apr 2017 05:50:56 -0400 Subject: Nginx - API Gateway is not forwarding the request to Auth Service In-Reply-To: <20170411114826.GN3428@daoine.org> References: <20170411114826.GN3428@daoine.org> Message-ID: Hi Francis, Thanks for your interest. I would certainly like to contribute for the future implementers of NGINX reverse proxy with API gateway functionality. Below I will explain my application with NGINX configuration I have performed and the code snippets/references for future users. If you like I can share my working sample application also as starter guide for custom java authentication gateway. Secondly, I am able to complete my configuraiton successfully and got it working and now in testing and expanding the functionality. 1. I have two Java applications written in Spring MVC framework. Both applications are as follows a. Secure Gateway b. Protected Application(s) 2. Client sends a request to nginx to access protected resource. http://nginx-server/employee 3. Nginx intercepts the request and first, sends the request to "Secure Gateway" for authentication. http://secure-gw/authenticate ************************************************************************** location /api { auth_request /authenticate; proxy_pass http://protected; } location = /authenticate { proxy_pass http://secure-gw; } *************************************************************************** 4. 
If the user is NOT a logged in user "Secure Gateway" throws a 401 exception. Otherwise 200 ok is sent as a response and NGINX forwards the request to "Protected Application" which returns the "employee" resource/page. 5. NGINX is configured to capture the error "401" and redirect the request to login page of SecureGateway using a location configured to cater 401 responses. *************************************************************************** location =/authfull { proxy_pass http://secure-gw/authenticate; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } error_page 401 authfull; *************************************************************************** 6. User inputs the credential into login page and submits the request to NGINX's location "/authenticate" using URL "http://nginx-server/authenticate". 7. "/authenticate" location is configured to forward the requests to "Secure Gateway" application without removing the request body i.e. carries the username and password from the form login page. 8(a)**. After successful login a token or session-id is generated and is stored in a common session store which is also accessible to the "Protected Application". 8(b)**. After successful login in "Secure Gateway" the secure gateway application sends an internal login request to "Protected Application" and gets the cookie response. Secure Gateway places two sessionIds(with different name) in response cookies and sends the response back to NGINX with HTTP 200 OK status. 9. NGINX forwards the authenticated/authorized requests to the "Protected Application" 10. "Protected Application" checks the presence of session id in cookie, validates the cookie and if valid, serves the request. 11. 
After the first request, whenever the user sends a new request to NGINX for a protected application resource, NGINX sends a subrequest to "/authenticate" location, "Secure Gateway" verifies its session id in cookie and if valid, nginx forwards the request to "Protected applicaiton" which also verifies the login session-id or token and serves the request if valid session. After frst request Secure Gateway only checks for the validity of its own session information. This time Secure Gateway will NOT send internal requests to Protected application. ***********************************************Image Reference of Design:********************************* URL : http://tinypic.com/r/35a2lbp/9 This image is missing one thing that its not showing the internal login request from secure gateway to protected application. For info, this request is made only once when the user is not a logged in user to secure gateway application. All subsequent requests does not involve this internal request flow. **********************************************NGINX Configuration*********************************************** http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; proxy_intercept_errors on; server { listen 8180; location /{ auth_request /authenticate; proxy_pass http://protected; } location = /authenticate { proxy_pass http://secure-gw; } location =/authfull { proxy_pass http://secure-gw/authenticate; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } error_page 401 authfull; } upstream protected { server localhost:8280; } upstream secure-gw { server localhost:8080; } } **********************************************************************Secure Gateway - SPRING MVC Security Configs******************************************** @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/authenticate").permitAll() 
.anyRequest().authenticated() .and() .formLogin() .successHandler(successHandler) .loginPage("/authenticate") .failureHandler(failureHandler) .permitAll() .and().csrf().disable();; } ************************************************************UI Login Form********************************
********************************************************************Protected Resource - Security Configs *********************************************** @Override protected void configure(HttpSecurity http) throws Exception { http.authorizeRequests().anyRequest().authenticated().and().formLogin() .successHandler(authenticationSuccessHandler).failureHandler(authenticationFailureHandler).and() .logout().permitAll().and().exceptionHandling().accessDeniedHandler(accessDeniedHandler()) .authenticationEntryPoint(authenticationEntryPoint).and().csrf().disable(); } **************************************************************************************************************************************************************** Hope, I am able to clarify what I am doing. I can share the sample code if you think its requried. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273515,273544#msg-273544 From nginx-forum at forum.nginx.org Wed Apr 12 11:40:25 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Wed, 12 Apr 2017 07:40:25 -0400 Subject: How to Edit/Delete own posted reply or topic Message-ID: Hi, I need to edit one reply to a topic in this list and need to know how to perform it. I cannot see any edit or delete button. I have been using stackoverflow where edits and delete is allowed to the users. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273547,273547#msg-273547 From ajaygargnsit at gmail.com Wed Apr 12 12:43:19 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 18:13:19 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue Message-ID: Hi All. We are facing the following issue : Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://1.2.3.4/. (Reason: CORS header 'Access-Control- Allow-Origin' missing). 
Have tried everything I could find on the google, but nothing works (whatever I do in /etc/nginx/sites-available/default) So, first question first, is it even possible to solve this issue on the version, as per the information below :: ######################################################## nginx -V nginx version: nginx/1.4.6 (Ubuntu) built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/headers-more-nginx-module --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-auth-pam --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-cache-purge --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-development-kit 
--add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-echo --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx-fancyindex --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-http-push --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-lua --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upload-progress --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upstream-fair --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx_http_substitutions_filter_module ########################################################## Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Wed Apr 12 12:51:56 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 18:21:56 +0530 Subject: Multiple "channels" on forwarded port (with a ssh-reverse-tunnel behind) In-Reply-To: References: Message-ID: Sorry for the idiotic question. Just checked, multiple sockets are created on each side of the ssh-reverse tunnel. So, seems the 502-Bad-Gateway error is due to other (network-slowness) issues. Sorry again. Thanks and Regards, Ajay On Wed, Apr 12, 2017 at 12:38 PM, Ajay Garg wrote: > Hi All. > > Let's say, we have a server-block like > > ######################################################################## > server { > listen 2001 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > location / { > auth_basic 'Restricted'; > auth_basic_user_file > /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd; > > if ($remote_user = > "20da689b45c84f2b80bc84d651ed573f") { > proxy_pass > https://127.0.0.1:2000; > } > > } > } > ######################################################################## > > > and when a user opens the browser window. she authenticates, and is > appropriately forwarded to port 2000 on the server. 
> This port (2000) is in a LISTENING state on the server, created via a > ssh-reverse-tunnel, through the command > > sshpass -p password ssh -N -R 0.0.0.0:2000:192.168.1.1:443 > user at 1.2.3.4 > > from the remote-machine. > > Things work fine if only one user is forwarded to port 2000. > However, I observe that if a second user logs into the server and provides > the same auth-credentials, a 502-Bad-Gateway error is observed 99% of the > times. > > Is this expected? > Does the forwarding over a ssh-reverse-tunnelled-port work reliably only > if one user is forwarded to the port? > > I am sorry if I am posting to the wrong list, not sure if this is a > question related to nginx or ssh-reverse-tunnelling in general. > Will be great to hear thoughts/experiences from the experts. > > > Thanks and Regards, > Ajay > > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 12 12:57:16 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Apr 2017 15:57:16 +0300 Subject: Windows 1024 Connections Limit In-Reply-To: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> Message-ID: <20170412125716.GP13617@mdounin.ru> Hello! On Tue, Apr 11, 2017 at 09:37:46PM -0700, Igal @ Lucee.org wrote: > Is there a technical reason for the 1,024 Connections Limit on Windows? > > http://nginx.org/en/docs/windows.html#known_issues > > Surely the OS can handle many more connections than that. On Windows, nginx uses select() system call to handle connection events. This syscall implies fixed-size bitmasks to pass file descriptors from userland to kernel and back. Size of these bitmasks can be only specified during compilation, and 1024 is the value nginx uses for official binaries to balance between maximum number of connections and unneeded overhead implied by large bitmasks. 
> This is not much of an issue with regular requests, but when you use > WebSockets you can run out of connections very fast. It is possible to recompile nginx with a different value if you need to; see http://nginx.org/en/docs/howto_build_on_win32.html. On the other hand, if you are using nginx in production, I would recommend considering a Unix variant instead.
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Apr 12 13:45:52 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 12 Apr 2017 09:45:52 -0400 Subject: How to Edit/Delete own posted reply or topic In-Reply-To: <20170412131716.GR13617@mdounin.ru> References: <20170412131716.GR13617@mdounin.ru> Message-ID: <859985469baedf1d5a815971939c1d7f.NginxMailingListEnglish@forum.nginx.org> You can if you use the web portal at https://forum.nginx.org/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273547,273556#msg-273556 From ajaygargnsit at gmail.com Wed Apr 12 14:48:45 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 20:18:45 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: For the record, here is the server-block :: ######################################################### server { listen 443 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; add_header 'Access-Control-Max-Age' 1728000; add_header 'Access-Control-Allow-Origin' $http_origin; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; add_header 'Access-Control-Allow-Headers' 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; location / { add_header 'Access-Control-Max-Age' 1728000; add_header 'Access-Control-Allow-Origin' '*'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/ssl/.htpasswd; proxy_set_header 'Access-Control-Max-Age' 1728000; proxy_set_header 'Access-Control-Allow-Origin' '*'; proxy_set_header 
'Access-Control-Allow-Credentials' 'true'; proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; proxy_set_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; proxy_pass $forwarded_protocol://127.0.0.1: $forwarded_port; } } ######################################################### On Wed, Apr 12, 2017 at 6:13 PM, Ajay Garg wrote: > Hi All. > > We are facing the following issue : > > Cross-Origin Request Blocked: The Same Origin Policy disallows reading the > remote resource at https://1.2.3.4/. (Reason: CORS header 'Access-Control- > Allow-Origin' missing). > > Have tried everything I could find on the google, but nothing works > (whatever I do in /etc/nginx/sites-available/default) > > > So, first question first, is it even possible to solve this issue on the > version, as per the information below :: > > ######################################################## > nginx -V > nginx version: nginx/1.4.6 (Ubuntu) > built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) > TLS SNI support enabled > configure arguments: --with-cc-opt='-g -O2 -fstack-protector > --param=ssp-buffer-size=4 -Wformat -Werror=format-security > -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions > -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf > --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log > --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit > --with-ipv6 --with-http_ssl_module --with-http_stub_status_module > --with-http_realip_module --with-http_addition_module > --with-http_dav_module --with-http_flv_module --with-http_geoip_module > 
--with-http_gzip_static_module --with-http_image_filter_module > --with-http_mp4_module --with-http_perl_module --with-http_random_index_module > --with-http_secure_link_module --with-http_spdy_module > --with-http_sub_module --with-http_xslt_module --with-mail > --with-mail_ssl_module --add-module=/build/nginx-9sG_ hy/nginx-1.4.6/debian/modules/headers-more-nginx-module > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-auth-pam > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-cache-purge > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-dav-ext-module > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-development-kit > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-echo > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx-fancyindex > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-http-push > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-lua > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upload-progress > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upstream-fair > --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ > ngx_http_substitutions_filter_module > ########################################################## > > > > Thanks and Regards, > Ajay > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Wed Apr 12 15:00:15 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 12 Apr 2017 17:00:15 +0200 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: You are using auth_basic, so the 401 response code is not in the range that add_header works with ("Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307."). 
You need to use "always" if you want to include the header in all responses. See the documentation for more details. http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header On Wed, Apr 12, 2017 at 4:48 PM, Ajay Garg wrote: > For the record, here is the server-block :: > > > ######################################################### > server { > > listen 443 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > add_header 'Access-Control-Max-Age' 1728000; > add_header 'Access-Control-Allow-Origin' $http_origin; > add_header 'Access-Control-Allow-Credentials' 'true'; > add_header 'Access-Control-Allow-Methods' 'GET, POST, > OPTIONS'; > add_header 'Access-Control-Allow-Headers' > 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep- > Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache- > Control,Content-Type'; > > location / { > > add_header 'Access-Control-Max-Age' 1728000; > add_header 'Access-Control-Allow-Origin' '*'; > add_header 'Access-Control-Allow-Credentials' > 'true'; > add_header 'Access-Control-Allow-Methods' 'GET, > POST, OPTIONS'; > add_header 'Access-Control-Allow-Headers' > 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested- > With,If-Modified-Since,Cache-Control,Content-Type'; > > auth_basic 'Restricted'; > auth_basic_user_file /etc/nginx/ssl/.htpasswd; > > proxy_set_header 'Access-Control-Max-Age' 1728000; > proxy_set_header 'Access-Control-Allow-Origin' '*'; > proxy_set_header 'Access-Control-Allow-Credentials' > 'true'; > proxy_set_header 'Access-Control-Allow-Methods' > 'GET, POST, OPTIONS'; > proxy_set_header 'Access-Control-Allow-Headers' > 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested- > With,If-Modified-Since,Cache-Control,Content-Type'; > > proxy_pass $forwarded_protocol://127.0.0. > 1:$forwarded_port; > > } > } > ######################################################### > > On Wed, Apr 12, 2017 at 6:13 PM, Ajay Garg wrote: > >> Hi All. 
>> >> We are facing the following issue : >> >> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >> the remote resource at https://1.2.3.4/. (Reason: CORS header >> 'Access-Control- >> Allow-Origin' missing). >> >> Have tried everything I could find on the google, but nothing works >> (whatever I do in /etc/nginx/sites-available/default) >> >> >> So, first question first, is it even possible to solve this issue on the >> version, as per the information below :: >> >> ######################################################## >> nginx -V >> nginx version: nginx/1.4.6 (Ubuntu) >> built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) >> TLS SNI support enabled >> configure arguments: --with-cc-opt='-g -O2 -fstack-protector >> --param=ssp-buffer-size=4 -Wformat -Werror=format-security >> -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions >> -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf >> --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log >> --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid >> --http-client-body-temp-path=/var/lib/nginx/body >> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi >> --http-proxy-temp-path=/var/lib/nginx/proxy >> --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi >> --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module >> --with-http_stub_status_module --with-http_realip_module >> --with-http_addition_module --with-http_dav_module --with-http_flv_module >> --with-http_geoip_module --with-http_gzip_static_module >> --with-http_image_filter_module --with-http_mp4_module >> --with-http_perl_module --with-http_random_index_module >> --with-http_secure_link_module --with-http_spdy_module >> --with-http_sub_module --with-http_xslt_module --with-mail >> --with-mail_ssl_module --add-module=/build/nginx-9sG_ >> hy/nginx-1.4.6/debian/modules/headers-more-nginx-module >> 
--add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-auth-pam >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-cache-purge >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-dav-ext-module >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-development-kit >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-echo >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx-fancyindex >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-http-push >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-lua >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upload-progress >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upstream-fair >> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ >> ngx_http_substitutions_filter_module >> ########################################################## >> >> >> >> Thanks and Regards, >> Ajay >> > > > > -- > Regards, > Ajay > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 12 15:19:28 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Apr 2017 18:19:28 +0300 Subject: nginx-1.12.0 Message-ID: <20170412151927.GV13617@mdounin.ru> Changes with nginx 1.12.0 12 Apr 2017 *) 1.12.x stable branch. 
-- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Wed Apr 12 15:52:33 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 12 Apr 2017 11:52:33 -0400 Subject: [nginx-announce] nginx-1.12.0 In-Reply-To: <20170412151933.GW13617@mdounin.ru> References: <20170412151933.GW13617@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.12.0 for Windows https://kevinworthington.com/nginxwin1120 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Wed, Apr 12, 2017 at 11:19 AM, Maxim Dounin wrote: > Changes with nginx 1.12.0 12 Apr > 2017 > > *) 1.12.x stable branch. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Wed Apr 12 17:24:42 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Wed, 12 Apr 2017 22:54:42 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: Hi Richard. Thanks for the help. I added 'always' as the last argument in all the "add_header" and "proxy_set_header" directives. 
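For reference, "always" is a parameter of "add_header" only; "proxy_set_header" takes exactly two arguments and sets request headers sent to the upstream, so it cannot carry this parameter. A minimal sketch of the intended placement, assuming nginx 1.7.5 or newer and an illustrative header:

```nginx
# "always" makes the header appear on every response code,
# including the 401 produced by auth_basic.
location / {
    add_header 'Access-Control-Allow-Origin' '*' always;

    auth_basic           'Restricted';
    auth_basic_user_file /etc/nginx/ssl/.htpasswd;
}
```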
Unfortunately, I receive the following on the very first "add_header" directive :: ##################################################### 2017/04/12 17:18:22 [emerg] 28540#0: invalid number of arguments in "add_header" directive in /etc/nginx/sites-enabled/default:22 ##################################################### I guess the 'always' argument requires nginx >= 1.7.5. Is there a pre-built package available for nginx? Our linux-machine is :: ##################################################### uname -a Linux proxy 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux ##################################################### If not, I guess the link to use is http://nginx.org/en/docs/configure.html, but I am very afraid that I might miss something, so a pre-built package >= 1.7.5 (provided one exists) for our linux-machine would be great :) Thanks for the help so far !!! Thanks and Regards, Ajay On Wed, Apr 12, 2017 at 8:30 PM, Richard Stanway wrote: > Your are using auth_basic, so the 401 response code is not in the range > that add_header works with ("Adds the specified field to a response header > provided that the response code equals 200, 201, 204, 206, 301, 302, 303, > 304, or 307."). You need to use "always" if you want to include the header > in all responses. See the documentation for more details. 
> > http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header > > On Wed, Apr 12, 2017 at 4:48 PM, Ajay Garg wrote: > >> For the record, here is the server-block :: >> >> >> ######################################################### >> server { >> >> listen 443 ssl; >> >> ssl_certificate /etc/nginx/ssl/nginx.crt; >> ssl_certificate_key /etc/nginx/ssl/nginx.key; >> >> add_header 'Access-Control-Max-Age' 1728000; >> add_header 'Access-Control-Allow-Origin' $http_origin; >> add_header 'Access-Control-Allow-Credentials' 'true'; >> add_header 'Access-Control-Allow-Methods' 'GET, POST, >> OPTIONS'; >> add_header 'Access-Control-Allow-Headers' >> 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive, >> User-Agent,X-Requested-With,If-Modified-Since,Cache-Contro >> l,Content-Type'; >> >> location / { >> >> add_header 'Access-Control-Max-Age' 1728000; >> add_header 'Access-Control-Allow-Origin' '*'; >> add_header 'Access-Control-Allow-Credentials' >> 'true'; >> add_header 'Access-Control-Allow-Methods' 'GET, >> POST, OPTIONS'; >> add_header 'Access-Control-Allow-Headers' >> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With, >> If-Modified-Since,Cache-Control,Content-Type'; >> >> auth_basic 'Restricted'; >> auth_basic_user_file /etc/nginx/ssl/.htpasswd; >> >> proxy_set_header 'Access-Control-Max-Age' 1728000; >> proxy_set_header 'Access-Control-Allow-Origin' >> '*'; >> proxy_set_header 'Access-Control-Allow-Credentials' >> 'true'; >> proxy_set_header 'Access-Control-Allow-Methods' >> 'GET, POST, OPTIONS'; >> proxy_set_header 'Access-Control-Allow-Headers' >> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With, >> If-Modified-Since,Cache-Control,Content-Type'; >> >> proxy_pass $forwarded_protocol://127.0.0. >> 1:$forwarded_port; >> >> } >> } >> ######################################################### >> >> On Wed, Apr 12, 2017 at 6:13 PM, Ajay Garg >> wrote: >> >>> Hi All. 
>>> >>> We are facing the following issue : >>> >>> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >>> the remote resource at https://1.2.3.4/. (Reason: CORS header >>> 'Access-Control- >>> Allow-Origin' missing). >>> >>> Have tried everything I could find on the google, but nothing works >>> (whatever I do in /etc/nginx/sites-available/default) >>> >>> >>> So, first question first, is it even possible to solve this issue on the >>> version, as per the information below :: >>> >>> ######################################################## >>> nginx -V >>> nginx version: nginx/1.4.6 (Ubuntu) >>> built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) >>> TLS SNI support enabled >>> configure arguments: --with-cc-opt='-g -O2 -fstack-protector >>> --param=ssp-buffer-size=4 -Wformat -Werror=format-security >>> -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions >>> -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf >>> --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log >>> --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid >>> --http-client-body-temp-path=/var/lib/nginx/body >>> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi >>> --http-proxy-temp-path=/var/lib/nginx/proxy >>> --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi >>> --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module >>> --with-http_stub_status_module --with-http_realip_module >>> --with-http_addition_module --with-http_dav_module --with-http_flv_module >>> --with-http_geoip_module --with-http_gzip_static_module >>> --with-http_image_filter_module --with-http_mp4_module >>> --with-http_perl_module --with-http_random_index_module >>> --with-http_secure_link_module --with-http_spdy_module >>> --with-http_sub_module --with-http_xslt_module --with-mail >>> --with-mail_ssl_module --add-module=/build/nginx-9sG_ >>> hy/nginx-1.4.6/debian/modules/headers-more-nginx-module >>> 
--add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-auth-pam >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-cache-purge >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-dav-ext-module >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-development-kit >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-echo >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx-fancyindex >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-http-push >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-lua >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upload-progress >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upstream-fair >>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ >>> ngx_http_substitutions_filter_module >>> ########################################################## >>> >>> >>> >>> Thanks and Regards, >>> Ajay >>> >> >> >> >> -- >> Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Wed Apr 12 17:42:02 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Wed, 12 Apr 2017 10:42:02 -0700 Subject: Windows 1024 Connections Limit In-Reply-To: <20170412125716.GP13617@mdounin.ru> References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> <20170412125716.GP13617@mdounin.ru> Message-ID: <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> Maxim, On 4/12/2017 5:57 AM, Maxim Dounin wrote: > > On Windows, nginx uses select() system call to handle connection > events. 
This syscall implies fixed-size bitmasks to pass file > descriptors from userland to kernel and back. Size of these > bitmasks can be only specified during compilation, and 1024 is the > value nginx uses for official binaries to balance between maximum > number of connections and unneeded overhead implied by large > bitmasks. > > It is possible to recompile nginx with a different value if you need > to, see http://nginx.org/en/docs/howto_build_on_win32.html. > > On the other hand, if you are using nginx in production I would > recommend to consider using Unix variants instead. Thank you very much for the explanation and recommendation, Igal From nginx-forum at forum.nginx.org Wed Apr 12 17:53:41 2017 From: nginx-forum at forum.nginx.org (SebK) Date: Wed, 12 Apr 2017 13:53:41 -0400 Subject: UDP TLS Termination In-Reply-To: <20170328092827.GA32543@vlpc.nginx.com> References: <20170328092827.GA32543@vlpc.nginx.com> Message-ID: <0ef6a49c56d895e1c7dbe3f802817eec.NginxMailingListEnglish@forum.nginx.org> Vladimir Homutov Wrote: ------------------------------------------------------- > On Tue, Mar 28, 2017 at 12:25:35PM +0300, Vladimir Homutov wrote: > > instead of normal DTLS. > > i meant SSL (TLS) of course. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi, I stumbled across this thread in search of answers to my own question regarding the combination of nginx + DTLS. Since you didn't receive an answer to your question, Vladimir, here's a use case I am currently working on: I have an IoT use case using CoAP for client-server communication. CoAP in turn uses DTLS for securing its data. All server applications are working behind an nginx web server. Right now, for the DTLS communication, nginx is just proxying the UDP packets from the client to the server. 
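The pass-through setup described above can be sketched with the stream module; the ports and addresses here are illustrative assumptions, not taken from the original post:

```nginx
# Plain UDP relay: nginx does not terminate DTLS here, it only
# forwards the encrypted CoAP datagrams to the backend unchanged.
stream {
    server {
        listen 5684 udp;
        proxy_pass 127.0.0.1:15684;
    }
}
```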
When using a PKI instead of a, let's say, PSK ciphersuite, I too would think that it would be helpful to centralize all TLS specifics, e.g. certificate management, within the nginx web server. You should then be able to pass the unencrypted datagrams to the CoAP server. Regards, Sebastian Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273251,273571#msg-273571 From igal at lucee.org Wed Apr 12 18:40:16 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Wed, 12 Apr 2017 11:40:16 -0700 Subject: HTTP/2 on the Upstream Message-ID: According to https://www.nginx.com/blog/http2-module-nginx/#QandA nginx only supports HTTP/2 on the client side, but it is possible to configure proxy_pass to use HTTP/2. There is a huge benefit in supporting HTTP/2 on the Upstream, as that will allow the Upstream servers to perform HTTP/2 Push (https://en.wikipedia.org/wiki/HTTP/2_Server_Push). While nginx can not know which resources should be pushed on a dynamic page, as dynamic pages can not be simply cached across different users, the Upstream servers can know which resources should be pushed. I really think that nginx should reconsider its position on this matter. In the meantime, where can I find documentation on how to configure proxy_pass to use HTTP/2? Thank you, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Apr 12 19:53:43 2017 From: nginx-forum at forum.nginx.org (SebK) Date: Wed, 12 Apr 2017 15:53:43 -0400 Subject: Proxying UDP: Preserve proxy port during DTLS handshake Message-ID: Hello everyone, TL;DR: When proxying UDP packets through nginx, is there a way for nginx to preserve its initial source port for subsequent packets? This is to be used during a DTLS handshake. Outlined version: This issue arose when proxying UDP packets, more specifically establishing a DTLS connection for CoAP message exchange. 
I came across two different threads with similar subjects (https://forum.nginx.org/read.php?2,273251,273251#msg-273251 and https://forum.nginx.org/read.php?2,271957,271957#msg-271957) from which I can guess that it is not (yet) supported out of the box. Hence, I am only using nginx to proxy the CoAP's UDP packets between client and server. This works for unencrypted CoAP, but not for CoAP over DTLS because the handshake fails. This is because nginx uses (or may use?) different source ports for every UDP packet it forwards. An easy way to examine this issue is to proxy a UDP netcat connection with nginx. The first message from client to server is received, but subsequent messages can only be sent from server to client because netcat "locks in" on the client port from which it received the first message. The port is constant on the client side, but nginx may use different ports when proxying the packets. I managed to get the DTLS connection to work by using the proxy_bind directive of the proxy stream module with the values "127.0.0.1:$remote_port" and "127.0.0.1:$server_port", respectively. It works, but I am not happy with either of them. Reason against "127.0.0.1:$server_port": Two client requests may overlap and there would be no way for the server to tell them apart. Reason against "127.0.0.1:$remote_port": Even though unlikely, it may happen that two clients decide to use the same port from their dynamic port range. Also in this case there would be no way of telling them both apart. I know about the "proxy_bind address [transparent]" option of the proxy module, but I would consider using this the "nuclear option" since, according to the documentation, it requires the worker processes to run with superuser rights and reconfigure the kernel routing table. So my final question is: Does nginx provide a way to preserve its chosen dynamic port when forwarding UDP packets? 
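The two proxy_bind workarounds described in the post would look roughly like this in stream configuration; the ports are illustrative, and each variant carries the drawback noted in the post:

```nginx
# Sketch of the two workarounds; only one proxy_bind line
# would be active at a time.
stream {
    server {
        listen 5684 udp;

        # Fixed source port per listener: overlapping requests from
        # different clients cannot be told apart upstream.
        proxy_bind 127.0.0.1:$server_port;

        # Alternative: reuse the client's ephemeral port; two clients
        # may, although it is unlikely, pick the same port.
        # proxy_bind 127.0.0.1:$remote_port;

        proxy_pass 127.0.0.1:15684;
    }
}
```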
Regards, Sebastian Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273576,273576#msg-273576 From vbart at nginx.com Wed Apr 12 19:57:16 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 12 Apr 2017 22:57:16 +0300 Subject: HTTP/2 on the Upstream In-Reply-To: References: Message-ID: <1507484.hQDnxRCjQ1@vbart-workstation> On Wednesday 12 April 2017 11:40:16 Igal @ Lucee.org wrote: > According to https://www.nginx.com/blog/http2-module-nginx/#QandA nginx > only supports HTTP/2 on the client side, but it is possible to configure > proxy_pass to use HTTP/2. > > There is a huge benefit in supporting HTTP/2 on the Upstream, as that > will allow the Upstream servers to perform HTTP/2 Push > (https://en.wikipedia.org/wiki/HTTP/2_Server_Push). [..] That's not related. Server Push doesn't require HTTP/2 from the Upstream side. Moreover, upstream usually don't have access to the static resources that are served directly from nginx. > > While nginx can not know which resources should be pushed on a dynamic > page, as dynamic pages can not be simply cached across different users, > the Upstream servers can know which resources should be pushed. > > I really think that nginx should reconsider its position on this matter. > > In the meantime, where can I find documentation on how to configure > proxy_pass to use HTTP/2? > There's no such documentation since HTTP/2 isn't supported by the proxy module. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Apr 12 19:59:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Apr 2017 22:59:47 +0300 Subject: HTTP/2 on the Upstream In-Reply-To: References: Message-ID: <20170412195947.GF13617@mdounin.ru> Hello! On Wed, Apr 12, 2017 at 11:40:16AM -0700, Igal @ Lucee.org wrote: > According to https://www.nginx.com/blog/http2-module-nginx/#QandA nginx > only supports HTTP/2 on the client side, but it is possible to configure > proxy_pass to use HTTP/2. 
> > There is a huge benefit in supporting HTTP/2 on the Upstream, as that > will allow the Upstream servers to perform HTTP/2 Push > (https://en.wikipedia.org/wiki/HTTP/2_Server_Push). > > While nginx can not know which resources should be pushed on a dynamic > page, as dynamic pages can not be simply cached across different users, > the Upstream servers can know which resources should be pushed. > > I really think that nginx should reconsider its position on this matter. There is nothing to stop a backend from performing a push based on the knowledge. It's just a matter of providing upstream servers with a way to push resources, e.g., via something like the X-Accel-Push header or something like this. This doesn't need HTTP/2 between nginx and upstream servers. Moreover, using HTTP/2 with ability to push things will likely be a problem here, as in most cases dynamic pages are generated separately from static assets. Using nginx to do the actual push is likely to be much more optimal here. > In the meantime, where can I find documentation on how to configure > proxy_pass to use HTTP/2? You can't, nginx only supports HTTP/2 on the client side. The actual answer was "Now you can't configure HTTP/2 with proxy_pass...", see here: https://youtu.be/4OiyssTW4BA?t=14m34s The transcript needs to be fixed, thanks. -- Maxim Dounin http://nginx.org/ From igal at lucee.org Wed Apr 12 21:13:45 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Wed, 12 Apr 2017 14:13:45 -0700 Subject: HTTP/2 on the Upstream In-Reply-To: <20170412195947.GF13617@mdounin.ru> References: <20170412195947.GF13617@mdounin.ru> Message-ID: <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> Thank you, Maxim and Valentin, for your prompt replies. I will reply here to both so that we can maintain a single thread for this issue: On 4/12/2017 12:57 PM, Valentin V. Bartenev wrote: > Server Push doesn't require HTTP/2 from the Upstream side. 
Moreover, > upstream usually don't have access to the static resources that are > served directly from nginx. On 4/12/2017 12:59 PM, Maxim Dounin wrote: > There is nothing to stop a backend from performing a push based on > the knowledge. It's just a matter of providing upstream servers > with a way to push resources, e.g., via something like the > X-Accel-Push header or something like this. This doesn't need > HTTP/2 between nginx and upstream servers. > > Moreover, using HTTP/2 with ability to push things will likely be > a problem here, as in most cases dynamic pages are generated > separately from static assets. Using nginx to do the actual > push is likely to be much more optimal here. These are both good points, but it is my understanding that the server that Pushes the resources needs to know which resources need to be pushed, is that not the case? If my Upstream (Tomcat, for example) generates a dynamic page for the client, then it can keep track of all of the images on that page and then push them. How can the Upstream "tell" nginx what to Push? Please watch the clip at https://youtu.be/QpLtBftqM04?t=34m51s until about 36m12s where Simone Bordet, a Jetty developer, claims that HA Proxy is a better proxy solution than nginx because it talks HTTP/2 to the Upstream. Also please note that the Servlet 4.0 specification plans to add a Push() API to push resources to the clients. If nginx doesn't use HTTP/2 to talk to my Servlet engine (Tomcat, Jetty, etc.), this functionality will not be available as the Servlet engine will treat the request as an HTTP/1. On 4/12/2017 12:59 PM, Maxim Dounin wrote: > The actual answer was "Now you can't configure HTTP/2 with > proxy_pass...", see here: > > https://youtu.be/4OiyssTW4BA?t=14m34s > > The transcript needs to be fixed, thanks. Well noted, thanks for clarifying. Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Apr 12 21:44:50 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Apr 2017 22:44:50 +0100 Subject: Nginx - API Gateway is not forwarding the request to Auth Service In-Reply-To: References: <20170411114826.GN3428@daoine.org> Message-ID: <20170412214450.GQ3428@daoine.org> On Wed, Apr 12, 2017 at 05:50:56AM -0400, zaidahmd wrote: Hi there, > Below I will explain my application with NGINX configuration I have > performed and the code snippets/references for future users. I've read your description, and I confess I'm not sure what benefit auth_request within nginx gives you. It looks like your application is doing its own auth check on every request anyway, so having nginx do the same thing seems redundant. I'm probably missing something. That's ok; I don't need to understand it. > Secondly, I am able to complete my configuration successfully and got it > working and now in testing and expanding the functionality. You now have a working system; that's good. The bonus piece would be if you found that some words in some documentation were unclear; and if you now know what words would have made it clear to you the first time you read it; then sharing those new words might let that piece of documentation become updated so that it will be clear for the next person reading it for the first time. Doing the work to prepare those words will be of no benefit to you, of course, so it's not a problem if it does not suit you to do it. It's good that you have solved the problem you had. Cheers, f -- Francis Daly francis at daoine.org From gfrankliu at gmail.com Wed Apr 12 21:50:08 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 12 Apr 2017 14:50:08 -0700 Subject: weight and balancing in upstream proxy Message-ID: Hi, How does nginx balance traffic to upstreams with different weights? 
If I have 3 servers in upstream, with weight 1, 2, 4, assuming all are healthy, will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, 2, 3, 3, 3, 3? If I have two servers with both weight 50, will nginx send 50 requests to server 1, and then 50 to server 2, or will it calculate the ratio to be 1:1 and send one after another? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Apr 12 21:52:49 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 12 Apr 2017 22:52:49 +0100 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: <20170412215249.GR3428@daoine.org> On Wed, Apr 12, 2017 at 06:13:19PM +0530, Ajay Garg wrote: Hi there, > We are facing the following issue : > > Cross-Origin Request Blocked: The Same Origin Policy disallows reading the > remote resource at https://1.2.3.4/. (Reason: CORS header 'Access-Control- > Allow-Origin' missing). What's the issue, specifically? It looks like your browser thinks it is talking to two web servers. Do you think your browser is talking to two web servers? If not, that's the problem to fix. Otherwise, you'll want to set suitable headers in the response from the first web server. If your browser should only be talking to https://1.2.3.4/, and everything else should be reverse-proxied behind that, then the problem is that some part of a back-end is leaking through, and the network allows the browser to talk directly to something that it should not be talking to. A later mail shows some nginx config, but it is not clear to me if that is on the 1.2.3.4 server or on a different server; and it is not clear to me why many of the add_header and proxy_set_header lines are there. I suspect that if you can get a clear understanding of the issue, and of what should be happening, then the path to configuring things to allow it all to happen will become clearer. 
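Regarding the weighted-upstream question earlier in this digest: nginx's default round-robin balancer uses a smooth weighted algorithm, so requests are interleaved in proportion to the weights rather than sent in consecutive runs. A sketch with hypothetical backend names:

```nginx
# With these weights, out of every 7 requests backend3 receives 4,
# backend2 receives 2 and backend1 receives 1, interleaved by the
# smooth weighted round-robin rather than sent back-to-back.
upstream backends {
    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
    server backend3.example.com weight=4;
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}
```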
Good luck with it, f -- Francis Daly francis at daoine.org From luky-37 at hotmail.com Wed Apr 12 23:11:23 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 12 Apr 2017 23:11:23 +0000 Subject: AW: HTTP/2 on the Upstream In-Reply-To: <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> References: <20170412195947.GF13617@mdounin.ru>, <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> Message-ID: > Please watch the clip at https://youtu.be/QpLtBftqM04?t=34m51s until > about 36m12s where Simone Bordet, a Jetty developer, claims that > HA Proxy is a better proxy solution than nginx because it talks > HTTP/2 to the Upstream. This statement is misleading. As of now, haproxy does not support HTTP/2. What you can do with haproxy is forward arbitrary TCP protocols, TLS offload and NPN/ALPN negotiate whatever you like with the client. This makes it possible to ALPN-negotiate H2 and forward whatever cleartext TCP protocol you like. This includes HTTP/2. Saying "Haproxy can talk HTTP/2 to the backend" is certainly wrong in the context you are interpreting it. The only thing missing to achieve the same thing in nginx is, afaik, arbitrary ALPN protocol negotiation with the client in the ngx_stream_ssl_module. But that also does not make nginx HTTP/2 capable on the backend side. Lukas From igal at lucee.org Thu Apr 13 00:08:53 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Wed, 12 Apr 2017 17:08:53 -0700 Subject: HTTP/2 on the Upstream In-Reply-To: <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> References: <20170412195947.GF13617@mdounin.ru> <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> Message-ID: Hello, On 4/12/2017 12:59 PM, Maxim Dounin wrote: > It's just a matter of providing upstream servers > with a way to push resources, e.g., via something like the > X-Accel-Push header or something like this. This doesn't need > HTTP/2 between nginx and upstream servers. 
On 4/12/2017 2:13 PM, Igal @ Lucee.org wrote: > > If my Upstream (Tomcat, for example) generates a dynamic page for the > client, then it can keep track of all of the images on that page and > then push them. How can the Upstream "tell" nginx what to Push? > Upon studying HTTP/2 Push further I have learned that the way to do so is with the "Link" header, e.g. Link: ; rel=preload; as=style, ; rel=preload; as=script Chrome developer tools confirms that these assets are being pushed, so it's all good. You can ignore my previous email if you read this one in time. Thank you, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Thu Apr 13 01:39:29 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Thu, 13 Apr 2017 07:09:29 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: Upgraded to 1.11 Now, things get worse, I am not being prompted for any credentials (even with all browser cache cleared), even with the following /etc/nginx/conf.d/default.conf ########################################################## server { listen 443 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; # add_header 'Access-Control-Max-Age' 1728000 'always'; # add_header 'Access-Control-Allow-Origin' $http_origin 'always'; # add_header 'Access-Control-Allow-Credentials' 'true' 'always'; # add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' 'always'; # add_header 'Access-Control-Allow-Headers' 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' 'always'; location / { # add_header 'Access-Control-Max-Age' 1728000 'always'; # add_header 'Access-Control-Allow-Origin' '*' 'always'; # add_header 'Access-Control-Allow-Credentials' 'true' 'always'; # add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' 
'always'; # add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' 'always'; auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/ssl/.htpasswd; # proxy_set_header 'Access-Control-Max-Age' 1728000; # proxy_set_header 'Access-Control-Allow-Origin' '*'; # proxy_set_header 'Access-Control-Allow-Credentials' 'true'; # proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; # proxy_set_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; proxy_pass $forwarded_protocol://127.0.0.1: $forwarded_port; } } ########################################################## Any ideas why this regression? On Wed, Apr 12, 2017 at 10:54 PM, Ajay Garg wrote: > Hi Richard. > > Thanks for the help. > > I added 'always' as the last argument in all the "add_header" and > "proxy_set_header" directives. > Unfortunately, I receive the following on the very first "add_header" > directive :: > > ##################################################### > 2017/04/12 17:18:22 [emerg] 28540#0: invalid number of arguments in > "add_header" directive in /etc/nginx/sites-enabled/default:22 > ##################################################### > > > I guess the 'always' argument requires nginx >= 1.7.5. > > > Is there a pre-built package available for nginx? > Our linux-machine is :: > > ##################################################### > uname -a > Linux proxy 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC > 2017 x86_64 x86_64 x86_64 GNU/Linux > ##################################################### > > If not, I guess the link to use is http://nginx.org/en/docs/configure.html, > but I am very afraid that I might miss something, so a pre-built package >= > 1.7.5 (provided one exists) for our linux-machine would be great :) > > > Thanks for the help so far !!! 
> > > Thanks and Regards, > Ajay > > On Wed, Apr 12, 2017 at 8:30 PM, Richard Stanway < > r1ch+nginx at teamliquid.net> wrote: > >> Your are using auth_basic, so the 401 response code is not in the range >> that add_header works with ("Adds the specified field to a response header >> provided that the response code equals 200, 201, 204, 206, 301, 302, 303, >> 304, or 307."). You need to use "always" if you want to include the header >> in all responses. See the documentation for more details. >> >> http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header >> >> On Wed, Apr 12, 2017 at 4:48 PM, Ajay Garg >> wrote: >> >>> For the record, here is the server-block :: >>> >>> >>> ######################################################### >>> server { >>> >>> listen 443 ssl; >>> >>> ssl_certificate /etc/nginx/ssl/nginx.crt; >>> ssl_certificate_key /etc/nginx/ssl/nginx.key; >>> >>> add_header 'Access-Control-Max-Age' 1728000; >>> add_header 'Access-Control-Allow-Origin' $http_origin; >>> add_header 'Access-Control-Allow-Credentials' 'true'; >>> add_header 'Access-Control-Allow-Methods' 'GET, POST, >>> OPTIONS'; >>> add_header 'Access-Control-Allow-Headers' >>> 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,U >>> ser-Agent,X-Requested-With,If-Modified-Since,Cache-Control, >>> Content-Type'; >>> >>> location / { >>> >>> add_header 'Access-Control-Max-Age' 1728000; >>> add_header 'Access-Control-Allow-Origin' '*'; >>> add_header 'Access-Control-Allow-Credentials' >>> 'true'; >>> add_header 'Access-Control-Allow-Methods' 'GET, >>> POST, OPTIONS'; >>> add_header 'Access-Control-Allow-Headers' >>> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,I >>> f-Modified-Since,Cache-Control,Content-Type'; >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file /etc/nginx/ssl/.htpasswd; >>> >>> proxy_set_header 'Access-Control-Max-Age' >>> 1728000; >>> proxy_set_header 'Access-Control-Allow-Origin' >>> '*'; >>> proxy_set_header 
'Access-Control-Allow-Credentials' >>> 'true'; >>> proxy_set_header 'Access-Control-Allow-Methods' >>> 'GET, POST, OPTIONS'; >>> proxy_set_header 'Access-Control-Allow-Headers' >>> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,I >>> f-Modified-Since,Cache-Control,Content-Type'; >>> >>> proxy_pass $forwarded_protocol://127.0.0. >>> 1:$forwarded_port; >>> >>> } >>> } >>> ######################################################### >>> >>> On Wed, Apr 12, 2017 at 6:13 PM, Ajay Garg >>> wrote: >>> >>>> Hi All. >>>> >>>> We are facing the following issue : >>>> >>>> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >>>> the remote resource at https://1.2.3.4/. (Reason: CORS header >>>> 'Access-Control- >>>> Allow-Origin' missing). >>>> >>>> Have tried everything I could find on the google, but nothing works >>>> (whatever I do in /etc/nginx/sites-available/default) >>>> >>>> >>>> So, first question first, is it even possible to solve this issue on >>>> the version, as per the information below :: >>>> >>>> ######################################################## >>>> nginx -V >>>> nginx version: nginx/1.4.6 (Ubuntu) >>>> built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) >>>> TLS SNI support enabled >>>> configure arguments: --with-cc-opt='-g -O2 -fstack-protector >>>> --param=ssp-buffer-size=4 -Wformat -Werror=format-security >>>> -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions >>>> -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf >>>> --http-log-path=/var/log/nginx/access.log >>>> --error-log-path=/var/log/nginx/error.log >>>> --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid >>>> --http-client-body-temp-path=/var/lib/nginx/body >>>> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi >>>> --http-proxy-temp-path=/var/lib/nginx/proxy >>>> --http-scgi-temp-path=/var/lib/nginx/scgi >>>> --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug >>>> --with-pcre-jit --with-ipv6 --with-http_ssl_module 
>>>> --with-http_stub_status_module --with-http_realip_module >>>> --with-http_addition_module --with-http_dav_module --with-http_flv_module >>>> --with-http_geoip_module --with-http_gzip_static_module >>>> --with-http_image_filter_module --with-http_mp4_module >>>> --with-http_perl_module --with-http_random_index_module >>>> --with-http_secure_link_module --with-http_spdy_module >>>> --with-http_sub_module --with-http_xslt_module --with-mail >>>> --with-mail_ssl_module --add-module=/build/nginx-9sG_ >>>> hy/nginx-1.4.6/debian/modules/headers-more-nginx-module >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-auth-pam >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-cache-purge >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-dav-ext-module >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-development-kit >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-echo >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ngx-fancyindex >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-http-push >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-lua >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upload-progress >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/nginx-upstream-fair >>>> --add-module=/build/nginx-9sG_hy/nginx-1.4.6/debian/modules/ >>>> ngx_http_substitutions_filter_module >>>> ########################################################## >>>> >>>> >>>> >>>> Thanks and Regards, >>>> Ajay >>>> >>> >>> >>> >>> -- >>> Regards, >>> Ajay >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Regards, > Ajay > 
-- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 13 05:00:20 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Thu, 13 Apr 2017 01:00:20 -0400 Subject: Nginx - API Gateway is not forwarding the request to Auth Service In-Reply-To: <20170412214450.GQ3428@daoine.org> References: <20170412214450.GQ3428@daoine.org> Message-ID: <5d1cebcbf1c93c193ff939a7bb80d8f7.NginxMailingListEnglish@forum.nginx.org> Hi Francis, > I've read your description, and I confess I'm not sure what benefit > auth_request within nginx gives you. It looks like your application is > doing its own auth check on every request anyway, so having nginx do > the same thing seems redundant. I'm probably missing something. can u please tell me the concept and usage of auth_request module. What I understood from the documentation is stated below."http://nginx.org/en/docs/http/ngx_http_auth_request_module.html" Auth_Request Understanding: If the auth requirement of an application is to use something other than BASIC or jwt THEN use auth_request. auth_request can be used to send each request to a custom authentication application and get the requests authenticated from that application. NGINX will send a subrequest to auth_request application for each incoming client request. (This is the same logic which MAXIM told me in the previous replies. "https://forum.nginx.org/read.php?2,273515,273516#msg-273516") ************************** MAXIM's Reply Below **************************************** > You misunderstood what auth_request does. Instead, it issues a > subrequest for every incoming request, and allows further > processing of the request if and only if the subrequest returns > 200. 
No attempts are made to look into the response returned for the original request, that is, "protected application" ************************** END MAXIM's Reply**************************************** Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273515,273586#msg-273586 From reallfqq-nginx at yahoo.fr Thu Apr 13 07:54:34 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 13 Apr 2017 09:54:34 +0200 Subject: Windows 1024 Connections Limit In-Reply-To: <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> <20170412125716.GP13617@mdounin.ru> <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> Message-ID: Even though using nginx on Windows goes way over my head (even for development) and/or seeing Windows as any kind of server, I read that Windows Vista+ supports the poll (well, actually WSAPoll) system call. Since XP may now reasonably be called something of the past, would it not be nice to go for it? No epoll, though. --- *B. R.* On Wed, Apr 12, 2017 at 7:42 PM, Igal @ Lucee.org wrote: > Maxim, > > On 4/12/2017 5:57 AM, Maxim Dounin wrote: > >> >> On Windows, nginx uses select() system call to handle connection >> events. This syscall implies fixed-size bitmasks to pass file >> descriptors from userland to kernel and back. Size of these >> bitmasks can be only specified during compilation, and 1024 is the >> value nginx uses for official binaries to balance between maximum >> number of connections and unneeded overhead implied by large >> bitmasks. >> >> It is possible to recompile nginx with a different value if you need >> to, see http://nginx.org/en/docs/howto_build_on_win32.html. >> >> On the other hand, if you are using nginx in production I would >> recommend considering Unix variants instead.
>> > > Thank you very much for the explanation and recommendation, > > > Igal > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscaretu at gmail.com Thu Apr 13 08:09:04 2017 From: oscaretu at gmail.com (oscaretu .) Date: Thu, 13 Apr 2017 08:09:04 +0000 Subject: Windows 1024 Connections Limit In-Reply-To: References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> <20170412125716.GP13617@mdounin.ru> <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> Message-ID: Since two days ago Windows Vista is part of the past: http://www.theverge.com/2017/4/11/15241580/microsoft-windows-vista-end-of-support Kind regards, Oscar El El jue, 13 abr 2017 a las 9:55, B.R. via nginx escribi?: > Even though using nginx on Windows goes way over my head (even for > development) and/or seing WIndows as any kind of server, I read that > Windows Vista+ support the poll (well, actually WSAPoll > ) > system call. > Since XP may now reasonably be called something of the past, would not it > be nice to go for it? No epoll, though. > --- > *B. R.* > > On Wed, Apr 12, 2017 at 7:42 PM, Igal @ Lucee.org wrote: > >> Maxim, >> >> On 4/12/2017 5:57 AM, Maxim Dounin wrote: >> >>> >>> On Windows, nginx uses select() system call to handle connection >>> events. This syscall implies fixed-size bitmasks to pass file >>> descriptors from userland to kernel and back. Size of these >>> bitmasks can be only specified during compilation, and 1024 is the >>> value nginx uses for official binaries to balance between maximum >>> number of connections and unneeded overhead implied by large >>> bitmasks. >>> >>> It is possible to recompile nginx with different value if you need >>> to, see http://nginx.org/en/docs/howto_build_on_win32.html. 
>>> >>> On the other hand, if you are using nginx in production I would >>> recommend considering Unix variants instead. >>> >> >> Thank you very much for the explanation and recommendation, >> >> >> Igal >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Apr 13 08:09:16 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 13 Apr 2017 10:09:16 +0200 Subject: weight and balancing in upstream proxy In-Reply-To: References: Message-ID: That is an interesting question, as intuitively people could think the former behavior applies. If I got the source code right, and as the docs state, nginx is following a weighted round-robin algorithm. It thus means it will go over the same list of servers every time a peer needs to be chosen (i.e. for every request), and pick the first not having depleted its weight allocation. To me, it would use the latter of your proposals. Please correct me if I am wrong, so incorrect information does not propagate too much. :o) --- *B. R.* On Wed, Apr 12, 2017 at 11:50 PM, Frank Liu wrote: > Hi, > > How does nginx balance traffic to upstream with different weight? If I > have 3 servers in upstream, with weight 1, 2, 4, assuming all are healthy, > will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, 2, 3, 3, 3, > 3? If I have two servers with both weight 50, will nginx send 50 requests > to server 1, and then 50 to server 2, or will it calculate the ratio to be > 1:1 and send one after another? > > Thanks!
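For what it is worth, the selection order can be checked empirically. Modern nginx (since the 1.3.x series, in ngx_http_upstream_round_robin.c) uses a "smooth" weighted round-robin: on every pick, each peer's current weight grows by its configured weight, the largest one wins and is then reduced by the total weight. A short Python sketch of that algorithm (an illustration, not nginx code):

```python
def smooth_wrr(servers, n):
    """Simulate nginx-style smooth weighted round-robin.

    servers: dict of name -> weight.  Returns the first n picks.
    """
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        # Every peer earns its weight; the current leader is chosen
        # and "pays back" the total, which spreads picks out evenly.
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=lambda name: current[name])
        current[best] -= total
        picks.append(best)
    return picks

# Weights 1, 2, 4: traffic is interleaved, not sent in blocks.
print(smooth_wrr({"s1": 1, "s2": 2, "s3": 4}, 7))
# ['s3', 's2', 's3', 's1', 's3', 's2', 's3']

# Two equal weights simply alternate, request by request.
print(smooth_wrr({"a": 50, "b": 50}, 4))
# ['a', 'b', 'a', 'b']
```

So with weights 1, 2, 4 the order is neither of the two sequences proposed: the heavier server is chosen more often, but picks are spread as evenly as the weights allow, and two servers with weight 50 each alternate rather than taking 50 requests in a row.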
> Frank > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Thu Apr 13 09:28:58 2017 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 13 Apr 2017 12:28:58 +0300 Subject: Proxying UDP: Preserve proxy port during DTLS handshake In-Reply-To: References: Message-ID: <20170413092857.GA16930@vlpc.nginx.com> On Wed, Apr 12, 2017 at 03:53:43PM -0400, SebK wrote: > Hello everyone, Hi Sebastian, [...] > > So my conclusive question is: Does nginx provide a way to preserve its > chosen dynamic port when forwarding udp packets? > No, currently it is not supported. See also http://mailman.nginx.org/pipermail/nginx/2016-September/051688.html We have some experimental patches for DTLS support that overcome this issue, and if you are interested in testing, please write me a email (vl at nginx.com) From nginx-forum at forum.nginx.org Thu Apr 13 09:40:46 2017 From: nginx-forum at forum.nginx.org (comput3rz) Date: Thu, 13 Apr 2017 05:40:46 -0400 Subject: [ Module development - Socket ] In-Reply-To: <20170411141016.GK13617@mdounin.ru> References: <20170411141016.GK13617@mdounin.ru> Message-ID: <40884f11b71ff0235a02d1103bd0dd4d.NginxMailingListEnglish@forum.nginx.org> After reading the source code available in "ngx_mail_auth_http_module.c" trying to figure out how to make a "basic TCP connection" to a backend server, nothing seems clear. I didn't find any simple pattern to read/write on a TCP socket, even if there are simple call like ngx_event_connect_peer() or ngx_close_connection() for connection phase and ngx_send|recv / ngx_unix_send|recv / send|recv for reading and writing on a socket. How can I setup a basic TCP socket into my handler which will connect / send / recv and then close the connection ? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273525,273598#msg-273598 From nginx-forum at forum.nginx.org Thu Apr 13 11:16:34 2017 From: nginx-forum at forum.nginx.org (SebK) Date: Thu, 13 Apr 2017 07:16:34 -0400 Subject: Proxying UDP: Preserve proxy port during DTLS handshake In-Reply-To: <20170413092857.GA16930@vlpc.nginx.com> References: <20170413092857.GA16930@vlpc.nginx.com> Message-ID: <5eacd2daaf4799c03ded15c0d1a71bab.NginxMailingListEnglish@forum.nginx.org> Vladimir Homutov Wrote: ------------------------------------------------------- > On Wed, Apr 12, 2017 at 03:53:43PM -0400, SebK wrote: > > Hello everyone, > > Hi Sebastian, > > [...] > > > > So my conclusive question is: Does nginx provide a way to preserve > its > > chosen dynamic port when forwarding udp packets? > > > > No, currently it is not supported. > See also > http://mailman.nginx.org/pipermail/nginx/2016-September/051688.html > > We have some experimental patches for DTLS support that overcome this > issue, and if you are interested in testing, please write me a email > (vl at nginx.com) > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi Vladimir, much appreciated. I'll contact you via email. Regards, Sebastian Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273576,273600#msg-273600 From igal at lucee.org Thu Apr 13 14:00:04 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 13 Apr 2017 07:00:04 -0700 Subject: Windows 1024 Connections Limit In-Reply-To: References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> <20170412125716.GP13617@mdounin.ru> <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> Message-ID: <6dd2840b-6ef2-ee9a-4328-ee91913db512@lucee.org> Hi, On 4/13/2017 12:54 AM, B.R. 
via nginx wrote: > Even though using nginx on Windows goes way over my head (even for > development) and/or seing WIndows as any kind of server, It's very simple - this system was set up many years ago and I was uncomfortable at the time with running Linux -- not with the stability of Linux at the time and not with my skills administering it. The next server that I will set up for production will be a Linux machine, but until then I'm stuck with Windows. I was using IIS as a web server for many years, until about five years ago I "saw the light" and upgraded to nginx. Having nginx available for Windows is a great thing because it serves as a stepping stone and allows Windows users to make smaller changes to their systems rather than one big change. > I read that Windows Vista+ support the poll (well, actually WSAPoll > ) > system call. > Since XP may now reasonably be called something of the past, would not > it be nice to go for it? No epoll, though. I agree that XP is out of the picture now, and the servers that I need to support are 2008R2 which are built on the Vista code base so this system call is supported. Can you explain briefly how WSAPoll would help here? Thanks, Igal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Apr 13 14:26:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Apr 2017 17:26:47 +0300 Subject: Windows 1024 Connections Limit In-Reply-To: References: <4a2fed45-af88-e2b8-1e7a-ebac7d3864ad@lucee.org> <20170412125716.GP13617@mdounin.ru> <06a74863-9588-ec68-8492-0e1a8ff2afe3@lucee.org> Message-ID: <20170413142647.GG13617@mdounin.ru> Hello! On Thu, Apr 13, 2017 at 09:54:34AM +0200, B.R. via nginx wrote: > Even though using nginx on Windows goes way over my head (even for > development) and/or seing WIndows as any kind of server, I read that > Windows Vista+ support the poll (well, actually WSAPoll > ) > system call. 
> Since XP may now reasonably be called something of the past, would not it > be nice to go for it? No epoll, though. I've looked into WSAPoll() support a while ago. Unfortunately, XP is still widely used in real life, over 5% worldwide: http://gs.statcounter.com/os-version-market-share/windows/desktop/worldwide This is likely to become an option in the not-so-distant future though. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Apr 13 14:34:54 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Apr 2017 17:34:54 +0300 Subject: weight and balancing in upstream proxy In-Reply-To: References: Message-ID: <20170413143454.GH13617@mdounin.ru> Hello! On Thu, Apr 13, 2017 at 10:09:16AM +0200, B.R. via nginx wrote: > That is an interesting questions as intuitively, people could think the > former behavior applies. > > If I got the source code > > right, and as the docs > > state, nginx is following a weighted round-robin > algorithm. > It thus means it will go over the same list of servers everytime a peer > needs to be chosen (ie for every request), and pick the first not having > depleted its weight allocation. > > To me, it would use the latter of your proposals. > ?Please correct me if I am wrong, so incorrect information does not > propagate too much. :o)? The Wikipedia link in question doesn't seem to be related to what nginx does. -- Maxim Dounin http://nginx.org/ From ajaygargnsit at gmail.com Thu Apr 13 14:50:15 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Thu, 13 Apr 2017 20:20:15 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: Strange, but rebooting the machine caused the credentials-popup to be seen again :-| Sorry for the noise here. There has been some progress, but still get a "CORS preflight did not succeed error". Following is what I am doing. 
a) Following is the server-block in /etc/nginx/conf.d/default.conf :: ########################################################################## server { listen 443 ssl; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; add_header 'Access-Control-Max-Age' 1728000 'always'; add_header 'Access-Control-Allow-Origin' $http_origin 'always'; add_header 'Access-Control-Allow-Credentials' 'true' 'always'; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' 'always'; add_header 'Access-Control-Allow-Headers' 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' 'always'; location / { auth_basic 'Restricted'; auth_basic_user_file /etc/nginx/ssl/.htpasswd; proxy_set_header 'Access-Control-Max-Age' 1728000; proxy_set_header 'Access-Control-Allow-Origin' '*'; proxy_set_header 'Access-Control-Allow-Credentials' 'true'; proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; proxy_set_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; proxy_pass $forwarded_protocol://127.0.0.1:$forwarded_port; } } ########################################################################## b) Firing the following html from firefox (sensitive information changed) :: ########################################################################## ########################################################################## Following is received in the firebug-console (sensitive information changed) :: ########################################################################## GET https://23.253.207.208/ uff.html (line 19) Headers Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding gzip, deflate, br Accept-Language en-US,en;q=0.5 Authorization Basic abcdefg Cache-Control no-cache Host 1.2.3.4 Origin null User-Agent Mozilla/5.0 (X11; Ubuntu; Linux 
i686; rv:47.0) Gecko/20100101 Firefox/47.0 Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://1.2.3.4/. (Reason: CORS preflight channel did not succeed). ########################################################################## I am beginning to believe that I am close to solving the issue (of course all credit to tremendous help from this list). I will be grateful for the last bit of help being received by the really helpful experts here.. Sorry again for the noise in my previous email. Thanks and Regards, Ajay From mdounin at mdounin.ru Thu Apr 13 15:07:37 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Apr 2017 18:07:37 +0300 Subject: HTTP/2 on the Upstream In-Reply-To: References: <20170412195947.GF13617@mdounin.ru> <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> Message-ID: <20170413150737.GJ13617@mdounin.ru> Hello! On Wed, Apr 12, 2017 at 05:08:53PM -0700, Igal @ Lucee.org wrote: > On 4/12/2017 12:59 PM, Maxim Dounin wrote: > > It's just a matter of providing upstream servers > > with a way to push resources, e.g., via something like the > > X-Accel-Push header or something like this. This doesn't need > > HTTP/2 between nginx and upstream servers. > > On 4/12/2017 2:13 PM, Igal @ Lucee.org wrote: > > > > If my Upstream (Tomcat, for example) generates a dynamic page for the > > client, then it can keep track of all of the images on that page and > > then push them. How can the Upstream "tell" nginx what to Push? > > > > Upon studying HTTP/2 Push further I have learned that the way to do so > is with the "Link" header, e.g. > > Link: ; rel=preload; as=style, ; > rel=preload; as=script > > Chrome developer tools confirms that these assets are being pushed, so > it's all good. Note: there is no HTTP/2 push support in nginx as of now. If you indeed see the resources being pushed with the "Link" header, likely you are using cloudflare and this is something Cloudflare does for you. 
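For completeness: even without server push, a Link response header is useful on its own, because browsers treat rel=preload as a fetch hint and start downloading the named resources early. Emitting it from nginx looks like the following sketch (the asset paths are placeholders):

```nginx
# Preload hints only: the browser fetches these early even though
# nothing is actually pushed over HTTP/2 by nginx itself.
location = /index.html {
    add_header Link "</css/site.css>; rel=preload; as=style";
    add_header Link "</js/app.js>; rel=preload; as=script";
}
```

An upstream application can of course emit the same Link headers itself, which matches the behavior described earlier in this thread.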
Note well that pushing all of the resources used on the page is very likely to do more harm than good, since in many cases browsers already have static resources cached. -- Maxim Dounin http://nginx.org/ From igal at lucee.org Thu Apr 13 15:16:22 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Thu, 13 Apr 2017 08:16:22 -0700 Subject: HTTP/2 on the Upstream In-Reply-To: <20170413150737.GJ13617@mdounin.ru> References: <20170412195947.GF13617@mdounin.ru> <7c70d560-3b3f-5e6a-1475-852de3ea25f8@lucee.org> <20170413150737.GJ13617@mdounin.ru> Message-ID: <7c37ad41-3495-e189-bf29-ce89ad8bf585@lucee.org> Maxim, On 4/13/2017 8:07 AM, Maxim Dounin wrote: > Note: there is no HTTP/2 push support in nginx as of now. If you > indeed see the resources being pushed with the "Link" header, > likely you are using cloudflare and this is something Cloudflare > does for you. Thank you for clarifying. That did puzzle me a bit and I thought that I simply misunderstood the Push process. What I see in Developer Tools is that the resources that I reference in the "Link" header are downloaded first, and are downloaded faster. The "Initiator" of those resources shows as "Other", as opposed to most other resources who show something like a URI. > Note well that pushing all of the resources used on the page > is very likely to do more harm than good, since in many cases > browsers already have static resources cached. Understood. I am only "pushing" (I guess not a real push, but a suggestion to the browser via the "Link" header) links to a few resources that are required for loading and rendering the page, like the stylesheet, javascript, and image sprite. Thanks, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r1ch+nginx at teamliquid.net Thu Apr 13 17:37:01 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 13 Apr 2017 19:37:01 +0200 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: You're missing the "Authorization" header in your Access-Control-Allow-Headers directive. You can alternatively pass the basic auth in your URI, eg xhr.open("GET", " https://username:password at 1.2.3.4/") rather than crafting it manually. On Thu, Apr 13, 2017 at 4:50 PM, Ajay Garg wrote: > Strange, but rebooting the machine caused the credentials-popup to be > seen again :-| > Sorry for the noise here. > > There has been some progress, but still get a "CORS preflight did not > succeed error". > Following is what I am doing. > > > a) > Following is the server-block in /etc/nginx/conf.d/default.conf :: > > ########################################################################## > server { > > listen 443 ssl; > > ssl_certificate /etc/nginx/ssl/nginx.crt; > ssl_certificate_key /etc/nginx/ssl/nginx.key; > > add_header 'Access-Control-Max-Age' 1728000 'always'; > add_header 'Access-Control-Allow-Origin' $http_origin > 'always'; > add_header 'Access-Control-Allow-Credentials' 'true' > 'always'; > add_header 'Access-Control-Allow-Methods' 'GET, POST, > OPTIONS' 'always'; > add_header 'Access-Control-Allow-Headers' > 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep- > Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache- > Control,Content-Type' > 'always'; > > location / { > > auth_basic 'Restricted'; > auth_basic_user_file /etc/nginx/ssl/.htpasswd; > > proxy_set_header 'Access-Control-Max-Age' 1728000; > proxy_set_header 'Access-Control-Allow-Origin' '*'; > proxy_set_header > 'Access-Control-Allow-Credentials' 'true'; > proxy_set_header > 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; > proxy_set_header > 'Access-Control-Allow-Headers' > 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested- > 
With,If-Modified-Since,Cache-Control,Content-Type'; > > proxy_pass > $forwarded_protocol://127.0.0.1:$forwarded_port; > > } > } > ########################################################################## > > > > > b) > Firing the following html from firefox (sensitive information changed) :: > > ########################################################################## > > > > > > ########################################################################## > > > > Following is received in the firebug-console (sensitive information > changed) :: > > ########################################################################## > GET https://23.253.207.208/ > uff.html (line 19) > Headers > > Accept > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > Accept-Encoding gzip, deflate, br > Accept-Language en-US,en;q=0.5 > Authorization Basic abcdefg > Cache-Control no-cache > Host 1.2.3.4 > Origin null > User-Agent Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:47.0) > Gecko/20100101 Firefox/47.0 > > > Cross-Origin Request Blocked: The Same Origin Policy disallows reading > the remote resource at https://1.2.3.4/. (Reason: CORS preflight > channel did not succeed). > ########################################################################## > > > I am beginning to believe that I am close to solving the issue (of > course all credit to tremendous help from this list). > I will be grateful for the last bit of help being received by the > really helpful experts here.. > > Sorry again for the noise in my previous email. > > > Thanks and Regards, > Ajay > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
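As a concrete version of Richard's suggestion, here is a sketch of the server block with "Authorization" added to the allowed headers and the preflight answered before basic auth can reject it. The header list, certificate paths, and backend address are illustrative, not a tested configuration:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        # Browsers send the OPTIONS preflight without credentials, so
        # answer it before auth_basic gets a chance to return 401.
        if ($request_method = OPTIONS) {
            add_header 'Access-Control-Allow-Origin' $http_origin always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
            # "Authorization" is the addition Richard points out.
            add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
            add_header 'Access-Control-Max-Age' 1728000 always;
            return 204;
        }

        auth_basic 'Restricted';
        auth_basic_user_file /etc/nginx/ssl/.htpasswd;

        add_header 'Access-Control-Allow-Origin' $http_origin always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;

        proxy_pass https://127.0.0.1:8443;
    }
}
```

The `if` block is safe here because it only adds headers and returns; allowing "Authorization" in the preflight response is what lets the browser send basic-auth credentials on the actual cross-origin request.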
URL: From nginx-forum at forum.nginx.org Thu Apr 13 19:54:53 2017 From: nginx-forum at forum.nginx.org (Unsay) Date: Thu, 13 Apr 2017 15:54:53 -0400 Subject: nginScript and accessing cookies In-Reply-To: References: Message-ID: Okay, I've found the answer to my question. nginScript supports access to all of NGINX's variables. This also provides access to the cookie like so: myfunction(req, res) { var cookie = req.variables.http_cookie; } Kind regards Andreas Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273509,273626#msg-273626 From nginx-forum at forum.nginx.org Thu Apr 13 20:16:41 2017 From: nginx-forum at forum.nginx.org (Unsay) Date: Thu, 13 Apr 2017 16:16:41 -0400 Subject: nginScript filesystem access Message-ID: <8ee5b048352077554746b3ecd2915799.NginxMailingListEnglish@forum.nginx.org> Hello We'd like to move a rather complicated multilingual website over to nginx. I must determine if we can handle the somewhat involved language based redirects in nginScript middleware (javascript). At one point I must redirect based on whether or not a file exists. In the configuration file I could do this like so: if (-f $realpath_root$url) { ... } However, that doesn't help me since all the redirection logic is in a javascript function. How can I do that same test from within the javascript function rather than in the config file? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273628,273628#msg-273628 From nginx-forum at forum.nginx.org Thu Apr 13 21:26:35 2017 From: nginx-forum at forum.nginx.org (daveyfx) Date: Thu, 13 Apr 2017 17:26:35 -0400 Subject: auth_basic and satisfy allowing all traffic Message-ID: <60dae17a8cf35ed9361be4a76b93bd67.NginxMailingListEnglish@forum.nginx.org> Hi all - I'm having an issue trying to get auth_basic and satisfy directives working in tandem. If I use auth_basic/auth_basic_user_file on its own, I am prompted for credentials as expected. 
However, if I added the satisfy/allow/deny directives above, it seems that ALL traffic is allowed in without prompting for auth. Here's how I have it. satisfy any; allow 38.103.XX.XXX/32; # HQIP allow 38.118.XX.XXX/32; # User VPN IP deny all; auth_basic "Site Restricted"; auth_basic_user_file includes/htpasswd.site.dev.conf; When I look through my access logs, I see the correct client IP as well. nginx version is 1.10.1 Thank you for your help. Dave Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273629,273629#msg-273629 From gfrankliu at gmail.com Thu Apr 13 23:08:23 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 13 Apr 2017 16:08:23 -0700 Subject: weight and balancing in upstream proxy In-Reply-To: <20170413143454.GH13617@mdounin.ru> References: <20170413143454.GH13617@mdounin.ru> Message-ID: <646E45D3-15EE-4917-81AD-C182FA32D548@gmail.com> Hi Maxim, Thanks for pointing out the link is not related. Do you have the answer to the original question or a related link? Thanks Frank > On Apr 13, 2017, at 7:34 AM, Maxim Dounin wrote: > > Hello! > >> On Thu, Apr 13, 2017 at 10:09:16AM +0200, B.R. via nginx wrote: >> >> That is an interesting question as intuitively, people could think the >> former behavior applies. >> >> If I got the source code >> >> right, and as the docs >> >> state, nginx is following a weighted round-robin >> algorithm. >> It thus means it will go over the same list of servers every time a peer >> needs to be chosen (ie for every request), and pick the first not having >> depleted its weight allocation. >> >> To me, it would use the latter of your proposals. >> Please correct me if I am wrong, so incorrect information does not >> propagate too much. :o) > > The Wikipedia link in question doesn't seem to be related to what > nginx does.
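The algorithm nginx actually implements is the "smooth" weighted round-robin introduced in changeset c90801720a0c (cited later in this thread). A toy Python model of it (not nginx's code) shows the order produced for weights 1, 2, 4:

```python
def smooth_wrr(weights, n):
    """Model of smooth weighted round-robin: on each pick, every
    server's current weight grows by its configured weight; the
    largest current weight wins and is then reduced by the total
    of all configured weights."""
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    order = []
    for _ in range(n):
        for name, w in weights.items():
            current[name] += w
        best = max(current, key=current.get)  # first maximum wins ties
        current[best] -= total
        order.append(best)
    return order

# One full cycle for weights 1, 2, 4 (seven requests):
print(smooth_wrr({"s1": 1, "s2": 2, "s3": 4}, 7))
# -> ['s3', 's2', 's3', 's1', 's3', 's2', 's3']
```

Notably, neither of the two orders guessed in the quoted question appears: the higher-weight server is interleaved throughout the cycle rather than drained first or saved for last.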
> > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Thu Apr 13 23:35:58 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 00:35:58 +0100 Subject: Nginx - API Gateway is not forwarding the request to Auth Service In-Reply-To: <5d1cebcbf1c93c193ff939a7bb80d8f7.NginxMailingListEnglish@forum.nginx.org> References: <20170412214450.GQ3428@daoine.org> <5d1cebcbf1c93c193ff939a7bb80d8f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170413233558.GA10157@daoine.org> On Thu, Apr 13, 2017 at 01:00:20AM -0400, zaidahmd wrote: Hi there, > > I've read your description, and I confess I'm not sure what benefit > > auth_request within nginx gives you. It looks like your application is > > doing its own auth check on every request anyway, so having nginx do > > the same thing seems redundant. I'm probably missing something. > > can u please tell me the concept and usage of auth_request module. What I My understanding is that the auth_request module is especially useful if you want some degree of authorization, but the application you want to protect does not implement it. > understood from the documentation is stated > below."http://nginx.org/en/docs/http/ngx_http_auth_request_module.html" For what it's worth, when I read the above url, the understanding I come to does not exactly match the paragraph you write. But that does not really matter: you have a configuration that does what you want it to do. So what you have is good. If you particularly want to test whether auth_request does anything useful in your case, use a test system with the same configuration, and remove the auth_request lines. If the effective behaviour changes, then the lines were doing something that matters. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Apr 13 23:49:07 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 00:49:07 +0100 Subject: weight and balancing in upstream proxy In-Reply-To: References: Message-ID: <20170413234907.GB10157@daoine.org> On Wed, Apr 12, 2017 at 02:50:08PM -0700, Frank Liu wrote: Hi there, > How does nginx balances traffic to upstream with different weight? If I > have 3 servers in upstream, with weight 1, 2, 4, assuming all are healthy, > will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, 2, 3, 3, 3, > 3? If you want to know what your current nginx version does, it should not be too difficult to test: One nginx.conf. One http section. One upstream{} listing multiple ip:ports. One server{} which proxy_pass:es to that upstream. Multiple server{}s, each of which listens on one ip:port, writes to a different access_log, and does something like "return 200 ok;". Then for increasing numbers X, "GET /X" on the main server. Look at the individual access log files to see which server handled /1, which handled /2, etc. By the time you get to 700, you'll either see that there is a reliably repeating pattern, or you'll see that it is probably randomish. If you actually care about what pattern is used; or if you want to guarantee that the same pattern will be used in future nginx versions; then get your preferred code written and use that instead of whatever nginx uses. If you want to know what guarantee there is that the behaviour will not change in the future: I'd say "none", except that there is a good chance that what is written in the documentation will be honoured. Paraphrasing that, for the above case: for 7 requests, 1 will go to the first server; 2 to the second; and 4 to the third. > If I have two servers with both weight 50, will nginx will 50 requests > to server 1, and then 50 to server 2, or will it calculate the ration to be > 1:1 and send one after another? 
Same answer: it does not seem too difficult to test, if you don't want to read the available source. Good luck with it! f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Apr 14 00:17:53 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 01:17:53 +0100 Subject: auth_basic and satisfy allowing all traffic In-Reply-To: <60dae17a8cf35ed9361be4a76b93bd67.NginxMailingListEnglish@forum.nginx.org> References: <60dae17a8cf35ed9361be4a76b93bd67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170414001753.GC10157@daoine.org> On Thu, Apr 13, 2017 at 05:26:35PM -0400, daveyfx wrote: Hi there, > However, if I added the > satisfy/allow/deny directives above, it seems that ALL traffic is allowed in > without prompting for auth. It works for me. Can you provide a complete config that shows the problem you report? What I have is: == server { listen 8080; satisfy any; allow 127.0.0.1/32; allow 127.0.0.2/32; deny all; auth_basic "Site Restricted"; auth_basic_user_file includes/htpasswd.site.dev.conf; } == Then "curl -i http://127.0.0.2:8080/x" returns 200 with the content of /usr/local/nginx/html/x, while "curl -i http://127.0.0.3:8080/x" returns 401 with WWW-Authenticate: Basic realm="Site Restricted" What do you see when you do that exact test? How does it differ from the problem case you reported? Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Apr 14 00:47:55 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 01:47:55 +0100 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: <20170414004755.GD10157@daoine.org> On Thu, Apr 13, 2017 at 08:20:15PM +0530, Ajay Garg wrote: Hi there, > There has been some progress, but still get a "CORS preflight did not > succeed error". What do the nginx logs say happened? What should the nginx logs say, if everything worked the way you want it to?
> Following is received in the firebug-console (sensitive information changed) :: > Host 1.2.3.4 > Origin null Does anything different happen if you serve this html file from your 1.2.3.4 server, instead of (I presume) by reading a local file? Will your final use case involve a local file, a resource from the 1.2.3.4 server, or something else? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Apr 14 03:49:50 2017 From: nginx-forum at forum.nginx.org (daveyfx) Date: Thu, 13 Apr 2017 23:49:50 -0400 Subject: auth_basic and satisfy allowing all traffic In-Reply-To: <20170414001753.GC10157@daoine.org> References: <20170414001753.GC10157@daoine.org> Message-ID: <593b50077c0b6f4db932cbd90e74e564.NginxMailingListEnglish@forum.nginx.org> Hi Francis - In both cases, I get a 404 response, which is to be expected as the default doc root for nginx isn't served on my host. I should expect a 401 on the second curl test, but I get a 404. HTTP/1.1 404 Not Found Server: nginx Date: Fri, 14 Apr 2017 03:44:19 GMT Content-Type: text/html Content-Length: 162 Connection: keep-alive Vary: Accept-Encoding 404 Not Found

404 Not Found
nginx
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273629,273636#msg-273636 From ajaygargnsit at gmail.com Fri Apr 14 04:47:15 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Fri, 14 Apr 2017 10:17:15 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: Hi Richard. You have got me thinking ... https://username:password at 1.2.3.4/ works, even without ANY of the "add_header" and "proxy_set_header" directives. So, now the only thing that worries me is security. http://stackoverflow.com/questions/4143196/is-get-data-also-encrypted-in-https indicates that the URL is safe, in the sense that "username" and "password" would not be sniffable through a man-in-the-middle attack, right? Also, since 1.2.3.4 is our own server, so we are not really bothered about GET-requests getting logged on the server, so we should be good. Do I make sense? Kindly let know your thoughts. Thanks and Regards, Ajay On Thu, Apr 13, 2017 at 11:07 PM, Richard Stanway wrote: > You're missing the "Authorization" header in your Access-Control-Allow-Headers > directive. > > You can alternatively pass the basic auth in your URI, eg xhr.open("GET", " > https://username:password at 1.2.3.4/") rather than crafting it manually. > > On Thu, Apr 13, 2017 at 4:50 PM, Ajay Garg wrote: > >> Strange, but rebooting the machine caused the credentials-popup to be >> seen again :-| >> Sorry for the noise here. >> >> There has been some progress, but still get a "CORS preflight did not >> succeed error". >> Following is what I am doing. 
>> >> >> a) >> Following is the server-block in /etc/nginx/conf.d/default.conf :: >> >> ############################################################ >> ############## >> server { >> >> listen 443 ssl; >> >> ssl_certificate /etc/nginx/ssl/nginx.crt; >> ssl_certificate_key /etc/nginx/ssl/nginx.key; >> >> add_header 'Access-Control-Max-Age' 1728000 'always'; >> add_header 'Access-Control-Allow-Origin' $http_origin >> 'always'; >> add_header 'Access-Control-Allow-Credentials' 'true' >> 'always'; >> add_header 'Access-Control-Allow-Methods' 'GET, POST, >> OPTIONS' 'always'; >> add_header 'Access-Control-Allow-Headers' >> 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive, >> User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' >> 'always'; >> >> location / { >> >> auth_basic 'Restricted'; >> auth_basic_user_file /etc/nginx/ssl/.htpasswd; >> >> proxy_set_header 'Access-Control-Max-Age' 1728000; >> proxy_set_header 'Access-Control-Allow-Origin' >> '*'; >> proxy_set_header >> 'Access-Control-Allow-Credentials' 'true'; >> proxy_set_header >> 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; >> proxy_set_header >> 'Access-Control-Allow-Headers' >> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With, >> If-Modified-Since,Cache-Control,Content-Type'; >> >> proxy_pass >> $forwarded_protocol://127.0.0.1:$forwarded_port; >> >> } >> } >> ############################################################ >> ############## >> >> >> >> >> b) >> Firing the following html from firefox (sensitive information changed) :: >> >> ############################################################ >> ############## >> >> >> >> >> >> ############################################################ >> ############## >> >> >> >> Following is received in the firebug-console (sensitive information >> changed) :: >> >> ############################################################ >> ############## >> GET https://23.253.207.208/ >> uff.html (line 19) >> Headers >> >> Accept >> 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 >> Accept-Encoding gzip, deflate, br >> Accept-Language en-US,en;q=0.5 >> Authorization Basic abcdefg >> Cache-Control no-cache >> Host 1.2.3.4 >> Origin null >> User-Agent Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:47.0) >> Gecko/20100101 Firefox/47.0 >> >> >> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >> the remote resource at https://1.2.3.4/. (Reason: CORS preflight >> channel did not succeed). >> ############################################################ >> ############## >> >> >> I am beginning to believe that I am close to solving the issue (of >> course all credit to tremendous help from this list). >> I will be grateful for the last bit of help being received by the >> really helpful experts here.. >> >> Sorry again for the noise in my previous email. >> >> >> Thanks and Regards, >> Ajay >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Fri Apr 14 05:08:21 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 13 Apr 2017 22:08:21 -0700 Subject: weight and balancing in upstream proxy In-Reply-To: <20170413234907.GB10157@daoine.org> References: <20170413234907.GB10157@daoine.org> Message-ID: Hi Francis, Thanks for confirming that there is no document, and any results observed through testing or reviewing code will not be guaranteed. I guess it is purposely undocumented so that people won't rely on one behavior and we are free to change. 
Regards, Frank On Thu, Apr 13, 2017 at 4:49 PM, Francis Daly wrote: > On Wed, Apr 12, 2017 at 02:50:08PM -0700, Frank Liu wrote: > > Hi there, > > > How does nginx balances traffic to upstream with different weight? If I > > have 3 servers in upstream, with weight 1, 2, 4, assuming all are > healthy, > > will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, 2, 3, 3, > 3, > > 3? > > If you want to know what your current nginx version does, it should not > be too difficult to test: > > One nginx.conf. One http section. One upstream{} listing multiple > ip:ports. One server{} which proxy_pass:es to that upstream. Multiple > server{}s, each of which listens on one ip:port, writes to a different > access_log, and does something like "return 200 ok;". > > Then for increasing numbers X, "GET /X" on the main server. Look at > the individual access log files to see which server handled /1, which > handled /2, etc. By the time you get to 700, you'll either see that > there is a reliably repeating pattern, or you'll see that it is probably > randomish. > > > If you actually care about what pattern is used; or if you want to > guarantee that the same pattern will be used in future nginx versions; > then get your preferred code written and use that instead of whatever > nginx uses. > > > If you want to know what guarantee there is that the behaviour will not > change in the future: I'd say "none", except that there is a good chance > that what is written in the documentation will be honoured. Paraphrasing > that, for the above case: for 7 requests, 1 will go to the first server; > 2 to the second; and 4 to the third. > > > If I have two servers with both weight 50, will nginx will 50 requests > > to server 1, and then 50 to server 2, or will it calculate the ration to > be > > 1:1 and send one after another? > > Same answer: it does not seem to difficult to test, if you don't want > to read the available source. > > Good luck with it! 
> > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Apr 14 07:13:47 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 14 Apr 2017 09:13:47 +0200 Subject: weight and balancing in upstream proxy In-Reply-To: <20170413143454.GH13617@mdounin.ru> References: <20170413143454.GH13617@mdounin.ru> Message-ID: Please, enlighten us then. --- *B. R.* On Thu, Apr 13, 2017 at 4:34 PM, Maxim Dounin wrote: > Hello! > > On Thu, Apr 13, 2017 at 10:09:16AM +0200, B.R. via nginx wrote: > > > That is an interesting questions as intuitively, people could think the > > former behavior applies. > > > > If I got the source code > > http_upstream_round_robin.c#L507> > > right, and as the docs > > > > state, nginx is following a weighted round-robin > > algorithm. > > It thus means it will go over the same list of servers everytime a peer > > needs to be chosen (ie for every request), and pick the first not having > > depleted its weight allocation. > > > > To me, it would use the latter of your proposals. > > ?Please correct me if I am wrong, so incorrect information does not > > propagate too much. :o)? > > The Wikipedia link in question doesn't seem to be related to what > nginx does. > > -- > Maxim Dounin > http://nginx.org/ > -------------- next part -------------- An HTML attachment was scrubbed... 
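The test rig Francis describes in the quoted message can be written down directly. Ports, weights, and log paths below are invented for illustration:

```nginx
http {
    upstream backends {
        server 127.0.0.1:9001 weight=1;
        server 127.0.0.1:9002 weight=2;
        server 127.0.0.1:9003 weight=4;
    }

    # Front end: proxy every request to the weighted group.
    server {
        listen 8080;
        location / {
            proxy_pass http://backends;
        }
    }

    # One block like this per backend, each with its own access log,
    # so grepping the logs shows which server handled /1, /2, ...
    server {
        listen 9001;
        access_log logs/backend1.log;
        return 200 "ok\n";
    }
    server {
        listen 9002;
        access_log logs/backend2.log;
        return 200 "ok\n";
    }
    server {
        listen 9003;
        access_log logs/backend3.log;
        return 200 "ok\n";
    }
}
```

Requesting /1 through /700 with curl and counting the lines in each backend log then reveals the realized selection pattern.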
URL: From francis at daoine.org Fri Apr 14 07:35:43 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 08:35:43 +0100 Subject: weight and balancing in upstream proxy In-Reply-To: References: <20170413234907.GB10157@daoine.org> Message-ID: <20170414073543.GE10157@daoine.org> On Thu, Apr 13, 2017 at 10:08:21PM -0700, Frank Liu wrote: Hi there, > Thanks for confirming that there is no document, and any results observed > through testing or reviewing code will not be guaranteed. I guess it is > purposely undocumented so that people won't rely on one behavior and we are > free to change. The guarantee is written in the licence, available at http://nginx.org/LICENSE. Something similar is true for every aspect of every piece of behaviour of every piece of software. If there is specific behaviour you want, you always have the option of finding someone who will commit to providing and preserving that behaviour in future versions of the nginx that you use. Or, if you know through testing and code review that the behaviour of 1.10.0 is exactly what you want, you always have the option of continuing to use that version. (Compiled with your version of a compiler and support libraries, and running with your version of an operating system and runtime libraries.) I suspect that the upstream round-robin implementation will not change in the future without good reason, just like I expect that HTTP/1.0 support will not change without good reason. But if things do change, and if my comfort depends on the old behaviour persisting, I have the freedom to ensure that the old behaviour does persist on my systems. So I'd say that if it matters, you can see what the current code does, and base your system on that. 
Note that if your system relies on all upstreams remaining always available, your system is probably fragile -- if you need a specific pattern of access, and one of your upstreams is unavailable when nginx wants it, the current nginx "recovery" behaviour may or may not be what you want. And *that* behaviour (I think) depends on timings of incoming requests, so may not be fully under your control anyway. Good luck with it, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Fri Apr 14 07:41:36 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 14 Apr 2017 09:41:36 +0200 Subject: upstream - behavior on pool exhaustion Message-ID: Hello, Reading from upstream docs, on upstream pool exhaustion, every backend should be tried once, and then if all fail the response should be crafted based on the one from the last server attempt. So far so good. I recently faced a server farm which implements a dull nightly restart of every node, not sequencing it, resulting in the possibility of having all nodes offline at the same time. However, I collected log entries which did not match what I expected. For 6 backend nodes, I got: - log format: $status $body_bytes_sent $request_time $upstream_addr $upstream_response_time - log entry: 502 568 0.001 <IP address 1>:<port>, <IP address 2>:<port>, <IP address 3>:<port>, <IP address 4>:<port>, <IP address 5>:<port>, <IP address 6>:<port>, php-fpm 0.000, 0.000, 0.000, 0.000, 0.001, 0.000, 0.000 I got 7 entries for $upstream_addr & $upstream_response_time, instead of the expected 6. Here are the interesting parts of the configuration: upstream php-fpm { server <down IP address A>:<port> down; server <down IP address B>:<port> down; [...] server <IP address 1>:<port>; server <IP address 2>:<port>; server <IP address 3>:<port>; server <IP address 4>:<port>; server <IP address 5>:<port>; server <IP address 6>:<port>; keepalive 128; } server { set $fpm_pool "php-fpm$fpm_pool_ID"; [...] location ~ \.php$ { [...] fastcgi_read_timeout 600; fastcgi_keep_conn on; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; [...]
fastcgi_pass $fpm_pool; } } The question is: php-fpm being an upstream group name, how come has it been tried as a domain name in the end? Stated otherwise, is this because the upstream group is considered 'down', thus somehow removed from the possibilities, and nginx trying one last time the name as a domain name to see if something answers? This 7th request is definitely strange to my point of view. Is it a bug or a feature? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Apr 14 07:48:37 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 14 Apr 2017 08:48:37 +0100 Subject: auth_basic and satisfy allowing all traffic In-Reply-To: <593b50077c0b6f4db932cbd90e74e564.NginxMailingListEnglish@forum.nginx.org> References: <20170414001753.GC10157@daoine.org> <593b50077c0b6f4db932cbd90e74e564.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170414074837.GF10157@daoine.org> On Thu, Apr 13, 2017 at 11:49:50PM -0400, daveyfx wrote: Hi there, > In both cases, I get a 404 response, which is to be expected as the default > doc root for nginx isn't served on my host. I should expect a 401 on the > second curl test, but I get a 404. If your test nginx.conf contains the one server{} block that handles requests on this ip:port, and that server{} block is exactly the 9 lines from the previous mail, then I think you've found a significant bug in the implementation, that does not show itself on my system. I suspect that it is more likely that the server{} that nginx is using is not the server{} that you think nginx is using to process these requests. Or that some of the configuration that you have not shown is involved. If you can show a minimal config that works, and a minimal config that fails, then identifying the differences between the two will probably reveal the fix.
Good luck with it, f -- Francis Daly francis at daoine.org From ru at nginx.com Fri Apr 14 08:21:47 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 14 Apr 2017 11:21:47 +0300 Subject: upstream - behavior on pool exhaustion In-Reply-To: References: Message-ID: <20170414082147.GB95482@lo0.su> On Fri, Apr 14, 2017 at 09:41:36AM +0200, B.R. via nginx wrote: > Hello, > > Reading from upstream > > docs, on upstream pool exhaustion, every backend should be tried once, and > then if all fail the response should be crafted based on the one from the > last server attempt. > So far so good. > > I recently faced a server farm which implements a dull nightly restart of > every node, not sequencing it, resulting in the possibility of having all > nodes offline at the same time. > > However, I collected log entries which did not match what I was expected. > For 6 backend nodes, I got: > - log format: $status $body_bytes_sent $request_time $upstream_addr > $upstream_response_time > - log entry: 502 568 0.001 :, :, > :, :, :, address 6>:, php-fpm 0.000, 0.000, 0.000, 0.000, 0.001, 0.000, 0.000 > I got 7 entries for $upstream_addr & $upstream_response_time, instead of > the expected 6. > > ?Here are the interesting parts of the configuration: > upstream php-fpm { > server : down; > server : down; > [...] > server :; > server :; > server :; > server :; > server :; > server :; > keepalive 128; > } > > ?server { > set $fpm_pool "php-fpm$fpm_pool_ID"; > [...] > location ~ \.php$ { > [...] > fastcgi_read_timeout 600; > fastcgi_keep_conn on; > fastcgi_index index.php; > > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > [...] > fastcgi_pass $fpm_pool; > } > } > > ?The question is: > php-fpm being an upstream group name, how come has it been tried as a > domain name in the end? 
> Stated otherwise, is this because the upstream group is considered 'down', > thus somehow removed from the possibilities, and nginx trying one last time > the name as a domain name to see if something answers? > This 7th request is definitely strange to my point of view. Is it a bug or > a feature? A feature. Most $upstream_* variables are vectored ones, and the number of entries in their values corresponds to the number of tries made to select a peer. When a peer cannot be selected at all (as in your case), the status is 502 and the name equals the upstream group name. There could be several reasons why none of the peers can be selected. For example, some peers are marked "down", and other peers were failing and are now in the "unavailable" state. The number of tries is limited by the number of servers in the group, unless further restricted by proxy_next_upstream_tries. In your case, since there are two "down" servers, and other servers are unavailable, you reach the situation when a peer cannot be selected. If you comment out the two "down" servers, and try a few requests in a row when all servers are physically unavailable, the first log entry will list all of the attempted servers, and then for the next 10 seconds (in the default config) you'll see only the upstream group name and 502 in $upstream_status, until the servers become available again (see max_fails/fail_timeout). Hope this makes things a little bit clearer. From al-nginx at none.at Fri Apr 14 08:47:27 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 14 Apr 2017 10:47:27 +0200 Subject: weight and balancing in upstream proxy In-Reply-To: References: Message-ID: <54e15d5d01c6e23113a88c69413f98bb@none.at> Hi. On 12-04-2017 23:50, Frank Liu wrote: > Hi, > > How does nginx balances traffic to upstream with different weight? > If I have 3 servers in upstream, with weight 1, 2, 4, assuming all are > healthy, will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, > 2, 3, 3, 3, 3?
> If I have two servers with both weight 50, will nginx will 50 requests > to server 1, and then 50 to server 2, or will it calculate the ration > to be 1:1 and send one after another? You can find an explanation of the algorithm in the commit messages. http://hg.nginx.org/nginx/rev/c90801720a0c http://hg.nginx.org/nginx/rev/d05ab8793a69 http://hg.nginx.org/nginx/rev/0811376954e4 I found these with this query. http://hg.nginx.org/nginx/log?rev=weight Have you seen this doc? http://nginx.org/en/docs/http/load_balancing.html Best Regards Aleks From r1ch+nginx at teamliquid.net Fri Apr 14 11:31:26 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 14 Apr 2017 13:31:26 +0200 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: You're correct - placing the username and password in the URI is just as safe as any other method as long as it's going over HTTPS, and the credentials should never appear in any access logs (unless you specifically choose to log the Authorization header). On Fri, Apr 14, 2017 at 6:47 AM, Ajay Garg wrote: > Hi Richard. > > You have got me thinking ... > https://username:password at 1.2.3.4/ works, even without ANY of the > "add_header" and "proxy_set_header" directives. > > So, now the only thing that worries me is security. > > http://stackoverflow.com/questions/4143196/is-get-data- > also-encrypted-in-https indicates that the URL is safe, in the sense that > "username" and "password" would not be sniffable through a > man-in-the-middle attack, right? > > Also, since 1.2.3.4 is our own server, so we are not really bothered about > GET-requests getting logged on the server, so we should be good. > > Do I make sense? > > Kindly let know your thoughts. > > > Thanks and Regards, > Ajay > > On Thu, Apr 13, 2017 at 11:07 PM, Richard Stanway < > r1ch+nginx at teamliquid.net> wrote: > >> You're missing the "Authorization" header in >> your Access-Control-Allow-Headers directive.
>> >> You can alternatively pass the basic auth in your URI, eg xhr.open("GET", >> "https://username:password at 1.2.3.4/") rather than crafting it manually. >> >> On Thu, Apr 13, 2017 at 4:50 PM, Ajay Garg >> wrote: >> >>> Strange, but rebooting the machine caused the credentials-popup to be >>> seen again :-| >>> Sorry for the noise here. >>> >>> There has been some progress, but still get a "CORS preflight did not >>> succeed error". >>> Following is what I am doing. >>> >>> >>> a) >>> Following is the server-block in /etc/nginx/conf.d/default.conf :: >>> >>> ############################################################ >>> ############## >>> server { >>> >>> listen 443 ssl; >>> >>> ssl_certificate /etc/nginx/ssl/nginx.crt; >>> ssl_certificate_key /etc/nginx/ssl/nginx.key; >>> >>> add_header 'Access-Control-Max-Age' 1728000 'always'; >>> add_header 'Access-Control-Allow-Origin' $http_origin >>> 'always'; >>> add_header 'Access-Control-Allow-Credentials' 'true' >>> 'always'; >>> add_header 'Access-Control-Allow-Methods' 'GET, POST, >>> OPTIONS' 'always'; >>> add_header 'Access-Control-Allow-Headers' >>> 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,U >>> ser-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' >>> 'always'; >>> >>> location / { >>> >>> auth_basic 'Restricted'; >>> auth_basic_user_file /etc/nginx/ssl/.htpasswd; >>> >>> proxy_set_header 'Access-Control-Max-Age' >>> 1728000; >>> proxy_set_header 'Access-Control-Allow-Origin' >>> '*'; >>> proxy_set_header >>> 'Access-Control-Allow-Credentials' 'true'; >>> proxy_set_header >>> 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; >>> proxy_set_header >>> 'Access-Control-Allow-Headers' >>> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,I >>> f-Modified-Since,Cache-Control,Content-Type'; >>> >>> proxy_pass >>> $forwarded_protocol://127.0.0.1:$forwarded_port; >>> >>> } >>> } >>> ############################################################ >>> ############## >>> >>> 
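[Editorial sketch:] as Richard points out in this thread, the Access-Control-Allow-Headers list in the configuration above does not include "Authorization", so a preflight for an authenticated cross-origin request will be refused. A minimal correction might look like this (other header names abbreviated; this is a sketch, not a verified fix for the poster's setup):

```nginx
add_header 'Access-Control-Allow-Headers'
           'Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type' always;
```

Note also that proxy_set_header modifies the request headers nginx sends to the upstream server, not the response headers the browser sees; the Access-Control-* proxy_set_header lines in the location block above therefore have no effect on CORS.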
>>> >>> >>> b) >>> Firing the following html from firefox (sensitive information changed) :: >>> >>> ############################################################ >>> ############## >>> >>> >>> >>> >>> >>> ############################################################ >>> ############## >>> >>> >>> >>> Following is received in the firebug-console (sensitive information >>> changed) :: >>> >>> ############################################################ >>> ############## >>> GET https://23.253.207.208/ >>> uff.html (line 19) >>> Headers >>> >>> Accept >>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 >>> Accept-Encoding gzip, deflate, br >>> Accept-Language en-US,en;q=0.5 >>> Authorization Basic abcdefg >>> Cache-Control no-cache >>> Host 1.2.3.4 >>> Origin null >>> User-Agent Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:47.0) >>> Gecko/20100101 Firefox/47.0 >>> >>> >>> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >>> the remote resource at https://1.2.3.4/. (Reason: CORS preflight >>> channel did not succeed). >>> ############################################################ >>> ############## >>> >>> >>> I am beginning to believe that I am close to solving the issue (of >>> course all credit to tremendous help from this list). >>> I will be grateful for the last bit of help being received by the >>> really helpful experts here.. >>> >>> Sorry again for the noise in my previous email. 
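[Editorial sketch:] one common cause of a "CORS preflight channel did not succeed" error when auth_basic is enabled is that the browser sends the OPTIONS preflight without credentials, so nginx answers it with 401 before any CORS headers can matter. A frequently used workaround (a sketch, not from this thread; header list abbreviated) is to answer OPTIONS before authentication:

```nginx
location / {
    if ($request_method = OPTIONS) {
        add_header 'Access-Control-Allow-Origin'  $http_origin always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type' always;
        add_header 'Access-Control-Max-Age'       1728000 always;
        return 204;    # preflight answered; no body needed
    }

    auth_basic           'Restricted';
    auth_basic_user_file /etc/nginx/ssl/.htpasswd;
    # ... proxy_pass as in the configuration above ...
}
```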
>>>
>>> Thanks and Regards,
>>> Ajay
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
>
> --
> Regards,
> Ajay
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ajaygargnsit at gmail.com  Fri Apr 14 13:12:22 2017
From: ajaygargnsit at gmail.com (Ajay Garg)
Date: Fri, 14 Apr 2017 18:42:22 +0530
Subject: Unable to use a GET url-param
Message-ID:

Hi All.

When I do the following call ::

https://username:password at 1.2.3.4?upstream_protocol=http

I get a 500 error (on the browser-client), with the following seen in
/var/log/nginx/error.log (on nginx-server)

######################################################
2017/04/14 13:03:51 [error] 16039#16039: *1 invalid URL prefix in "://
127.0.0.1:5000", client: 182.69.5.226, server: , request: "GET
/cgi-bin/webproc HTTP/1.1", host: "1.2.3.4", referrer: "
https://1.2.3.4/?upstream_protocol=http"
######################################################

Following is the server-block section in /etc/nginx/conf.d/default.conf

######################################################
server {

        listen 443 ssl;

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

        location / {

                auth_basic 'Restricted';
                auth_basic_user_file /etc/nginx/ssl/.htpasswd;

                proxy_pass $arg_upstream_protocol://127.0.0.1:$forwarded_port;
        }
}
######################################################

It definitely looks like the "upstream_protocol" parameter is not being
picked up by $arg_upstream_protocol.

What am I missing?
Will be grateful for pointers.
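[Editorial sketch:] the error log above hints at the actual cause: the failing request is "GET /cgi-bin/webproc", which arrives without the upstream_protocol query parameter (only the referrer carries it), so $arg_upstream_protocol expands to an empty string and the proxy_pass URL begins with "://". A map with a default value makes such follow-up requests fall back to a fixed scheme (a sketch; the variable name $upstream_scheme is invented):

```nginx
# http{} level (map is not allowed inside server{})
map $arg_upstream_protocol $upstream_scheme {
    default http;     # requests arriving without the parameter fall back
    https   https;
}
```

The location block would then use `proxy_pass $upstream_scheme://127.0.0.1:$forwarded_port;`.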
Thanks and Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajaygargnsit at gmail.com Fri Apr 14 13:13:26 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Fri, 14 Apr 2017 18:43:26 +0530 Subject: Unable to resolve the "Access-Control-Allow-Origin" issue In-Reply-To: References: Message-ID: Thanks a ton Richard !! I will ask my colleague if this works in angularjs on Monday; my gut feel is it will :) Thanks a ton guys !!! Thanks and Regards, Ajay On Fri, Apr 14, 2017 at 5:01 PM, Richard Stanway wrote: > You're correct - placing the username and password in the URI is just as > safe as any other method as long as it's going over HTTPS, and the > credentials should never appear in any access logs (unless you specifically > choose to log the Authorization header). > > On Fri, Apr 14, 2017 at 6:47 AM, Ajay Garg wrote: > >> Hi Richard. >> >> You have got me thinking ... >> https://username:password at 1.2.3.4/ works, even without ANY of the >> "add_header" and "proxy_set_header" directives. >> >> So, now the only thing that worries me is security. >> >> http://stackoverflow.com/questions/4143196/is-get-data-also- >> encrypted-in-https indicates that the URL is safe, in the sense that >> "username" and "password" would not be sniffable through a >> man-in-the-middle attack, right? >> >> Also, since 1.2.3.4 is our own server, so we are not really bothered >> about GET-requests getting logged on the server, so we should be good. >> >> Do I make sense? >> >> Kindly let know your thoughts. >> >> >> Thanks and Regards, >> Ajay >> >> On Thu, Apr 13, 2017 at 11:07 PM, Richard Stanway < >> r1ch+nginx at teamliquid.net> wrote: >> >>> You're missing the "Authorization" header in >>> your Access-Control-Allow-Headers directive. >>> >>> You can alternatively pass the basic auth in your URI, eg >>> xhr.open("GET", "https://username:password at 1.2.3.4/") rather than >>> crafting it manually. 
>>> >>> On Thu, Apr 13, 2017 at 4:50 PM, Ajay Garg >>> wrote: >>> >>>> Strange, but rebooting the machine caused the credentials-popup to be >>>> seen again :-| >>>> Sorry for the noise here. >>>> >>>> There has been some progress, but still get a "CORS preflight did not >>>> succeed error". >>>> Following is what I am doing. >>>> >>>> >>>> a) >>>> Following is the server-block in /etc/nginx/conf.d/default.conf :: >>>> >>>> ############################################################ >>>> ############## >>>> server { >>>> >>>> listen 443 ssl; >>>> >>>> ssl_certificate /etc/nginx/ssl/nginx.crt; >>>> ssl_certificate_key /etc/nginx/ssl/nginx.key; >>>> >>>> add_header 'Access-Control-Max-Age' 1728000 'always'; >>>> add_header 'Access-Control-Allow-Origin' $http_origin >>>> 'always'; >>>> add_header 'Access-Control-Allow-Credentials' 'true' >>>> 'always'; >>>> add_header 'Access-Control-Allow-Methods' 'GET, POST, >>>> OPTIONS' 'always'; >>>> add_header 'Access-Control-Allow-Headers' >>>> 'DNT,Access-Control-Allow-Origin,X-CustomHeader,Keep-Alive,U >>>> ser-Agent,X-Requested-With,If-Modified-Since,Cache-Control,C >>>> ontent-Type' >>>> 'always'; >>>> >>>> location / { >>>> >>>> auth_basic 'Restricted'; >>>> auth_basic_user_file /etc/nginx/ssl/.htpasswd; >>>> >>>> proxy_set_header 'Access-Control-Max-Age' >>>> 1728000; >>>> proxy_set_header 'Access-Control-Allow-Origin' >>>> '*'; >>>> proxy_set_header >>>> 'Access-Control-Allow-Credentials' 'true'; >>>> proxy_set_header >>>> 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; >>>> proxy_set_header >>>> 'Access-Control-Allow-Headers' >>>> 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,I >>>> f-Modified-Since,Cache-Control,Content-Type'; >>>> >>>> proxy_pass >>>> $forwarded_protocol://127.0.0.1:$forwarded_port; >>>> >>>> } >>>> } >>>> ############################################################ >>>> ############## >>>> >>>> >>>> >>>> >>>> b) >>>> Firing the following html from firefox (sensitive information 
changed) >>>> :: >>>> >>>> ############################################################ >>>> ############## >>>> >>>> >>>> >>>> >>>> >>>> ############################################################ >>>> ############## >>>> >>>> >>>> >>>> Following is received in the firebug-console (sensitive information >>>> changed) :: >>>> >>>> ############################################################ >>>> ############## >>>> GET https://23.253.207.208/ >>>> uff.html (line 19) >>>> Headers >>>> >>>> Accept >>>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 >>>> Accept-Encoding gzip, deflate, br >>>> Accept-Language en-US,en;q=0.5 >>>> Authorization Basic abcdefg >>>> Cache-Control no-cache >>>> Host 1.2.3.4 >>>> Origin null >>>> User-Agent Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:47.0) >>>> Gecko/20100101 Firefox/47.0 >>>> >>>> >>>> Cross-Origin Request Blocked: The Same Origin Policy disallows reading >>>> the remote resource at https://1.2.3.4/. (Reason: CORS preflight >>>> channel did not succeed). >>>> ############################################################ >>>> ############## >>>> >>>> >>>> I am beginning to believe that I am close to solving the issue (of >>>> course all credit to tremendous help from this list). >>>> I will be grateful for the last bit of help being received by the >>>> really helpful experts here.. >>>> >>>> Sorry again for the noise in my previous email. 
>>>> >>>> >>>> Thanks and Regards, >>>> Ajay >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> >> -- >> Regards, >> Ajay >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Apr 14 19:26:41 2017 From: nginx-forum at forum.nginx.org (daveyfx) Date: Fri, 14 Apr 2017 15:26:41 -0400 Subject: auth_basic and satisfy allowing all traffic In-Reply-To: <20170414074837.GF10157@daoine.org> References: <20170414074837.GF10157@daoine.org> Message-ID: <7095024bb656adc5a864b59dcbfbd952.NginxMailingListEnglish@forum.nginx.org> Hi Francis - That would have been my suspicion as well. To test that theory, I installed the same nginx 1.10.1 RPM file on a similar CentOS 6 virtual machine in my environment. This particular VM has never been used for any nginx testing, nor has it ever had nginx installed. I tested the same server configuration as your example, but the testing VM produced the same results. The satisfy/allow/deny directives allow bypassing of the basic_auth. Once those entries have been commented out, auth works as expected. Would there be additional steps involved in determining if this is, in fact, a bug? Thank you for your help. 
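[Editorial sketch:] for reference, this is the documented behaviour being tested in this thread. With "satisfy any", a request is accepted if it matches the allow list OR presents valid credentials, so an allow rule matching the test client's address bypasses auth_basic by design (addresses here are illustrative):

```nginx
location / {
    satisfy any;               # pass if EITHER condition below holds
    allow   192.168.1.0/24;    # clients from this range skip basic auth
    deny    all;               # everyone else must authenticate

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```

If the goal is to require BOTH the address match and credentials, the directive is "satisfy all" (the default).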
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273629,273656#msg-273656 From gfrankliu at gmail.com Fri Apr 14 20:01:54 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 14 Apr 2017 13:01:54 -0700 Subject: weight and balancing in upstream proxy In-Reply-To: <54e15d5d01c6e23113a88c69413f98bb@none.at> References: <54e15d5d01c6e23113a88c69413f98bb@none.at> Message-ID: Hi Aleks, Those information are extremely helpful. Much appreciated! Regards, Frank On Fri, Apr 14, 2017 at 1:47 AM, Aleksandar Lazic wrote: > Hi. > > Am 12-04-2017 23:50, schrieb Frank Liu: > > Hi, >> >> How does nginx balances traffic to upstream with different weight? >> If I have 3 servers in upstream, with weight 1, 2, 4, assuming all are >> healthy, will nginx send traffic to server 1, 2, 3, 2, 3, 3, 3 or 1, 2, 2, >> 3, 3, 3, 3? >> If I have two servers with both weight 50, will nginx will 50 requests to >> server 1, and then 50 to server 2, or will it calculate the ration to be >> 1:1 and send one after another? >> > > You can find a explanation of the algorithm in the commit messages. > > http://hg.nginx.org/nginx/rev/c90801720a0c > http://hg.nginx.org/nginx/rev/d05ab8793a69 > http://hg.nginx.org/nginx/rev/0811376954e4 > > I have found this with this query. > > http://hg.nginx.org/nginx/log?rev=weight > > Have you seen this doc? > http://nginx.org/en/docs/http/load_balancing.html > > Best Regards > Aleks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Apr 15 01:55:20 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 15 Apr 2017 03:55:20 +0200 Subject: upstream - behavior on pool exhaustion In-Reply-To: <20170414082147.GB95482@lo0.su> References: <20170414082147.GB95482@lo0.su> Message-ID: Let me be clear here: I got 6 active servers (not marked down), and the logs show 1 attempt on each. They all failed for a known reason, and there is no problem there. 
Subsequently, the whole pool was 'down' and the response was 502.
Everything perfectly normal so far.

What is unclear is the feature (as you classified it) of having a fake
node named after the pool appearing in the list of tried upstream
servers.
It brings confusion more than anything else: having a 502 response + the
list of all tried (and failed) nodes corresponding with the list of active
nodes is more than enough to describe what happened.
The name of the upstream group does not correspond to any real asset; it
is a purely virtual classification. It thus makes no sense at all to me to
have it appearing as a 7th 'node' in the list... and how do you interpret
its response time (where you also got a 7th item in the list)?
Moreover, it is confusing, since proxy_pass handles domain names and one
could believe nginx treated the upstream group name as such.
---
*B. R.*

On Fri, Apr 14, 2017 at 10:21 AM, Ruslan Ermilov wrote:

> On Fri, Apr 14, 2017 at 09:41:36AM +0200, B.R. via nginx wrote:
> > Hello,
> >
> > Reading from upstream
> > docs, on upstream pool exhaustion, every backend should be tried once,
> > and then if all fail the response should be crafted based on the one
> > from the last server attempt.
> > So far so good.
> >
> > I recently faced a server farm which implements a dull nightly restart
> > of every node, not sequencing it, resulting in the possibility of
> > having all nodes offline at the same time.
> >
> > However, I collected log entries which did not match what I expected.
> > For 6 backend nodes, I got:
> > - log format: $status $body_bytes_sent $request_time $upstream_addr
> > $upstream_response_time
> > - log entry: 502 568 0.001 :, :,
> > :, :, :,
> > address 6>:, php-fpm 0.000, 0.000, 0.000, 0.000, 0.001, 0.000,
> > 0.000
> > I got 7 entries for $upstream_addr & $upstream_response_time, instead of
> > the expected 6.
> > Here are the interesting parts of the configuration:
> > upstream php-fpm {
> >     server : down;
> >     server : down;
> >     [...]
> >     server :;
> >     server :;
> >     server :;
> >     server :;
> >     server :;
> >     server :;
> >     keepalive 128;
> > }
> >
> > server {
> >     set $fpm_pool "php-fpm$fpm_pool_ID";
> >     [...]
> >     location ~ \.php$ {
> >         [...]
> >         fastcgi_read_timeout 600;
> >         fastcgi_keep_conn on;
> >         fastcgi_index index.php;
> >
> >         include fastcgi_params;
> >         fastcgi_param SCRIPT_FILENAME
> >             $document_root$fastcgi_script_name;
> >         [...]
> >         fastcgi_pass $fpm_pool;
> >     }
> > }
> >
> > The question is:
> > php-fpm being an upstream group name, how come has it been tried as a
> > domain name in the end?
> > Stated otherwise, is this because the upstream group is considered
> > 'down', thus somehow removed from the possibilities, and nginx trying
> > one last time the name as a domain name to see if something answers?
> > This 7th request is definitely strange to my point of view. Is it a bug
> > or a feature?

A feature.

Most $upstream_* variables are vectored ones, and the number of entries
in their values corresponds to the number of tries made to select a peer.
When a peer cannot be selected at all (as in your case), the status is
502 and the name equals the upstream group name.

There could be several reasons why none of the peers can be selected.
For example, some peers are marked "down", and other peers were failing
and are now in the "unavailable" state.

The number of tries is limited by the number of servers in the group,
unless further restricted by proxy_next_upstream_tries. In your case,
since there are two "down" servers, and other servers are unavailable,
you reach the situation when a peer cannot be selected.
If you comment > out the two "down" servers, and try a few requests in a row when all > servers are physically unavailable, the first log entry will list all > of the attempted servers, and then for the next 10 seconds (in the > default config) you'll see only the upstream group name and 502 in > $upstream_status, until the servers become available again (see > max_fails/fail_timeout). > > Hope this makes things a little bit clearer. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Apr 15 08:25:30 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Apr 2017 09:25:30 +0100 Subject: Unable to use a GET url-param In-Reply-To: References: Message-ID: <20170415082530.GG10157@daoine.org> On Fri, Apr 14, 2017 at 06:42:22PM +0530, Ajay Garg wrote: Hi there, > When I do the following call :: > > https://username:password at 1.2.3.4?upstream_protocol=http > 2017/04/14 13:03:51 [error] 16039#16039: *1 invalid URL prefix in ":// > 127.0.0.1:5000", client: 182.69.5.226, server: , request: "GET > /cgi-bin/webproc HTTP/1.1", host: "1.2.3.4", referrer: " > https://1.2.3.4/?upstream_protocol=http" > What am I missing? The request in the log line is not the same as the first request provided. What is the output of "curl -v" on the first request? If it is not exactly what you expect, what does the nginx log say for that one request? 
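[Editorial sketch:] a temporary access-log line can show, per request, whether the query parameter was actually present, which is the question raised above. The format name and log path here are invented:

```nginx
# log_format must be declared at http{} level;
# access_log can then be used in the relevant server{} or location{}
log_format argdebug '$time_local "$request" '
                    'arg_upstream_protocol="$arg_upstream_protocol" '
                    'referer="$http_referer"';
access_log /var/log/nginx/argdebug.log argdebug;
```

Comparing this log against the error log should reveal which requests arrive without the parameter.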
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Apr 15 08:32:34 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Apr 2017 09:32:34 +0100 Subject: auth_basic and satisfy allowing all traffic In-Reply-To: <7095024bb656adc5a864b59dcbfbd952.NginxMailingListEnglish@forum.nginx.org> References: <20170414074837.GF10157@daoine.org> <7095024bb656adc5a864b59dcbfbd952.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170415083234.GH10157@daoine.org> On Fri, Apr 14, 2017 at 03:26:41PM -0400, daveyfx wrote: Hi there, > I tested the same server configuration as your example, but the testing VM > produced the same results. The satisfy/allow/deny directives allow > bypassing of the basic_auth. Once those entries have been commented out, > auth works as expected. > > Would there be additional steps involved in determining if this is, in fact, > a bug? In this case, I suggest building a reproducible test case. Assuming that you use "default" config files, then "nginx -V" will show information about what version you are using; "nginx -T" will show the configuration actually being used, and provide "curl -v" or "curl -i" commands that show the unexpected behaviour. nginx logs for the requests should also show what source IP address nginx thinks the requests are coming from. Copy-paste; do not re-type. Make it so that the differences between a working and a failing system are obvious. Good luck with it, f -- Francis Daly francis at daoine.org From ajaygargnsit at gmail.com Sat Apr 15 09:17:26 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sat, 15 Apr 2017 14:47:26 +0530 Subject: Unable to use a GET url-param In-Reply-To: <20170415082530.GG10157@daoine.org> References: <20170415082530.GG10157@daoine.org> Message-ID: Hi Francis. I tried the curl method, and I happened to land on an interesting observation. 
a)
If there is no forwarded-port in listening state (port 5000 in this case)
for the upstream-server, the request suitably returns a 502 error. More
importantly, the $arg_upstream_protocol does seem to be parsed properly ::

#####################################################
2017/04/15 09:08:56 [error] 16039#16039: *40 connect() failed (111:
Connection refused) while connecting to upstream, client: 182.69.5.226,
server: , request: "GET /?upstream_protocol=http HTTP/1.1", upstream: "
http://127.0.0.1:5000/?upstream_protocol=http", host: "1.2.3.4"
#####################################################


b)
However, if the forwarded port is in listening state, I get the usual 500
error ::

#####################################################
2017/04/15 09:08:21 [error] 16039#16039: *37 invalid URL prefix in "://
127.0.0.1:5000", client: 182.69.5.226, server: , request: "GET
/cgi-bin/webproc HTTP/1.1", host: "1.2.3.4", referrer: "
https://1.2.3.4/?upstream_protocol=http"
#####################################################

Note that /cgi-bin/webproc is the default location for the upstream-server.

Also, to re-iterate, following is the proxy-pass directive ::

proxy_pass $arg_upstream_protocol://127.0.0.1:$forwarded_port;

So, the GET-param is being parsed fine (as evident from case a), seems I
need to do some url-rewritings while the requests move to and from between
nginx and upstream-server, right?

On Sat, Apr 15, 2017 at 1:55 PM, Francis Daly wrote:

> On Fri, Apr 14, 2017 at 06:42:22PM +0530, Ajay Garg wrote:
>
> Hi there,
>
> > When I do the following call ::
> >
> > https://username:password at 1.2.3.4?upstream_protocol=http
>
> > 2017/04/14 13:03:51 [error] 16039#16039: *1 invalid URL prefix in "://
> > 127.0.0.1:5000", client: 182.69.5.226, server: , request: "GET
> > /cgi-bin/webproc HTTP/1.1", host: "1.2.3.4", referrer: "
> > https://1.2.3.4/?upstream_protocol=http"
>
> > What am I missing?
> > The request in the log line is not the same as the first request provided. > > What is the output of "curl -v" on the first request? If it is not > exactly what you expect, what does the nginx log say for that one request? > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Regards, Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Apr 15 15:20:22 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 15 Apr 2017 16:20:22 +0100 Subject: Unable to use a GET url-param In-Reply-To: References: <20170415082530.GG10157@daoine.org> Message-ID: <20170415152022.GI10157@daoine.org> On Sat, Apr 15, 2017 at 02:47:26PM +0530, Ajay Garg wrote: Hi there, > If there is no forwarded-port in listening state (port 5000 in this case) > for the upstream-server, the request suitably returns a 502 error. More > importantly, the $arg_upstream_protocol does seem to be parsed properly :: Why do you have $arg_upstream_protocol? What is its purpose? After you answer that, consider: why do you not also have $arg_forwarded_port? If the port to connect to, and the protocol to connect with, are conceptually analogous, they should probably be handled in the same way. (Set them both in maps.) > So, the GET-param is being parsed fine (as evident from case a), seems I > need to do some url-rewritings while the requests move to and from between > nginx and upstream-server, right? One request gets one response. If the response is a http 301, the next request is a whole new request that should be considered separately. If at all possible, do not design things so that you need to edit the upstream response body before sending it to the client. So: what is the output of "curl -v" on the first request? What do you want the output to be, in your design? 
f -- Francis Daly francis at daoine.org From ajaygargnsit at gmail.com Sun Apr 16 04:19:09 2017 From: ajaygargnsit at gmail.com (Ajay Garg) Date: Sun, 16 Apr 2017 09:49:09 +0530 Subject: Unable to use a GET url-param In-Reply-To: <20170415152022.GI10157@daoine.org> References: <20170415082530.GG10157@daoine.org> <20170415152022.GI10157@daoine.org> Message-ID: Hi Francis. Thanks for your continued help. On Sat, Apr 15, 2017 at 8:50 PM, Francis Daly wrote: > On Sat, Apr 15, 2017 at 02:47:26PM +0530, Ajay Garg wrote: > > Hi there, > > > If there is no forwarded-port in listening state (port 5000 in this case) > > for the upstream-server, the request suitably returns a 502 error. More > > importantly, the $arg_upstream_protocol does seem to be parsed properly > :: > > Why do you have $arg_upstream_protocol? What is its purpose? > > After you answer that, consider: why do you not also have > $arg_forwarded_port? > > If the port to connect to, and the protocol to connect with, are > conceptually analogous, they should probably be handled in the same way. > Our architecture is as follows :: Proxy-Server <==> Gateway <==> End-Server Proxy-Server and Gateway are connected via a ssh-reverse-tunnel. The port over which they are connected remains the same, as long as the Gateway is same. So, $forwarded_port can be safely set in the map. Gateway and End-Server communicate via the "other end" of the ssh-reverse-tunnel. The End-Server here might change, and so the communication can either be over http or https. This information is passed as a GET-param, when making the request to the Proxy-Server. So, $arg_upstream_protocol comes into picture. > (Set them both in maps.) > I have already tried this via map $remote_user $forwarded_protocol { ajay $arg_upstream_protocol } .... ..... proxy_pass $forwarded_protocol://127.0.0.1:$forwarded_port; but I get the same results as per my previous emails. 
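[Editorial sketch:] two details may matter in the map attempt quoted above: each map entry needs a terminating semicolon, and a default value avoids producing an empty result when the key does not match. Based on the snippet above:

```nginx
map $remote_user $forwarded_protocol {
    default http;                    # avoid an empty value for other users
    ajay    $arg_upstream_protocol;  # note the terminating semicolon
}
```

Even so, a request that arrives without the query parameter (such as the /cgi-bin/webproc request in the earlier logs) still expands $arg_upstream_protocol to an empty string, reproducing the "invalid URL prefix" error; defaulting on the parameter itself rather than on $remote_user would avoid that.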
> > > So, the GET-param is being parsed fine (as evident from case a), seems I > > need to do some url-rewritings while the requests move to and from > between > > nginx and upstream-server, right? > > One request gets one response. If the response is a http 301, the next > request is a whole new request that should be considered separately. > > If at all possible, do not design things so that you need to edit the > upstream response body before sending it to the client. > > So: what is the output of "curl -v" on the first request? > Following is received :: ##################################################### curl -v -k https://ajay:garg at 1.2.3.4/?upstream_protocol=http * Hostname was NOT found in DNS cache * Trying 1.2.3.4... * Connected to 1.2.3.4 (1.2.3.4) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS Unknown, Unknown (22): * SSLv3, TLS handshake, Client hello (1): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Server hello (2): * SSLv2, Unknown (22): * SSLv3, TLS handshake, CERT (11): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Server key exchange (12): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Server finished (14): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Client key exchange (16): * SSLv2, Unknown (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Finished (20): * SSLv2, Unknown (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv2, Unknown (22): * SSLv3, TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 * Server certificate: * subject: C=IN; ST=Delhi; L=Delhi; O=Home; OU=Home; CN=www.home.com; emailAddress=support at home.com * start date: 2017-04-09 03:53:25 GMT * expire date: 2027-04-07 03:53:25 GMT * issuer: C=IN; ST=Delhi; L=Delhi; O=Home; OU=Home; CN=www.home.com; emailAddress=support at home.com * SSL certificate verify result: self signed certificate (18), continuing anyway. 
* Server auth using Basic with user 'ajay' * SSLv2, Unknown (23): > GET /?upstream_protocol=http HTTP/1.1 > Authorization: Basic abcdefg > User-Agent: curl/7.37.1 > Host: 1.2.3.4 > Accept: */* > * SSLv2, Unknown (23): < HTTP/1.1 200 Ok * Server nginx/1.11.13 is not blacklisted < Server: nginx/1.11.13 < Date: Sun, 16 Apr 2017 03:42:22 GMT < Content-Type: text/html; charset=utf-8 < Content-Length: 75 < Connection: keep-alive < Last-Modified: Sat, 08 Aug 2015 04:40:50 GMT <