From defan at nginx.com  Thu Dec  1 09:13:45 2016
From: defan at nginx.com (Andrei Belov)
Date: Thu, 1 Dec 2016 12:13:45 +0300
Subject: Drupal 7, nginx with ModSecurity - How to resolve that 404 error page please?
In-Reply-To: <59045150-af5f-1d02-9560-07d84a8da026@secit.sk>
References: <59045150-af5f-1d02-9560-07d84a8da026@secit.sk>
Message-ID: <1D4DA085-DE51-4034-A617-15CE3FF274D9@nginx.com>

Hi Matej,

> On 29 Nov 2016, at 11:08, Matej Zuzčák wrote:
>
> Hello all,
>
> I have installed Drupal 7 on latest version of Nginx web server which
> was compiled with support of ModSecurity module. I have activated core
> OWASP rule set. But when I active ModSecurity in my virtual host config
> file for my Drupal 7 web I do not login, register or reset password with
> this error in log:
>
> [error] 11158#0: *1 open() "/var/www/MY_WEBSITE/node" failed (2: No such
> file or directory), client: IP, server: MY_SERVER, request: "POST
> /node?destination=node HTTP/1.1", host: "MY_WEBSITE", referrer:
> "http://MY_WEBSITE/"
>
> And client gets 404 error page.
>
> I applied these practices
> https://geekflare.com/modsecurity-owasp-core-rule-set-nginx/ and
> https://www.netnea.com/cms/2016/11/22/securing-drupal-with-modsecurity-and-the-core-rule-set-crs3/
>
> When I change SecRuleEngine from "On" to "DetectionOnly" result is the
> same, For correct operation I have to "switch off" ModSecurity in
> virtual host config for domain.
>
> So please have you any advices for solving this problem?

What version of ModSecurity are you using with nginx?

ModSecurity 2.x with its "standalone" mode is somewhat outdated.

Currently there is the libmodsecurity (aka ModSecurity 3.x) project [1] and a special nginx connector module [2]
that should be used instead.

Also it is a good idea to report ModSecurity-related issues to the corresponding GitHub projects.

[1] https://github.com/SpiderLabs/ModSecurity/tree/v3/master
[2] https://github.com/SpiderLabs/ModSecurity-nginx/tree/master

From mzuzcak at secit.sk  Thu Dec  1 10:35:26 2016
From: mzuzcak at secit.sk (=?UTF-8?B?TWF0ZWogWnV6xI3DoWs=?=)
Date: Thu, 1 Dec 2016 11:35:26 +0100
Subject: Drupal 7, nginx with ModSecurity - How to resolve that 404 error page please?
In-Reply-To: <1D4DA085-DE51-4034-A617-15CE3FF274D9@nginx.com>
References: <59045150-af5f-1d02-9560-07d84a8da026@secit.sk>
 <1D4DA085-DE51-4034-A617-15CE3FF274D9@nginx.com>
Message-ID:

Hello Andrei,

thank you for your reply.

I found that it is a known bug when ModSecurity works in reverse proxy mode.
So I will try to use the special nginx connector module as you say.

Best Regards

Matej Zuzcak

On 1.12.2016 at 10:13, Andrei Belov wrote:
> Hi Matej,
>
>> On 29 Nov 2016, at 11:08, Matej Zuzčák wrote:
>>
>> Hello all,
>>
>> I have installed Drupal 7 on latest version of Nginx web server which
>> was compiled with support of ModSecurity module. I have activated core
>> OWASP rule set. But when I active ModSecurity in my virtual host config
>> file for my Drupal 7 web I do not login, register or reset password with
>> this error in log:
>>
>> [error] 11158#0: *1 open() "/var/www/MY_WEBSITE/node" failed (2: No such
>> file or directory), client: IP, server: MY_SERVER, request: "POST
>> /node?destination=node HTTP/1.1", host: "MY_WEBSITE", referrer:
>> "http://MY_WEBSITE/"
>>
>> And client gets 404 error page.
>> >> I applied these practices >> https://geekflare.com/modsecurity-owasp-core-rule-set-nginx/ and >> https://www.netnea.com/cms/2016/11/22/securing-drupal-with-modsecurity-and-the-core-rule-set-crs3/ >> >> When I change SecRuleEngine from "On" to "DetectionOnly" result is the >> same, For correct operation I have to "switch off" ModSecurity in >> virtual host config for domain. >> >> So please have you any advices for solving this problem? > What version of ModSecurity are you using with nginx? > > ModSecurity 2.x with its "standalone" mode is somewhat outdated. > > Currently there are libmodsecurity (aka ModSecurity 3.x) project [1] and special nginx connector module [2] > that should be used instead. > > Also it is a good idea to report ModSecurity related issues to the corresponding github projects. > > > [1] https://github.com/SpiderLabs/ModSecurity/tree/v3/master > [2] https://github.com/SpiderLabs/ModSecurity-nginx/tree/master > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Dec 1 14:14:40 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2016 17:14:40 +0300 Subject: How to config the path for the dynamic module of nginx to access? In-Reply-To: References: Message-ID: <20161201141440.GG8196@mdounin.ru> Hello! On Wed, Nov 30, 2016 at 12:15:46PM -0800, Ya-wen Lin wrote: > Hi, > > I've read the related documents and tried googling but the results are all > about how to assign the path to the module's implementation c code for > nginx to compile a module. > > My module will read/write files upon end-user's request, and my question is > how to set the path so that my module will directly read/write in that > directory. > > I found that without extra settings, my module will read/write the folder I > initiate nginx. For example, if I run sudo nginx where pwd is /Users/me/ > html/data, then the dynamic module will read/write files from > /Users/me/html/data. > > What would be the best practice to set the path to access data for my > module? Paths to modules as specified in the load_module directive are resolved from --prefix as set during ./configure, much like many other paths in nginx. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 1 14:17:50 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2016 17:17:50 +0300 Subject: proxy_ignore_client_abort not working on linux In-Reply-To: <2851681161d8f01648a225ec9feae946.NginxMailingListEnglish@forum.nginx.org> References: <2851681161d8f01648a225ec9feae946.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161201141750.GH8196@mdounin.ru> Hello! On Wed, Nov 30, 2016 at 06:31:51PM -0500, badtzhou wrote: > I can't get proxy_ignore_client_abort to work correctly on linux. The > default option is off. But when I proxy a large cacheable file, nginx > doesn't close the backend connection right away when client abort the > request. The backend connection was not closed until the entire file has > been buffered and cached. > Any idea why? Thanks! That's expected. When caching and/or storing a file with proxy_store, nginx always ignores client aborts and finishes loading a file. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Dec 1 16:44:44 2016 From: nginx-forum at forum.nginx.org (nemster) Date: Thu, 01 Dec 2016 11:44:44 -0500 Subject: $connections_waiting ever increasing Message-ID: hi all! 
i see ever increasing numbers for $connections_waiting in the status plugin. nginx06% curl 'localhost/status' Active connections: 480810 server accepts handled requests 53456157 53456157 92142205 Reading: 5 Writing: 8435 Waiting: 471206 nginx06% sudo netstat -alpn | grep "^tcp" | wc -l 28392 that looks fishy... nginx version: nginx/1.11.5 any hints? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271269,271269#msg-271269 From mdounin at mdounin.ru Thu Dec 1 17:19:17 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2016 20:19:17 +0300 Subject: $connections_waiting ever increasing In-Reply-To: References: Message-ID: <20161201171916.GM8196@mdounin.ru> Hello! On Thu, Dec 01, 2016 at 11:44:44AM -0500, nemster wrote: > hi all! > i see ever increasing numbers for $connections_waiting in the status > plugin. > > nginx06% curl 'localhost/status' > Active connections: 480810 > server accepts handled requests > 53456157 53456157 92142205 > Reading: 5 Writing: 8435 Waiting: 471206 > nginx06% sudo netstat -alpn | grep "^tcp" | wc -l > 28392 > > that looks fishy... > > nginx version: nginx/1.11.5 > > any hints? nginx -V, config? -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Dec 1 17:26:50 2016 From: nginx-forum at forum.nginx.org (CeeGeeDev) Date: Thu, 01 Dec 2016 12:26:50 -0500 Subject: Nginx headers to be set at the time of request processing In-Reply-To: <20161129125801.GA2958@daoine.org> References: <20161129125801.GA2958@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Mon, Nov 28, 2016 at 06:56:29PM -0500, CeeGeeDev wrote: > > Hi there, > > Thanks for expanding on what you are doing. > > I confess that I am still not sure what it is; but that's ok -- I > don't need to understand. Some of our team members might say the same, so appreciate your patience. :) So, other web proxies/servers e.g. Apache and others, when you indicate a "set header"-style directive in configuration, apparently (I am told) any custom modules/filters you may have automatically receive as input a merged set of request headers: the "real" ones and the "fake" ones set in the configuration files. So there is often in fact no distinguishing between the two. This allows (often complex) configuration rules to drive request header values (the ones that need to be modified for whatever business logic reasons), both in terms of module business logic and altering upstream view of the request headers. So it seems like in nginx we're only getting one-half of the equation, and there seemed no direct way (without incurring the overhead of a server redirect or something like that) to tell the module about a change to a request header in the config file. This is the gap we're trying to solve: to unify the view of the headers from the perspective of both the upstream and the custom request processing module. If they see different views/values of the headers, the business logic will make the wrong decisions. > > So for the benefit of the community: our plan is to implement a > > custom configuration directive in our http module to allow us to inform > > ourselves about various header overrides made in the nginx > > configuration file that should override various request headers in the > > actual request data structures during request processing in our http > > module code (in our business logic only... will have no effect on > > downstream request header values). 
There seems to be no built-in > > alternative for nginx custom http module developers (apologies if this > > question is better suited to the development list), at least none that > > we can find documented > anywhere. > > location /test/ { > proxy_set_header a a; > fastcgi_param b b; > my_directive_a c c; > my_directive_b d d; > } > > For a request that is handled in that location, three of those > directives could send some "extra" information to the upstream, > if a suitable "*_pass" directive were active. No "*_pass" directive > is active, so those three directives are effectively unused for this > request. > > What do you expect your module to report for a request handled in this > location? Right, so in other words, our dev and actually the provisioning team who discovered this (all more familiar with other web servers and new to nginx) were surprised when the module wasn't getting the modified headers from the stock directives like proxy_set_header. We just kind of assumed that if the header was "altered", we would see it (granted in nginx, there are multiple directives affecting headers, so I don't mean to imply we were/are really sure what would/should happen if multiple directives affected the same header, etc). We're more familiar with other web servers where this is usually only a single "change this header" directive, and it clearly makes sense to alter that header value "globally". The nice thing of this custom directive approach is that we have flexibility: a) both the upstream and the module are told the modified header, b) the upstream is not told but the module is or c) the upstream is told but the module is not. But it would have been sufficient to only support (a), and always set both the standard proxy_set_header / fastcgi_param etc and the my_directive_x to the same value, and then everyone will think the request header has been modified to this new value. That's the most common case anyway. > That may make it clearer what you are trying to achieve. > > (If it does not, feel free to ignore this mail.) > > Thanks, > > f > -- > Francis Daly francis at daoine.org Further, we did discover "more_set_input_headers" from ngx_headers_more (a public custom module) that does something close to what we need, but our legal staff always objects to using various modules for all their fun legal reasons. And we decided we liked the flexibility of the custom directive in the end anyway. So something to consider perhaps as a feature enhancement for nginx? Or at least a documented approach. Appreciate your consideration anyway and certainly if there is a feature somewhere we aren't aware of that might accomplish the same thing, we would be interested. Also, we're the same team interested in a published versioned custom module true pluggable dynamic-linked module (i.e. no re-compile of nginx server code required) versioned nginx SDK interface, which I understand has been discussed but is not on immediate roadmap. So I imagine in a true module SDK, perhaps this type of functionality would make more sense? Anyway, just a thought. 
Thank you very much Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271147,271273#msg-271273 From nginx-forum at forum.nginx.org Thu Dec 1 17:48:32 2016 From: nginx-forum at forum.nginx.org (nemster) Date: Thu, 01 Dec 2016 12:48:32 -0500 Subject: $connections_waiting ever increasing In-Reply-To: <20161201171916.GM8196@mdounin.ru> References: <20161201171916.GM8196@mdounin.ru> Message-ID: TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/bin/nginx --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --user=http --group=http --http-log-path=/var/log/nginx/access.log --error-log-path=stderr --http-client-body-temp-path=/var/lib/nginx/client-body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-mail --with-mail_ssl_module --with-pcre-jit --with-file-aio --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_realip_module --with-http_v2_module --with-http_ssl_module --with-http_stub_status_module --with-http_addition_module --with-http_degradation_module --with-http_flv_module --with-http_mp4_module --with-http_secure_link_module --with-http_sub_module --with-http_geoip_module --with-stream --with-threads --add-module=../naxsi/naxsi_src --add-module=../headers-more-nginx-module --add-module=../nginx-ct --add-module=../lua-nginx-module relevant config details: worker_processes auto; worker_rlimit_nofile 40000; events { worker_connections 40000; use epoll; multi_accept on; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay on; #keepalive_timeout 5; types_hash_max_size 2048; server_tokens off; location = /status { # Turn on stats stub_status on; access_log off; } .... upstream servers and stuff } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271269,271275#msg-271275 From arwin at petabi.com Thu Dec 1 18:08:55 2016 From: arwin at petabi.com (Ya-wen Lin) Date: Thu, 1 Dec 2016 10:08:55 -0800 Subject: How to config the path for the dynamic module of nginx to access? In-Reply-To: <20161201141440.GG8196@mdounin.ru> References: <20161201141440.GG8196@mdounin.ru> Message-ID: Thanks for your reply. I've already have load_module /usr/local/etc/nginx/ngx_http_my_module.so; in nginx.conf But that "ngx_http_my_module.so" will write out some files, I'd like to config the path for those generated files. Currently the path seems to be the working folder I start nginx service. On Thu, Dec 1, 2016 at 6:14 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 30, 2016 at 12:15:46PM -0800, Ya-wen Lin wrote: > > > Hi, > > > > I've read the related documents and tried googling but the results are > all > > about how to assign the path to the module's implementation c code for > > nginx to compile a module. > > > > My module will read/write files upon end-user's request, and my question > is > > how to set the path so that my module will directly read/write in that > > directory. > > > > I found that without extra settings, my module will read/write the > folder I > > initiate nginx. For example, if I run sudo nginx where pwd is /Users/me/ > > html/data, then the dynamic module will read/write files from > > /Users/me/html/data. > > > > What would be the best practice to set the path to access data for my > > module? 
> > Paths to modules as specified in the load_module directive are > resolved from --prefix as set during ./configure, much like many > other paths in nginx. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Dec 1 18:12:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2016 21:12:34 +0300 Subject: $connections_waiting ever increasing In-Reply-To: References: <20161201171916.GM8196@mdounin.ru> Message-ID: <20161201181234.GO8196@mdounin.ru> Hello! On Thu, Dec 01, 2016 at 12:48:32PM -0500, nemster wrote: > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf > --sbin-path=/usr/bin/nginx --pid-path=/run/nginx.pid > --lock-path=/run/lock/nginx.lock --user=http --group=http > --http-log-path=/var/log/nginx/access.log --error-log-path=stderr > --http-client-body-temp-path=/var/lib/nginx/client-body > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-mail > --with-mail_ssl_module --with-pcre-jit --with-file-aio > --with-http_dav_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_realip_module > --with-http_v2_module --with-http_ssl_module --with-http_stub_status_module > --with-http_addition_module --with-http_degradation_module > --with-http_flv_module --with-http_mp4_module --with-http_secure_link_module > --with-http_sub_module --with-http_geoip_module --with-stream --with-threads > --add-module=../naxsi/naxsi_src --add-module=../headers-more-nginx-module > --add-module=../nginx-ct --add-module=../lua-nginx-module Are you able to reproduce the problem without 3rd party modules? -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 1 18:17:36 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2016 21:17:36 +0300 Subject: How to config the path for the dynamic module of nginx to access? In-Reply-To: References: <20161201141440.GG8196@mdounin.ru> Message-ID: <20161201181736.GP8196@mdounin.ru> Hello! On Thu, Dec 01, 2016 at 10:08:55AM -0800, Ya-wen Lin wrote: > Thanks for your reply. > I've already have > load_module /usr/local/etc/nginx/ngx_http_my_module.so; > in nginx.conf > But that "ngx_http_my_module.so" will write out some files, I'd like to > config the path for those generated files. > Currently the path seems to be the working folder I start nginx service. If you use paths in your module you are expected to handle them much like the nginx itself, depending on context: - either relative to prefix, like load_module does; - or relative to conf prefix, like include does; - or relative to document root, like serving static files. First two variants are usually handled at configuration parsing time using the ngx_conf_full_name() function. Last one is usually done with the ngx_http_map_uri_to_path() function. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Dec 1 20:43:44 2016 From: nginx-forum at forum.nginx.org (nemster) Date: Thu, 01 Dec 2016 15:43:44 -0500 Subject: $connections_waiting ever increasing In-Reply-To: <20161201181234.GO8196@mdounin.ru> References: <20161201181234.GO8196@mdounin.ru> Message-ID: i think i solved it, it was due to processes crashing. 
i ran on an non optimized kernel on ec2. booting with the appropriate kernel seems to have solved it Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271269,271283#msg-271283 From nginx at 2xlp.com Thu Dec 1 21:33:02 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Thu, 1 Dec 2016 16:33:02 -0500 Subject: SNI and certs. In-Reply-To: <05a0fe3d-325f-6e35-2725-6d6cd5a1c9a2@greengecko.co.nz> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com> <05a0fe3d-325f-6e35-2725-6d6cd5a1c9a2@greengecko.co.nz> Message-ID: <1E8FD29E-D95E-4202-A816-7BDCD2C91993@2xlp.com> On Nov 30, 2016, at 5:09 PM, steve wrote: > Well, no as I've fixed this. However, if you have a probe for site x on https: and it doesn't exist, then the default https site for that IP address will be returned. Depending on configuration, it may still be attributed to the original search domain. I don't understand why people keep trying to shoot me down on this! This isn't describing a problem with search engines -- you mis-configured nginx, and it is serving content for the default site on both an IP address and domain because you don't have a failover properly configured. Adding certificates to other domains won't solve this, because you don't have a default behavior. Stop serving content on the IP address, and you won't have a problem anymore. Create an initial default server for failover on the ip address, and have it 400 everything. Do it for http and https. For https you can use a self-signed cert; it doesn't matter as you only need to be a valid protocol. # failover http server server { listen 80 default_server; server_name _; location / { return 400 "redirect expected\n"; } } # failover https server server { listen 443 default_server; server_name _; location / { return 400 "redirect expected\n"; } ssl on; # a self-signed cert is fine here } # configured servers server { listen 80; server_name example.com; location / { return 200 "ok\n"; } } server { listen 443; server_name example.com; location / { return 200 "ok\n"; } ssl on; // your cert here } -------------- next part -------------- An HTML attachment was scrubbed... URL: From boxartst at yandex.ru Fri Dec 2 09:28:38 2016 From: boxartst at yandex.ru (boxartst at yandex.ru) Date: Fri, 02 Dec 2016 12:28:38 +0300 Subject: FIPS compliance Message-ID: <10107871480670918@web11h.yandex.ru> Hi! I would like to know if it's possible to enable FIPS_mode in Nginx? Seems like I'd have to change Nginx source code and then compile it against fips enabled openssl. Here is a link to fips userguide https://www.openssl.org/docs/fips/UserGuide-2.0.pdf . Any help would be very much appreciated. Thanks Artem From wangxiaochen0 at gmail.com Fri Dec 2 17:36:11 2016 From: wangxiaochen0 at gmail.com (Xiaochen Wang) Date: Sat, 3 Dec 2016 01:36:11 +0800 Subject: [ANN] Tengine-2.2.0 released Message-ID: Hi forks, We are very excited to announce that Tengine-2.2.0 (development version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine/releases/tag/tengine-2.2.0 or download the tarball directly: http://tengine.taobao.org/download/tengine-2.2.0.tar.gz The full changelog is as follows: *) Security: a segmentation fault might occur in a worker process while writing a specially crafted request body to a temporary file (CVE-2016-4450) (0x7E) *) Feature: the "force_exit" directive. (aholic, chobits) *) Feature: debug pool module which can get memory usage of nginx memory pool. 
(chobits) *) Change: merged HTTP/2 module, SPDY module is removed. (PeterDaveHello) *) Change: official nginx syslog support, tengine syslog support is removed. *) Change: merged changes from nginx-1.8.1. (lhanjian, magicbear, chobits) *) Change: support for EPOLL_EXCLUSIVE. (cfsego) *) Change: export api: ngx_http_upstream_check_upstream_down. (detailyang) *) Change: disable "check_keepalive_requests" feature for TCP health check. (cynron) *) Change: updated reqstatus module. (cfsego) *) Bugfix: remove duplicate code in ngx_http_named_location (innomentats) *) Bugfix: fixed bug of session-sticky module. (detailyang) *) Bugfix: fixed bug of resolve.conf parser. (zuopucuen) *) Bugfix: fixed the compile warning of tfs module. (monadbobo) *) Bugfix: fixed a segmentation fault of dynamic_resolver feature when variable is used in proxy_pass directive. (chobits) *) Bugfix: fixed bug of invalid Set-Cookie value in session-sticky module. (YanagiEiichi) *) Bugfix: fixed bug of uninitialized 'cf' variable in dyups module. (wangfakang) *) Bugfix: fixed bug of duplicate peers in health check module. (FqqCS, taoyuanyuan) *) Bugfix: fixed bug of wrong javascript content-type in concat module. (IYism) See our website for more details: http://tengine.taobao.org Have fun! -------------- next part -------------- An HTML attachment was scrubbed... URL: From fam6837 at gmail.com Sat Dec 3 12:19:44 2016 From: fam6837 at gmail.com (Musta Fa) Date: Sat, 3 Dec 2016 12:19:44 +0000 Subject: location and file extension regex Message-ID: im trying to create some regex just to match before second slash and only. and allow all subfolders: location ~ ^/([^/])+\.(tpl|xml)$ { return 404; } these files are located in root folder, and i dont want them to be downloaded, but other files in subfolder are downloadable. not sure why it not working? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Dec 3 14:07:36 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Dec 2016 14:07:36 +0000 Subject: location and file extension regex In-Reply-To: References: Message-ID: <20161203140736.GB2958@daoine.org> On Sat, Dec 03, 2016 at 12:19:44PM +0000, Musta Fa wrote: Hi there, > im trying to create some regex just to match before second slash and only. > and allow all subfolders: > > location ~ ^/([^/])+\.(tpl|xml)$ { return 404; } > > these files are located in root folder, and i dont want them to be > downloaded, > but other files in subfolder are downloadable. > > not sure why it not working? Why do you think it is not working? What request do you make that you want to match this location, but it does not? What request do you make that you want not to match this location, but it does? 
Test: == server { listen 8888; location ~ ^/([^/])+\.(tpl|xml)$ { return 200 "Did match: $uri\n"; } location / { return 200 "Did not match: $uri\n"; } } == $ curl http://127.0.0.1:8888/abc.xml Did match: /abc.xml $ curl http://127.0.0.1:8888/abc/def.xml Did not match: /abc/def.xml $ curl http://127.0.0.1:8888/abc.txt Did not match: /abc.txt f -- Francis Daly francis at daoine.org From fam6837 at gmail.com Sat Dec 3 14:27:50 2016 From: fam6837 at gmail.com (Musta Fa) Date: Sat, 3 Dec 2016 14:27:50 +0000 Subject: location and file extension regex In-Reply-To: <20161203140736.GB2958@daoine.org> References: <20161203140736.GB2958@daoine.org> Message-ID: while i request files http://domain.com/config.xml or http://domain.com/include/config.xml both files downloaded, which is not good, simple ~* /\.(tpl|xml)$ {return 404;} works perfect but blocks files everywhere. On Sat, Dec 3, 2016 at 2:07 PM, Francis Daly wrote: > On Sat, Dec 03, 2016 at 12:19:44PM +0000, Musta Fa wrote: > > Hi there, > > > im trying to create some regex just to match before second slash and > only. > > and allow all subfolders: > > > > location ~ ^/([^/])+\.(tpl|xml)$ { return 404; } > > > > these files are located in root folder, and i dont want them to be > > downloaded, > > but other files in subfolder are downloadable. > > > > not sure why it not working? > > Why do you think it is not working? > > What request do you make that you want to match this location, but it > does not? > > What request do you make that you want not to match this location, > but it does? > > Test: > > == > server { > listen 8888; > location ~ ^/([^/])+\.(tpl|xml)$ { return 200 "Did match: $uri\n"; } > location / { return 200 "Did not match: $uri\n"; } > } > == > > $ curl http://127.0.0.1:8888/abc.xml > Did match: /abc.xml > $ curl http://127.0.0.1:8888/abc/def.xml > Did not match: /abc/def.xml > $ curl http://127.0.0.1:8888/abc.txt > Did not match: /abc.txt > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Dec 3 14:54:48 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Dec 2016 14:54:48 +0000 Subject: location and file extension regex In-Reply-To: References: <20161203140736.GB2958@daoine.org> Message-ID: <20161203145448.GC2958@daoine.org> On Sat, Dec 03, 2016 at 02:27:50PM +0000, Musta Fa wrote: Hi there, > while i request files > http://domain.com/config.xml > or > http://domain.com/include/config.xml > both files downloaded, which is not good, When I do it: $ curl http://127.0.0.1:8888/config.xml Did match: /config.xml $ curl http://127.0.0.1:8888/include/config.xml Did not match: /include/config.xml The first matches (and so is blocked), and the second does not match (and so it allowed). I think that that is what you want? Either you are not using the configuration you think you are using; or you have other configuration that you are not showing. f -- Francis Daly francis at daoine.org From fam6837 at gmail.com Sat Dec 3 15:30:15 2016 From: fam6837 at gmail.com (Musta Fa) Date: Sat, 3 Dec 2016 15:30:15 +0000 Subject: location and file extension regex In-Reply-To: <20161203145448.GC2958@daoine.org> References: <20161203140736.GB2958@daoine.org> <20161203145448.GC2958@daoine.org> Message-ID: oh this is insane now i open chrome incognito window and it works!! 
How is that even possible? How can the browser still be downloading this
file, while a new incognito session obeys the nginx rules?

On Sat, Dec 3, 2016 at 2:54 PM, Francis Daly wrote:

> On Sat, Dec 03, 2016 at 02:27:50PM +0000, Musta Fa wrote:
>
> Hi there,
>
> > while i request files
> > http://domain.com/config.xml
> > or
> > http://domain.com/include/config.xml
> > both files downloaded, which is not good,
>
> When I do it:
>
> $ curl http://127.0.0.1:8888/config.xml
> Did match: /config.xml
> $ curl http://127.0.0.1:8888/include/config.xml
> Did not match: /include/config.xml
>
> The first matches (and so is blocked), and the second does not match
> (and so it allowed). I think that that is what you want?
>
> Either you are not using the configuration you think you are using;
> or you have other configuration that you are not showing.
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r at roze.lv  Sun Dec  4 16:03:23 2016
From: r at roze.lv (Reinis Rozitis)
Date: Sun, 4 Dec 2016 18:03:23 +0200
Subject: SNI and certs.
In-Reply-To: <1E8FD29E-D95E-4202-A816-7BDCD2C91993@2xlp.com>
References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz>
 <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com>
 <05a0fe3d-325f-6e35-2725-6d6cd5a1c9a2@greengecko.co.nz>
 <1E8FD29E-D95E-4202-A816-7BDCD2C91993@2xlp.com>
Message-ID: <005901d24e47$f0f52d50$d2df87f0$@roze.lv>

> Create an initial default server for failover on the ip address, and have it 400 everything. Do it for http and https. For https you can use a self-signed cert; it doesn't matter as you only need to be a valid protocol.

> # failover http server
> # failover https server

You don't even need two server blocks, a single one is enough:

server {
    listen 80 default_server;
    listen 443 ssl default_server;
}

With whatever logic you want - either redirect to your preferred/main domain or show some generic page or error code (if you don't add anything nginx will use the default root and display the welcome page).

In case of https I don't even think it makes sense to provide any certificates (even self-signed).
Without those the connection will/should be just terminated because of the peer not providing any certificates, and self-signed certs shouldn't be validated (otherwise there is a major flaw) by clients/crawlers either.

rr

From nginx at 2xlp.com  Sun Dec  4 21:12:33 2016
From: nginx at 2xlp.com (Jonathan Vanasco)
Date: Sun, 4 Dec 2016 16:12:33 -0500
Subject: SNI and certs.
In-Reply-To: <005901d24e47$f0f52d50$d2df87f0$@roze.lv>
References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz>
 <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com>
 <05a0fe3d-325f-6e35-2725-6d6cd5a1c9a2@greengecko.co.nz>
 <005901d24e47$f0f52d50$d2df87f0$@roze.lv>
Message-ID:

On Dec 4, 2016, at 11:03 AM, Reinis Rozitis wrote:

> In case of https I don't even think it makes sense to provide any certificates (even self-signed).
> Without those the connection will/should be just terminated because of the peer not providing any certificates, and self-signed certs shouldn't be validated (otherwise there is a major flaw) by clients/crawlers either.
I prefer a self-signed (or other somewhat valid) cert because it lets me test the configuration easier (ie, it's broken in the correct way), and most automated monitoring services can be configured to accept it to test a "pass". From steven.hartland at multiplay.co.uk Sun Dec 4 21:39:59 2016 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Sun, 4 Dec 2016 21:39:59 +0000 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail Message-ID: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> We've used nginx for years and never had an issue with nginx upgrade until today where the upgrade command ran but almost instantly after the new process exited. /usr/local/etc/rc.d/nginx upgrade Performing sanity check on nginx configuration: nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful Upgrading nginx binary: Stopping old binary: In the default nginx log we had: 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: Address already in use) nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: Address already in use) nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: Address already in use) nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: Address already in use) nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: Address already in use) nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) 2016/12/04 21:18:22 [emerg] 25435#0: still could not bind() nginx: [emerg] still could not bind() Running the start just after resulted in a running version but is obviously unexpected to have upgrade result in a failure. I believe the change to add a localhost bind to the server in question was relatively recent so I suspect it has something to do with that. 
The config for this is simply: server { listen 127.0.0.1:81; server_name localhost; location /status { stub_status; } } The upgrade in this case was: nginx: 1.10.1_1,2 -> 1.10.2_2,2 Now this server is running under FreeBSD in a jail (10.2-RELEASE) and it has 127.0.0.1 available yet it seems nginx has incorrectly bound the address: netstat -na | grep LIST | grep 81 tcp4 0 0 10.10.96.146.81 *.* LISTEN sockstat | grep :81 www nginx 25666 25 tcp4 10.10.96.146:81 *:* www nginx 25665 25 tcp4 10.10.96.146:81 *:* www nginx 25664 25 tcp4 10.10.96.146:81 *:* www nginx 25663 25 tcp4 10.10.96.146:81 *:* www nginx 25662 25 tcp4 10.10.96.146:81 *:* www nginx 25661 25 tcp4 10.10.96.146:81 *:* www nginx 25660 25 tcp4 10.10.96.146:81 *:* www nginx 25659 25 tcp4 10.10.96.146:81 *:* www nginx 25658 25 tcp4 10.10.96.146:81 *:* www nginx 25657 25 tcp4 10.10.96.146:81 *:* www nginx 25656 25 tcp4 10.10.96.146:81 *:* www nginx 25655 25 tcp4 10.10.96.146:81 *:* www nginx 25654 25 tcp4 10.10.96.146:81 *:* www nginx 25653 25 tcp4 10.10.96.146:81 *:* www nginx 25652 25 tcp4 10.10.96.146:81 *:* www nginx 25651 25 tcp4 10.10.96.146:81 *:* root nginx 25650 25 tcp4 10.10.96.146:81 *:* ifconfig lo0 lo0: flags=8049 metric 0 mtu 16384 options=600003 inet 127.0.0.1 netmask 0xffffffff So it looks like nginx is incorrectly binding which is resulting in the issue with upgrade. Anyone seen this before? I've confirmed nginx is responding correctly on 127.0.0.1: lwp-request http://127.0.0.1:81/status Active connections: 1077 server accepts handled requests 31516 31516 90387 Reading: 0 Writing: 5 Waiting: 1071 Regards Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 5 13:27:07 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Dec 2016 16:27:07 +0300 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> References: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> Message-ID: <20161205132706.GF18639@mdounin.ru> Hello! On Sun, Dec 04, 2016 at 09:39:59PM +0000, Steven Hartland wrote: > We've used nginx for years and never had an issue with nginx upgrade > until today where the upgrade command ran but almost instantly after the > new process exited. 
> > /usr/local/etc/rc.d/nginx upgrade > Performing sanity check on nginx configuration: > nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok > nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful > Upgrading nginx binary: > Stopping old binary: > > In the default nginx log we had: > 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: > Address already in use) > nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) > 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: > Address already in use) > nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) > 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: > Address already in use) > nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) > 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: > Address already in use) > nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) > 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: > Address already in use) > nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) > 2016/12/04 21:18:22 [emerg] 25435#0: still could not bind() > nginx: [emerg] still could not bind() > > Running the start just after resulted in a running version but is > obviously unexpected to have upgrade result in a failure. > > I believe the change to add a localhost bind to the server in question > was relatively recent so I suspect it has something to do with that. > > The config for this is simply: > server { > listen 127.0.0.1:81; > server_name localhost; > > location /status { > stub_status; > } > } > > The upgrade in this case was: > nginx: 1.10.1_1,2 -> 1.10.2_2,2 > > Now this server is running under FreeBSD in a jail (10.2-RELEASE) and it > has 127.0.0.1 available yet it seems nginx has incorrectly bound the > address: > netstat -na | grep LIST | grep 81 > tcp4 0 0 10.10.96.146.81 *.* LISTEN In a FreeBSD jail with a single IP address any listening address is implicitly converted to the jail address. As a result, if you write in config "127.0.0.1" - upgrade won't work, as it will see inherited socket listening on the jail address (10.10.96.146 in your case) and will try to create a new listening socket with the address from the configuration and this will fail. There are two possible solutions for this problem: - configure listening on the jail IP address to avoid this implicit conversion; - configure listening on "*" and use multiple addresses in the jail. In both cases there will be no implicit conversion and as a result everything will work correctly. -- Maxim Dounin http://nginx.org/ From steven.hartland at multiplay.co.uk Mon Dec 5 14:40:27 2016 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Mon, 5 Dec 2016 14:40:27 +0000 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <20161205132706.GF18639@mdounin.ru> References: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> <20161205132706.GF18639@mdounin.ru> Message-ID: On 05/12/2016 13:27, Maxim Dounin wrote: > Hello! > > On Sun, Dec 04, 2016 at 09:39:59PM +0000, Steven Hartland wrote: > >> We've used nginx for years and never had an issue with nginx upgrade >> until today where the upgrade command ran but almost instantly after the >> new process exited. 
>> >> /usr/local/etc/rc.d/nginx upgrade >> Performing sanity check on nginx configuration: >> nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok >> nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful >> Upgrading nginx binary: >> Stopping old binary: >> >> In the default nginx log we had: >> 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: >> Address already in use) >> nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) >> 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: >> Address already in use) >> nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) >> 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: >> Address already in use) >> nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) >> 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: >> Address already in use) >> nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) >> 2016/12/04 21:18:22 [emerg] 25435#0: bind() to 127.0.0.1:81 failed (48: >> Address already in use) >> nginx: [emerg] bind() to 127.0.0.1:81 failed (48: Address already in use) >> 2016/12/04 21:18:22 [emerg] 25435#0: still could not bind() >> nginx: [emerg] still could not bind() >> >> Running the start just after resulted in a running version but is >> obviously unexpected to have upgrade result in a failure. >> >> I believe the change to add a localhost bind to the server in question >> was relatively recent so I suspect it has something to do with that. >> >> The config for this is simply: >> server { >> listen 127.0.0.1:81; >> server_name localhost; >> >> location /status { >> stub_status; >> } >> } >> >> The upgrade in this case was: >> nginx: 1.10.1_1,2 -> 1.10.2_2,2 >> >> Now this server is running under FreeBSD in a jail (10.2-RELEASE) and it >> has 127.0.0.1 available yet it seems nginx has incorrectly bound the >> address: >> netstat -na | grep LIST | grep 81 >> tcp4 0 0 10.10.96.146.81 *.* LISTEN > In a FreeBSD jail with a single IP address any listening address > is implicitly converted to the jail address. As a result, if you > write in config "127.0.0.1" - upgrade won't work, as it will see > inherited socket listening on the jail address (10.10.96.146 in > your case) and will try to create a new listening socket with the > address from the configuration and this will fail. Thanks for the response Maxim. In our case we don't have a single IP in the jail we have 4 addresses: 1 x localhost address (127.0.0.1) 2 x external 1 x private address (10.10.96.146) We have a number of binds the externals are just port binds the internal a localhost e.g. listen 443 default_server accept_filter=httpready ssl; listen 80 default_server accept_filter=httpready; ... listen 80; listen 443 ssl; ... listen 127.0.0.1:81; We're expecting the none IP specified listens to bind to * (this is what happens) But the "listen 127.0.0.1:81" results in "10.10.96.146:81" instead. Given your description I would only expect this 127.0.0.1 wasn't present in the jail and 10.10.96.146 was the only IP available. Did I miss-understand your description? > There are two possible solutions for this problem: > > - configure listening on the jail IP address to avoid this > implicit conversion; As above I'm not sure I follow you correctly as 127.0.0.1 is one of the IP's available in the jail. > - configure listening on "*" and use multiple addresses in the jail. 
Unfortunately this is something we want to explicitly prevent with this bind, its an internal service only. > In both cases there will be no implicit conversion and as a result > everything will work correctly. Regards Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 5 17:12:06 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Dec 2016 20:12:06 +0300 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: References: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> <20161205132706.GF18639@mdounin.ru> Message-ID: <20161205171206.GJ18639@mdounin.ru> Hello! On Mon, Dec 05, 2016 at 02:40:27PM +0000, Steven Hartland wrote: > On 05/12/2016 13:27, Maxim Dounin wrote: > > Hello! > > > > On Sun, Dec 04, 2016 at 09:39:59PM +0000, Steven Hartland wrote: [...] > >> I believe the change to add a localhost bind to the server in question > >> was relatively recent so I suspect it has something to do with that. > >> > >> The config for this is simply: > >> server { > >> listen 127.0.0.1:81; > >> server_name localhost; > >> > >> location /status { > >> stub_status; > >> } > >> } > >> > >> The upgrade in this case was: > >> nginx: 1.10.1_1,2 -> 1.10.2_2,2 > >> > >> Now this server is running under FreeBSD in a jail (10.2-RELEASE) and it > >> has 127.0.0.1 available yet it seems nginx has incorrectly bound the > >> address: > >> netstat -na | grep LIST | grep 81 > >> tcp4 0 0 10.10.96.146.81 *.* LISTEN > > In a FreeBSD jail with a single IP address any listening address > > is implicitly converted to the jail address. As a result, if you > > write in config "127.0.0.1" - upgrade won't work, as it will see > > inherited socket listening on the jail address (10.10.96.146 in > > your case) and will try to create a new listening socket with the > > address from the configuration and this will fail. > > Thanks for the response Maxim. > > In our case we don't have a single IP in the jail we have 4 addresses: > 1 x localhost address (127.0.0.1) > 2 x external > 1 x private address (10.10.96.146) > > We have a number of binds the externals are just port binds the internal > a localhost e.g. > listen 443 default_server accept_filter=httpready ssl; > listen 80 default_server accept_filter=httpready; > ... > listen 80; > listen 443 ssl; > ... > listen 127.0.0.1:81; > > We're expecting the none IP specified listens to bind to * (this is what > happens) > > But the "listen 127.0.0.1:81" results in "10.10.96.146:81" instead. > > Given your description I would only expect this 127.0.0.1 wasn't present > in the jail and 10.10.96.146 was the only IP available. > > Did I miss-understand your description? Given that the real local address of the listening socket as shown by netstat is 10.10.96.146, it means that the socket was created when there were no explicit 127.0.0.1 in the jail. And, given that you are able to connect to it via "lwp-request http://127.0.0.1:81/status", it looks like that 127.0.0.1 is still not in the jail, but mapped to 10.10.96.146 instead. Note that the fact that you can use 127.0.0.1 in a jail doesn't mean that it is a real address available. Normally, 127.0.0.1 will be implicitly converted to the main IP of the jail, and most utilities won't notice. (Note well that since there is no real 127.0.0.1 in the jail, it doesn't provide any additional isolation compared to the jail IP address. 
That is, a service which is listening on 127.0.0.1 is in fact listening on 10.10.96.146, and it is reachable from anywhere, not just the jail itself.) > > There are two possible solutions for this problem: > > > > - configure listening on the jail IP address to avoid this > > implicit conversion; > As above I'm not sure I follow you correctly as 127.0.0.1 is one of the > IP's available in the jail. See above, looks like it's not, and it is implicitly converted to 10.10.96.146 instead. -- Maxim Dounin http://nginx.org/ From steven.hartland at multiplay.co.uk Mon Dec 5 18:39:14 2016 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Mon, 5 Dec 2016 18:39:14 +0000 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <20161205171206.GJ18639@mdounin.ru> References: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> <20161205132706.GF18639@mdounin.ru> <20161205171206.GJ18639@mdounin.ru> Message-ID: On 05/12/2016 17:12, Maxim Dounin wrote: > Hello! > > On Mon, Dec 05, 2016 at 02:40:27PM +0000, Steven Hartland wrote: snip... > Given that the real local address of the listening socket as shown > by netstat is 10.10.96.146, it means that the socket was created > when there were no explicit 127.0.0.1 in the jail. This didn't appear to be the case as nginx was restarted after the failure of upgrade and currently shows: netstat -na | grep LIST tcp4 0 0 10.10.96.146.81 *.* LISTEN The jail does indeed have an explicit 127.0.0.1 as reported by ifconfig from within said jail. ifconfig lo0 lo0: flags=8049 metric 0 mtu 16384 options=600003 inet 127.0.0.1 netmask 0xffffffff /etc/jail.conf includes: jailXYZ { path = "/data/jails/XYZ"; ip4.addr = "10.10.96.146"; ip4.addr += "vlan96|A.B.C.D"; ip4.addr += "lo0|127.0.0.1"; } This is what we see when 127.0.0.1 is not exposed to the jail, which is where I would expect the behaviour you describe: ifconfig lo0 lo0: flags=8049 metric 0 mtu 16384 options=600003 groups: lo Digging into to source of jails I found the offending code: ia0.s_addr = ntohl(ia->s_addr); if (ia0.s_addr == INADDR_LOOPBACK) { ia->s_addr = pr->pr_ip4[0].s_addr; mtx_unlock(&pr->pr_mtx); return (0); } ... if (ntohl(ia->s_addr) == INADDR_LOOPBACK) { ia->s_addr = pr->pr_ip4[0].s_addr; mtx_unlock(&pr->pr_mtx); return (0); } This uses the first IP of the jail as loopback even if there is an address which explicitly matches. So the workaround would be to change the order of the IP's in our jail config making 127.0.0.1 the first IP. However this doesn't seem to be documented in jail man page so quite possibly needs fixing. Thanks for pointing me in the right direction. I'll talk to the jail / net guys and get that fixed. At the very least it should be clearly documented in JAIL(8) but ideally it should do the right thing when the jail has an address which matches INADDR_LOOPBACK. Regards Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Dec 5 19:40:02 2016 From: nginx-forum at forum.nginx.org (fguerraz) Date: Mon, 05 Dec 2016 14:40:02 -0500 Subject: keepalive ignored for some upstreams Message-ID: <272c4e8329db566c007ce0b2e8284979.NginxMailingListEnglish@forum.nginx.org> Hi, I use an nginix server with multiple servers each with a matching upstream. They are all in individual files in sites-enabled (ubuntu server). 
Here is an example of such a config file: upstream xxxxx.com { server xxxxx.com:12908; keepalive 65; } server { listen 12908; resolver x.y.z.a x.y.z.b; server_name xxxxx.com; access_log /var/log/nginx/transparent_proxy-access.log combined; error_log /var/log/nginx/transparent_proxy-error.log; location = /health { return 200; access_log off; } location / { proxy_pass http://$http_host$request_uri; proxy_pass_request_headers on; proxy_read_timeout 65; proxy_connect_timeout 65; proxy_http_version 1.1; proxy_set_header Connection ""; } } for this particular "server", if I set debug in the error_log, I never see "init keepalive peer" or anything of sort indicating that keepalive is being taken into account and I *do* see that on other "servers". The other configuration files for different servers only differ by server_name, server and upstream names and by which port they use, the config files are auto-generated so I am confident that this is the only difference. I have used tcpdump to check what nginx was doing and it's closing the connection after each response from the upstream server. Do you see any reason why keepalive would be ignored for this configuration? The only difference with the others that work being that it uses a non standard port. nginx version: nginx/1.10.0 (Ubuntu), I also tried the latest development ppa. Best regards, Fran?ois. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271359,271359#msg-271359 From yrcompute at gmail.com Tue Dec 6 05:01:09 2016 From: yrcompute at gmail.com (Yong Li) Date: Mon, 5 Dec 2016 21:01:09 -0800 Subject: POST method with x-accel-redirect Message-ID: Hi Nginx guru, I am a new nginx user working on implementing a protected upload and download service based on nginx. With an older version 1.4.6 I was able to make it work. But since we upgraded it to 1.10.1 it seems the POST method is changed to GET during the redirect. I saw Maxim had a post ( https://forum.nginx.org/read.php?2,263661,264440#msg-264440) describing this issue. But I am still not sure what does it mean by "x-accel-redirect to a named location". I tried wrapping the *file_server* section below into a named location, and *proxy_pass* to this named location inside my x-accel-redirect section (the second location block below), but it does not work for me. Could you give some example or instruction how to achieve this in my nginx config file? Or should I apply a pitch to fix this issue formally? upstream file_server { server ***.***.***.***:8888; server ***.***.***.***:8888; } server { .................... # This accepts file upload/download requests and passes the requests to my tomcat web server that handles authentication location /api/v1/files { proxy_pass http://tomcat; proxy_cache backcache; } # This is the internal uri path that the tomcat web server redirects to after authentication location ~ ^/(all|groups|channels|users) { internal; proxy_pass http://file_server; # This is my file server. I tried make this a named location, but didn't not work. } ...................... } Thanks a lot and best regards, - Yong Y&R Computing -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 6 17:43:50 2016 From: nginx-forum at forum.nginx.org (ahamilton9) Date: Tue, 06 Dec 2016 12:43:50 -0500 Subject: Fastcgi_pass, resolver, and validating functionality. 
Message-ID: I have an nginx server (1.10.1) configured that sends requests for PHP files to our PHP tier by directing that traffic via fastcgi_pass to our PHP ELB (AWS's Elastic Load Balancer). We occasionally get outages that a hard stop/start of nginx solves, but are having trouble narrowing the issue down. We believe it is related to DNS resolution, which brings me here. I have a few questions: 1) Does using a variable in fastcgi_pass actually allow the resolver to run, or is it just for proxy_pass as I've seen in 90% of examples? 2) Is this configuration valid? It WORKS, but the resolution doesn't seem to do anything, or I'm not sure how to check that it's updating. The server's resolv.conf points to the same DNS server and uses a search domain so "php:9000" does work: http { resolver x.x.x.x valid=10s; server { set $phproute "php:9000"; location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass $phproute; } } } 3) How do I validate that the resolver is working properly outside of waiting for an outage again? Is there a way to get the current cached DNS entries from nginx to compare? I found a tcpdump command, but I'm not really sure what I'm looking at, and it usually gives me no data. Is there a better method? Thanks in advance. I'm really at a loss here. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271373,271373#msg-271373 From francis at daoine.org Tue Dec 6 20:23:10 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 6 Dec 2016 20:23:10 +0000 Subject: Fastcgi_pass, resolver, and validating functionality. In-Reply-To: References: Message-ID: <20161206202310.GD2958@daoine.org> On Tue, Dec 06, 2016 at 12:43:50PM -0500, ahamilton9 wrote: Hi there, > 1) Does using a variable in fastcgi_pass actually allow the resolver to run, > or is it just for proxy_pass as I've seen in 90% of examples? Observation suggests "yes". > 2) Is this configuration valid? It WORKS, but the resolution doesn't seem to > do anything, or I'm not sure how to check that it's updating. The server's > resolv.conf points to the same DNS server and uses a search domain so > "php:9000" does work: tcpdump -nn -i any host x.x.x.x and port 53 It might be easier if you pick an address that is *not* otherwise used, but this should be good enough to show that it works as you want. When I repeatedly try to access http://127.0.0.1/fake.php, I get my response (Bad Gateway, in this case, but that does not matter). The tcpdump shows queries for A? php. and AAAA? php., followed by a response of 1/0/0 A 127.0.0.12 (which is what my resolver is set to return). I see queries at 19:55:07, 19:55:18, and 19:55:29, and not in between. So nginx makes a dns request, does not make any more for 10 seconds, and then makes one the next time a request comes in. > 3) How do I validate that the resolver is working properly outside of > waiting for an outage again? Use a name-resolving tool of your choice -- dig, host, nslookup, others exist no doubt -- to query your resolver yourself. Note that because you say valid=10s, nginx will only issue a new query more than 10s after the previous successful query. So if the resolver has a problem, nginx will notice within 10 seconds and start returning HTTP/1.1 500 Internal Server Error until the resolver responds again. 
The "valid" time is a balance between "how short should I use the old value before learning that there is a new one", and "how long should I use the old value while the resolver breaks and recovers". > Is there a way to get the current cached DNS > entries from nginx to compare? I found a tcpdump command, but I'm not really > sure what I'm looking at, and it usually gives me no data. Is there a better > method? I don't know of a way to get the nginx value, other than to "tcpdump" and see what ip address nginx is trying to access on port 9000 -- that will be the value that it is currently using -- or checking the debug log for the "name was resolved to" or "connect to" lines. Or check the error log for the "connect" line if the connect failed. All of those are indirect. tcpdump is probably easiest; but you could use a "debug_connection 127.0.0.100;" and then make your test to that address, hoping that most other people will not, and then look for the debug log lines after each time you test. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Dec 6 20:46:03 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 6 Dec 2016 20:46:03 +0000 Subject: POST method with x-accel-redirect In-Reply-To: References: Message-ID: <20161206204603.GE2958@daoine.org> On Mon, Dec 05, 2016 at 09:01:09PM -0800, Yong Li wrote: Hi there, > But since we upgraded it to 1.10.1 it seems the POST method > is changed to GET during the redirect. I saw Maxim had a post ( > https://forum.nginx.org/read.php?2,263661,264440#msg-264440) describing > this issue. But I am still not sure what does it mean by "x-accel-redirect > to a named location". > location /api/v1/files { > proxy_pass http://tomcat; > } > location ~ ^/(all|groups|channels|users) { > internal; > proxy_pass http://file_server; > } If I POST to /api/v1/files/two, nginx will proxy_pass a POST to http://tomcat/api/v1/files/two Tomcat will return HTTP 200 with a header X-Accel-Redirect with something like /all/two. Then your nginx will proxy_pass a GET to http://file_server/all/two. I think that the suggestion is that if your tomcat instead returns HTTP 200 with a header X-Accel-Redirect of @fileserver, then in your (new) "location @fileserver", a proxy_pass would be a POST to http://file_server/api/v1/files/two. Does that help in the design of your solution? f -- Francis Daly francis at daoine.org From yrcompute at gmail.com Tue Dec 6 21:12:36 2016 From: yrcompute at gmail.com (Yong Li) Date: Tue, 6 Dec 2016 13:12:36 -0800 Subject: POST method with x-accel-redirect In-Reply-To: <20161206204603.GE2958@daoine.org> References: <20161206204603.GE2958@daoine.org> Message-ID: Hi Francis, Yeah, I think this is what I need! Thanks a lot for your explanation, which definitely helps my design. Best, - Yong - Yong Y&R Computing On Tue, Dec 6, 2016 at 12:46 PM, Francis Daly wrote: > On Mon, Dec 05, 2016 at 09:01:09PM -0800, Yong Li wrote: > > Hi there, > > > But since we upgraded it to 1.10.1 it seems the POST method > > is changed to GET during the redirect. I saw Maxim had a post ( > > https://forum.nginx.org/read.php?2,263661,264440#msg-264440) describing > > this issue. But I am still not sure what does it mean by > "x-accel-redirect > > to a named location". 
> > > > location /api/v1/files { > > proxy_pass http://tomcat; > > } > > location ~ ^/(all|groups|channels|users) { > > internal; > > proxy_pass http://file_server; > > } > > If I POST to /api/v1/files/two, nginx will proxy_pass a POST to > http://tomcat/api/v1/files/two > > Tomcat will return HTTP 200 with a header X-Accel-Redirect with > something like /all/two. Then your nginx will proxy_pass a GET to > http://file_server/all/two. > > > I think that the suggestion is that if your tomcat instead returns > HTTP 200 with a header X-Accel-Redirect of @fileserver, then in > your (new) "location @fileserver", a proxy_pass would be a POST to > http://file_server/api/v1/files/two. > > > Does that help in the design of your solution? > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Dec 7 07:02:32 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 07 Dec 2016 02:02:32 -0500 Subject: nginx error : cache file has too long header Message-ID: <88cefabdb047782a6aa6c7de1ffd61b7.NginxMailingListEnglish@forum.nginx.org> Hi , We are getting too many Cache file has too long header errors in nginx error.log file. Some servers are getting such error as many as 273 times a day. Below is sample nginx file in our environment : worker_processes auto; events { worker_connections 4096; use epoll; multi_accept on; } worker_rlimit_nofile 100001; http { include mime.types; default_type video/mp4; proxy_buffering on; proxy_buffer_size 4096k; proxy_buffers 5 4096k; sendfile on; keepalive_timeout 30; tcp_nodelay on; tcp_nopush on; reset_timedout_connection on; gzip off; server_tokens off; log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] ' '$upstream_cache_status ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" $request_time' ' Patna sptnacdnds01.cdnsrv.jio.com sptnacdnds01 DS'; proxy_cache_path /cache/11452 keys_zone=a11452:8m levels=1:2 max_size=50g inactive=10m; proxy_cache_path /cache/11506 keys_zone=a11506:8m levels=1:2 max_size=50g inactive=10m; proxy_cache_path /cache/12151 keys_zone=a12151:200m levels=1:2 max_size=100g inactive=10m; proxy_cache_path /cache/12053 keys_zone=a12053:200m levels=1:2 max_size=50g inactive=10m; proxy_cache_path /cache/11502 keys_zone=a11502:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/11503 keys_zone=a11503:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/11504 keys_zone=a11504:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/11505 keys_zone=a11505:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/11507 keys_zone=a11507:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12202 keys_zone=a12202:200m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12201 keys_zone=a12201:200m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12003 keys_zone=a12003:200m levels=1:2 max_size=700g inactive=10d; proxy_cache_path /cache/12008 keys_zone=a12008:300m levels=1:2 max_size=20g inactive=10d; proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12152 keys_zone=a12152:200m levels=1:2 max_size=700g inactive=10d; proxy_cache_path /cache/12005 keys_zone=a12005:200m levels=1:2 max_size=100g 
inactive=10d; proxy_cache_path /cache/12153 keys_zone=a12153:200m levels=1:2 max_size=100g inactive=10d; proxy_cache_path /cache/12006 keys_zone=a12006:200m levels=1:2 max_size=100g inactive=10d; proxy_cache_path /cache/11501 keys_zone=a11501:50m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12054 keys_zone=a12054:200m levels=1:2 max_size=500g inactive=3d; proxy_cache_path /cache/12251 keys_zone=a12251:200m levels=1:2 max_size=200g inactive=15m; proxy_cache_path /cache/12252 keys_zone=a12252:200m levels=1:2 max_size=200g inactive=10d; proxy_cache_path /cache/12301 keys_zone=a12301:200m levels=1:2 max_size=200g inactive=10d; Please suggest some solution. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271382#msg-271382 From arut at nginx.com Wed Dec 7 07:28:18 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2016 10:28:18 +0300 Subject: nginx error : cache file has too long header In-Reply-To: <88cefabdb047782a6aa6c7de1ffd61b7.NginxMailingListEnglish@forum.nginx.org> References: <88cefabdb047782a6aa6c7de1ffd61b7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161207072818.GC4627@Romans-MacBook-Air.local> Hi, On Wed, Dec 07, 2016 at 02:02:32AM -0500, omkar_jadhav_20 wrote: > Hi , > > We are getting too many Cache file has too long header errors in nginx > error.log file. Some servers are getting such error as many as 273 times a > day. You should increase proxy_buffer_size. > Below is sample nginx file in our environment : > worker_processes auto; > events { > worker_connections 4096; > use epoll; > multi_accept on; > } > worker_rlimit_nofile 100001; > http { > include mime.types; > default_type video/mp4; > proxy_buffering on; > proxy_buffer_size 4096k; > proxy_buffers 5 4096k; > sendfile on; > keepalive_timeout 30; > tcp_nodelay on; > tcp_nopush on; > reset_timedout_connection on; > gzip off; > server_tokens off; > log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] ' > '$upstream_cache_status ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent" $request_time' ' Patna > sptnacdnds01.cdnsrv.jio.com sptnacdnds01 DS'; > proxy_cache_path /cache/11452 keys_zone=a11452:8m levels=1:2 max_size=50g > inactive=10m; > proxy_cache_path /cache/11506 keys_zone=a11506:8m levels=1:2 max_size=50g > inactive=10m; > proxy_cache_path /cache/12151 keys_zone=a12151:200m levels=1:2 > max_size=100g inactive=10m; > proxy_cache_path /cache/12053 keys_zone=a12053:200m levels=1:2 > max_size=50g inactive=10m; > proxy_cache_path /cache/11502 keys_zone=a11502:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/11503 keys_zone=a11503:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/11504 keys_zone=a11504:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/11505 keys_zone=a11505:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/11507 keys_zone=a11507:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12202 keys_zone=a12202:200m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12201 keys_zone=a12201:200m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12003 keys_zone=a12003:200m levels=1:2 > max_size=700g inactive=10d; > proxy_cache_path /cache/12008 keys_zone=a12008:300m levels=1:2 > max_size=20g inactive=10d; > proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12152 keys_zone=a12152:200m levels=1:2 > max_size=700g 
inactive=10d; > proxy_cache_path /cache/12005 keys_zone=a12005:200m levels=1:2 > max_size=100g inactive=10d; > proxy_cache_path /cache/12153 keys_zone=a12153:200m levels=1:2 > max_size=100g inactive=10d; > proxy_cache_path /cache/12006 keys_zone=a12006:200m levels=1:2 > max_size=100g inactive=10d; > proxy_cache_path /cache/11501 keys_zone=a11501:50m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12054 keys_zone=a12054:200m levels=1:2 > max_size=500g inactive=3d; > proxy_cache_path /cache/12251 keys_zone=a12251:200m levels=1:2 > max_size=200g inactive=15m; > proxy_cache_path /cache/12252 keys_zone=a12252:200m levels=1:2 > max_size=200g inactive=10d; > proxy_cache_path /cache/12301 keys_zone=a12301:200m levels=1:2 > max_size=200g inactive=10d; > > > Please suggest some solution. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271382#msg-271382 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From shahzaib.cb at gmail.com Wed Dec 7 07:56:49 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 7 Dec 2016 12:56:49 +0500 Subject: Bypass specific host in rate_limit !! Message-ID: Hi, Hopes you guys are doing great. Currrently, we're looking to activate rate_limit for our mp4 traffic the only confusion is if we can bypass some ip/host from rate_limit ? Thanks in advance !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Dec 7 10:24:23 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 07 Dec 2016 05:24:23 -0500 Subject: nginx error : cache file has too long header In-Reply-To: <20161207072818.GC4627@Romans-MacBook-Air.local> References: <20161207072818.GC4627@Romans-MacBook-Air.local> Message-ID: <829bdaffca23ef08e316fa66fde40cab.NginxMailingListEnglish@forum.nginx.org> Please note that we already have proxy_buffer_size 4096k, please suggest what should be ideal size for proxy_buffer_size in this scenario. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271389#msg-271389 From arut at nginx.com Wed Dec 7 10:41:22 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2016 13:41:22 +0300 Subject: nginx error : cache file has too long header In-Reply-To: <829bdaffca23ef08e316fa66fde40cab.NginxMailingListEnglish@forum.nginx.org> References: <20161207072818.GC4627@Romans-MacBook-Air.local> <829bdaffca23ef08e316fa66fde40cab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161207104122.GE4627@Romans-MacBook-Air.local> On Wed, Dec 07, 2016 at 05:24:23AM -0500, omkar_jadhav_20 wrote: > Please note that we already have proxy_buffer_size 4096k, please suggest > what should be ideal size for proxy_buffer_size in this scenario. Yes, I saw that. But it looks like you have too long proxy_cache_key. Or maybe, those files producing errors are just broken for some reason. Can you read those files? Each cache file has the following parts in it: - Serialized struct ngx_http_file_cache_header_t (unreadable characters in the beginnig of the file). This one is small. - Cache key in plaintext. You can easily see, if it's really long. 
- Response header - Response body -- Roman Arutyunyan From nginx-forum at forum.nginx.org Wed Dec 7 11:00:34 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 07 Dec 2016 06:00:34 -0500 Subject: nginx error : cache file has too long header In-Reply-To: <20161207104122.GE4627@Romans-MacBook-Air.local> References: <20161207104122.GE4627@Romans-MacBook-Air.local> Message-ID: <91b4b2ffffe33484a77bd21acdb5da6f.NginxMailingListEnglish@forum.nginx.org> I can open and read such cache files and below things are there inside these files : KEY: (this is one liner key) HTTP/1.1 200 OK x-mobi-fs-ver: X-Frame-Options: Cache-Control: Last-Modified: ETag: Content-Type: Content-Length: Accept-Ranges: etc. and at last a very long binary content. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271392#msg-271392 From arut at nginx.com Wed Dec 7 11:06:58 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2016 14:06:58 +0300 Subject: nginx error : cache file has too long header In-Reply-To: <91b4b2ffffe33484a77bd21acdb5da6f.NginxMailingListEnglish@forum.nginx.org> References: <20161207104122.GE4627@Romans-MacBook-Air.local> <91b4b2ffffe33484a77bd21acdb5da6f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161207110658.GA35892@Romans-MacBook-Air.local> On Wed, Dec 07, 2016 at 06:00:34AM -0500, omkar_jadhav_20 wrote: > I can open and read such cache files and below things are there inside these > files : > > KEY: (this is one liner key) > HTTP/1.1 200 OK > x-mobi-fs-ver: > X-Frame-Options: > Cache-Control: > Last-Modified: > ETag: > Content-Type: > Content-Length: > Accept-Ranges: > > etc. and at last a very long binary content. Is this one of the files, which produce 'too long header' errors? Is the file offset of the point, where the body starts, greater than 4096? That part of the cache file (file header + key + HTTP header) should fit in proxy_buffer_size. -- Roman Arutyunyan From nginx-forum at forum.nginx.org Wed Dec 7 11:34:29 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 07 Dec 2016 06:34:29 -0500 Subject: nginx error : cache file has too long header In-Reply-To: <20161207110658.GA35892@Romans-MacBook-Air.local> References: <20161207110658.GA35892@Romans-MacBook-Air.local> Message-ID: <717a659b0a8a1a746805c0204f0baa21.NginxMailingListEnglish@forum.nginx.org> Yes this is one of those cache file for which we received said error. I can see total size of this file is just 5300 bytes however we have set proxy_buffer size set to 4096k ls -lrt /cache/12054/1/fd/54ab395a128225b98118b08cf9d89fd1 x 5300 Dec 7 16:21 /cache/12054/1/fd/54ab395a128225b98118b08cf9d89fd1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271394#msg-271394 From arut at nginx.com Wed Dec 7 12:13:19 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2016 15:13:19 +0300 Subject: nginx error : cache file has too long header In-Reply-To: <717a659b0a8a1a746805c0204f0baa21.NginxMailingListEnglish@forum.nginx.org> References: <20161207110658.GA35892@Romans-MacBook-Air.local> <717a659b0a8a1a746805c0204f0baa21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161207121319.GC35892@Romans-MacBook-Air.local> On Wed, Dec 07, 2016 at 06:34:29AM -0500, omkar_jadhav_20 wrote: > Yes this is one of those cache file for which we received said error. 
I can see that the total size of this file is just 5300 bytes; however, we have proxy_buffer_size set to 4096k. ls -lrt /cache/12054/1/fd/54ab395a128225b98118b08cf9d89fd1 x 5300 Dec 7 16:21 /cache/12054/1/fd/54ab395a128225b98118b08cf9d89fd1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271394#msg-271394 From arut at nginx.com Wed Dec 7 12:13:19 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2016 15:13:19 +0300 Subject: nginx error : cache file has too long header In-Reply-To: <717a659b0a8a1a746805c0204f0baa21.NginxMailingListEnglish@forum.nginx.org> References: <20161207110658.GA35892@Romans-MacBook-Air.local> <717a659b0a8a1a746805c0204f0baa21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161207121319.GC35892@Romans-MacBook-Air.local> On Wed, Dec 07, 2016 at 06:34:29AM -0500, omkar_jadhav_20 wrote: > Yes this is one of those cache file for which we received said error.
-- Roman Arutyunyan From shahzaib.cb at gmail.com Wed Dec 7 12:55:23 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 7 Dec 2016 17:55:23 +0500 Subject: Bypass specific host in rate_limit !! In-Reply-To: <20161207123627.GF2958@daoine.org> References: <20161207123627.GF2958@daoine.org> Message-ID: Hi, Thank you, so it means we can use $limit_rate with map module or something ? Is there any example with limit_rate you can direct me to ? Thanks again for help !! On Wed, Dec 7, 2016 at 5:36 PM, Francis Daly wrote: > On Wed, Dec 07, 2016 at 12:56:49PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > Hopes you guys are doing great. Currrently, we're looking to activate > > rate_limit for our mp4 traffic the only confusion is if we can bypass > some > > ip/host from rate_limit ? > > If you mean the "limit_rate" directive, then the documentation at > http://nginx.org/r/limit_rate suggests that yes, you can. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Dec 7 13:08:11 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 07 Dec 2016 08:08:11 -0500 Subject: nginx error : cache file has too long header In-Reply-To: <20161207121319.GC35892@Romans-MacBook-Air.local> References: <20161207121319.GC35892@Romans-MacBook-Air.local> Message-ID: nginx-1.4.0 is the version but there are many servers on which we have installed nginx with version nginx-1.10.2 , giving same error: below is hexdump output before KEY from those servers on which nginx with version 1.10.2 is running : 00000000 03 00 00 00 00 00 00 00 a8 09 48 58 00 00 00 00 |..........HX....| 00000010 99 ee 47 58 00 00 00 00 50 07 48 58 00 00 00 00 |..GX....P.HX....| 00000020 6d 50 49 e8 00 00 d6 00 65 02 22 22 63 37 32 36 |mPI.....e.""c726| 00000030 64 30 35 38 39 35 37 61 34 62 32 36 62 31 35 62 |d058957a4b26b15b| 00000040 35 32 31 64 31 66 65 64 31 65 36 33 22 00 00 00 |521d1fed1e63"...| 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00000090 0a 4b 45 59 3a 20 68 74 74 70 3a 2f 2f 4c 49 56 |.KEY: http://LIV| ------------------------------------- actual size of the cache file - ls -lh /cache/12053/c/71/be411798f851da373230d41a50d5971c 458K Dec 7 18:27 /cache/12053/c/71/be411798f851da373230d41a50d5971c Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271402#msg-271402 From black.fledermaus at arcor.de Wed Dec 7 16:15:20 2016 From: black.fledermaus at arcor.de (basti) Date: Wed, 7 Dec 2016 17:15:20 +0100 Subject: Disable output_buffering in nginx config Message-ID: Hallo, for one application i need to disable output_buffering in php.ini. In my opinion there is a better solution to disable it for the special location and not for system wide php. So i try to set fastcgi_param PHP_VALUE "output_buffering=Off"; but when i reload nginx and look at the app/ or on my phpinfo side the option is still set to 4096 (default). has anybody a solution? Best regards From algermissen1971 at icloud.com Wed Dec 7 20:02:39 2016 From: algermissen1971 at icloud.com (Jan Algermissen) Date: Wed, 07 Dec 2016 21:02:39 +0100 Subject: How to upstream request set header based on SSI variable? Message-ID: <955997D0-378C-4EF1-B700-2EF2C66201B1@icloud.com> Hi, I have setup SSI to include responses from upstream services. 
I am trying to find a way to set additional headers for these upstream requests based on a variable set in the SSI page but I am not sure if that even works at all. Does someone know *if* this is doable and if so, how? In config I have server { set $etag "xxxxx"; location /up1/ { proxy_set_header If-Match $etag; proxy_pass http://upstream1/up1/; } } This sends an If-Match: xxxxx header to the upstream, but what I want to achieve is to send the "123456" instead. I'd also be glad for suggestions what I might try to get this to work. Jan From nginx-forum at forum.nginx.org Wed Dec 7 20:39:11 2016 From: nginx-forum at forum.nginx.org (ahamilton9) Date: Wed, 07 Dec 2016 15:39:11 -0500 Subject: Fastcgi_pass, resolver, and validating functionality. In-Reply-To: <20161206202310.GD2958@daoine.org> References: <20161206202310.GD2958@daoine.org> Message-ID: <45d789df237689eb1ebd7ee920ae3d9f.NginxMailingListEnglish@forum.nginx.org> I've tried recreating the configuration on my local machine, and it works just fine. Really confusing. I don't understand why I get absolutely no DNS traffic on my nginx server (beyond the few calls from other services). Not a single call from nginx. Also, I read in the documentation that "By default, nginx caches answers using the TTL value of a response", and then everywhere else I'm told it doesn't. We never had a resolver before I updated to 1.10, and it worked fine for years. Another mailing list post mentions that the issue of TTLs being ignored would be "fixed" way back in version 1.1: https://forum.nginx.org/read.php?2,217468,217468#msg-217468 Did it get removed or put into nginx Plus only by version 1.10? I'm still not sure whats going on, and from my month+ of research, it seems I'm the only one having this problem... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271373,271420#msg-271420 From mdounin at mdounin.ru Thu Dec 8 13:47:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 8 Dec 2016 16:47:18 +0300 Subject: How to upstream request set header based on SSI variable? In-Reply-To: <955997D0-378C-4EF1-B700-2EF2C66201B1@icloud.com> References: <955997D0-378C-4EF1-B700-2EF2C66201B1@icloud.com> Message-ID: <20161208134718.GF18639@mdounin.ru> Hello! On Wed, Dec 07, 2016 at 09:02:39PM +0100, Jan Algermissen wrote: > Hi, > > I have setup SSI to include responses from upstream services. > > I am trying to find a way to set additional headers for these upstream > requests based on a variable set in the SSI page but I am not sure if > that even works at all. > > Does someone know *if* this is doable and if so, how? > > > In config I have > > server { > set $etag "xxxxx"; > > location /up1/ { > proxy_set_header If-Match $etag; > proxy_pass http://upstream1/up1/; > } > } > > > > > > > > This sends an > > If-Match: xxxxx > > header to the upstream, but what I want to achieve is to send the > "123456" instead. > > I'd also be glad for suggestions what I might try to get this to work. The approach you are using won't work, as variables set via the SSI "set" command are local to SSI and can be only accessed in SSI. 
An alternative solution would be to use appropriate value in URI of the "include" command, with appropriate URI change later: location /up1/ { set $etag $args; set $args ""; proxy_set_header If-Match $etag; proxy_pass http://upstream1; } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 8 13:57:43 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 8 Dec 2016 16:57:43 +0300 Subject: Fastcgi_pass, resolver, and validating functionality. In-Reply-To: <45d789df237689eb1ebd7ee920ae3d9f.NginxMailingListEnglish@forum.nginx.org> References: <20161206202310.GD2958@daoine.org> <45d789df237689eb1ebd7ee920ae3d9f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161208135743.GG18639@mdounin.ru> Hello! On Wed, Dec 07, 2016 at 03:39:11PM -0500, ahamilton9 wrote: > I've tried recreating the configuration on my local machine, and it works > just fine. Really confusing. I don't understand why I get absolutely no DNS > traffic on my nginx server (beyond the few calls from other services). Not a > single call from nginx. The behaviour you describe suggests that the reason for what you see is an upstream explicitly or implicitly configured elsewhere. That is, look for something like fastcgi_pass php:9000; elsewhere in your config. -- Maxim Dounin http://nginx.org/ From algermissen1971 at icloud.com Thu Dec 8 14:12:55 2016 From: algermissen1971 at icloud.com (Jan Algermissen) Date: Thu, 08 Dec 2016 15:12:55 +0100 Subject: How to upstream request set header based on SSI variable? In-Reply-To: <20161208134718.GF18639@mdounin.ru> References: <955997D0-378C-4EF1-B700-2EF2C66201B1@icloud.com> <20161208134718.GF18639@mdounin.ru> Message-ID: <01171EB3-D00A-4E2A-9AD3-18EF04CDA702@icloud.com> On 8 Dec 2016, at 14:47, Maxim Dounin wrote: > Hello! > > On Wed, Dec 07, 2016 at 09:02:39PM +0100, Jan Algermissen wrote: > >> Hi, >> >> I have setup SSI to include responses from upstream services. >> >> I am trying to find a way to set additional headers for these upstream >> requests based on a variable set in the SSI page but I am not sure if >> that even works at all. >> >> Does someone know *if* this is doable and if so, how? >> >> >> In config I have >> >> server { >> set $etag "xxxxx"; >> >> location /up1/ { >> proxy_set_header If-Match $etag; >> proxy_pass http://upstream1/up1/; >> } >> } >> >> >> >> >> >> >> >> This sends an >> >> If-Match: xxxxx >> >> header to the upstream, but what I want to achieve is to send the >> "123456" instead. >> >> I'd also be glad for suggestions what I might try to get this to work. > > The approach you are using won't work, as variables set via the > SSI "set" command are local to SSI and can be only accessed in > SSI. Yes, I suspected that. > > An alternative solution would be to use appropriate value in URI > of the "include" command, with appropriate URI change later: > > > > location /up1/ { > set $etag $args; > set $args ""; > proxy_set_header If-Match $etag; > proxy_pass http://upstream1; > } Clever, thanks - works just fine. 
Jan > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Dec 9 13:38:45 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Fri, 09 Dec 2016 08:38:45 -0500 Subject: nginx error : cache file has too long header In-Reply-To: References: <20161207121319.GC35892@Romans-MacBook-Air.local> Message-ID: <11de409418c6073ddc45fbf1e8225a51.NginxMailingListEnglish@forum.nginx.org> can someone please assist here ... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271382,271439#msg-271439 From nginx at 2xlp.com Fri Dec 9 23:29:04 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Fri, 9 Dec 2016 18:29:04 -0500 Subject: can't replicate/block portscanner Message-ID: <21B0858B-9D58-42B8-9EA4-78D365810F3F@2xlp.com> I got hit with a portscanner a few minutes ago, which caused an edge-case I can't repeat. the access log looks like this: 94.102.48.193 - [09/Dec/2016:22:15:03 +0000][_] 500 "GET / HTTP/1.0" 10299 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" "-" cookies="-" the server block was: server { listen 80 default_server; server_name _; ... } but there is another ip block: server { listen 80; server_name ~^[0-9.]*$; } i can't figure out how to duplicate this request. the 500 was triggered, because the upstream application server didn't get find a "HTTP_HOST" environment variable set up, and i'd like to protect against this. From rpaprocki at fearnothingproductions.net Sat Dec 10 00:09:21 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 9 Dec 2016 16:09:21 -0800 Subject: can't replicate/block portscanner In-Reply-To: <21B0858B-9D58-42B8-9EA4-78D365810F3F@2xlp.com> References: <21B0858B-9D58-42B8-9EA4-78D365810F3F@2xlp.com> Message-ID: Should be fairly easy to do with any command to write data over the wire (nc/netcat/echo into /dev/tcp): echo -en 'GET / HTTP/1.0' | nc 1.2.3.4 It should be worth noting that the Host header is not a required HTTP/1.0 header, so if your app requires the Host header (or derives some other variable value from this header), you should either require HTTP/1.1, or find a way to set this header in the proxies request. The proxy_pass documentation has some discussion on setting the Host header in particular for proxy environments: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header On Fri, Dec 9, 2016 at 3:29 PM, Jonathan Vanasco wrote: > > I got hit with a portscanner a few minutes ago, which caused an edge-case > I can't repeat. > > the access log looks like this: > > 94.102.48.193 - [09/Dec/2016:22:15:03 +0000][_] 500 "GET / > HTTP/1.0" 10299 "-" "masscan/1.0 (https://github.com/ > robertdavidgraham/masscan)" "-" cookies="-" > > the server block was: > > server { > listen 80 default_server; > server_name _; > ... > } > > but there is another ip block: > > server { listen 80; > server_name ~^[0-9.]*$; > } > > > i can't figure out how to duplicate this request. the 500 was triggered, > because the upstream application server didn't get find a "HTTP_HOST" > environment variable set up, and i'd like to protect against this. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx at 2xlp.com Sat Dec 10 00:47:30 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Fri, 9 Dec 2016 19:47:30 -0500 Subject: can't replicate/block portscanner In-Reply-To: References: <21B0858B-9D58-42B8-9EA4-78D365810F3F@2xlp.com> Message-ID: <9818073A-5A83-49D2-8A30-184F1243B46E@2xlp.com> On Dec 9, 2016, at 7:09 PM, Robert Paprocki wrote: > Should be fairly easy to do with any command to write data over the wire (nc/netcat/echo into /dev/tcp): Thanks for all this... I now mostly understand what was going on. The *intent* of the nginx setup was do to the following, via 3 server blocks: * ip address - redirects to example.com * example.com - goes to appserver 1 * failover domains - goes to appserver 2 (which requires the Host header) I naively expected this sort of request to be processed by the first block, not the last. Looking at the docs, it seems I just need to do an empty host block now. From nginx-forum at forum.nginx.org Sat Dec 10 18:08:13 2016 From: nginx-forum at forum.nginx.org (hemendra26) Date: Sat, 10 Dec 2016 13:08:13 -0500 Subject: nginx x-accel-redirect request method named location Message-ID: I was using nginx x-accel-redirect as an authentication frontend for an external db resource. In my python code I would do the following: /getresource/ def view(self, req, resp): name = get_dbname(req.user.id) resp.set_header('X-Accel-Redirect', '/resource/%s/' %name ) This would forward the HTTP method as well until nginx 1.10 Since nginx 1.10 all x-accel-redirects are forwarded as GET methods. >From this thread: https://forum.nginx.org/read.php?2,271372,271380#msg-271380 I understand that the correct way to forward the HTTP method is to use named location. I am unable to find documentation on how this should be done. I tried the following: def view(self, req, resp): name = get_dbname(req.user.id) resp.set_header('X-Accel-Redirect', '@resource' ) but this redirects to @resource / I would like to redirect to @resource /name Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271448,271448#msg-271448 From nginx-forum at forum.nginx.org Sun Dec 11 17:05:19 2016 From: nginx-forum at forum.nginx.org (Joergi) Date: Sun, 11 Dec 2016 12:05:19 -0500 Subject: How do I rewrite files, but only, if they are in one special folder? Message-ID: <1b9da53b293d944c25974b6d9eb2704d.NginxMailingListEnglish@forum.nginx.org> Hi guys, I am new to nginx and I need to do a few rewrites, which I need help with. I currently have this configuration: location ~ \.php5 { root /home/$username/www/; rewrite ^/(.*)\.php5 /$1.php permanent; } The problem with this is that it rewrites the files, also if they are in subfolders - and this is what I do not want. I have a root folder and inside it there is a folder called wiki/. php5 files from inside one of these two folders should get rewritten to php files. E.g. the file /api.php5 should get rewritten to /api.php. File /wiki/load.php5 should get rewritten to /wiki/load.php. But file /wiki/some-subfolder/file.php5 should *not* get rewritten, because it is neither in the root folder nor in wiki/ directly, but in one of the subfolders of wiki. Files from subfolders should *not* get rewritten. How can I get that? Joerg Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271450,271450#msg-271450 From francis at daoine.org Mon Dec 12 21:18:01 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 12 Dec 2016 21:18:01 +0000 Subject: How do I rewrite files, but only, if they are in one special folder? 
In-Reply-To: <1b9da53b293d944c25974b6d9eb2704d.NginxMailingListEnglish@forum.nginx.org> References: <1b9da53b293d944c25974b6d9eb2704d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161212211801.GI2958@daoine.org> On Sun, Dec 11, 2016 at 12:05:19PM -0500, Joergi wrote: Hi there, > location ~ \.php5 { > root /home/$username/www/; > rewrite ^/(.*)\.php5 /$1.php permanent; > } > > The problem with this is that it rewrites the files, also if they are in > subfolders - and this is what I do not want. You don't say what you want to happen to those other files, so I will leave it at "nothing special". Just rewrite what you want to. That is: no-slash, or /wiki/ then no-slash. rewrite ^/([^/]*)\.php5 /$1.php permanent; rewrite ^(/wiki/[^/]*\.php)5 $1 permanent; Untested, but it looks right :-) You may want to restrict these to the locations that match their prefixes, depending on what else is happening. f -- Francis Daly francis at daoine.org From 14mseesrasool at seecs.edu.pk Tue Dec 13 08:18:02 2016 From: 14mseesrasool at seecs.edu.pk (Syed Hamid Rasool) Date: Tue, 13 Dec 2016 13:18:02 +0500 Subject: Host public folder in nginx Message-ID: Hi, I am using a pre-configured webserver that comes with NGINX-RTMP module (I was having build issues even after installing all required dependencies so I had to download a VM). The config files are in /usr/local/nginx/conf while landing and stream pages are in '/usr/local/nginx/html' and '/usr/local/nginx/html/stream' and can be accessed via http:// and http:///stream respectively. I want to upload some files to be accessible so I uploaded them to '/usr/local/nginx/html/content' directory and made changes to nginx.conf by adding a location block within the server block in nginx.conf with root pointed to 'html/content' as the first location block is set to root html; I have reloaded the conf but I am getting 403 Not found error. Please help out with this simple task. Thanks. Note: VM is running on Centos x64 server 7.7 -- Regards, ?ICE -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 13 14:43:38 2016 From: nginx-forum at forum.nginx.org (alfaromeo145) Date: Tue, 13 Dec 2016 09:43:38 -0500 Subject: Erro 404 Nginx rewrite Message-ID: Hello guys, I have a PS store 1.6.1.2, with Nginx 0.9.8 and php-fpm. It is running almost 100%, just missing the Boleto and Transfer are. When I finish the payment a 404 error is generated. By my tests when I disable the rewrite it works perfectly. Anyone know or could help me what parameters need to add in nginx for rewrite ticket generation and bank transfer? 
Here is the url he tries to rewrite: Transfer?ncia: https://www.em4patas.com.br/module/fkpagseguroct/index.php?controller=order-confirmation&id_cart=22&id_module=69&id_order=11&key=2de101ad29ae36c225fcb58e02732b0d&cod_status=1&cod_transacao=E0376D49-60D8-4C4C-89C0-02D1407C7A2D&link_boleto=&link_transf=https://pagseguro.uol.com.br/checkout/payment/eft/print.jhtml?c=082239c57b0311a7aec297164e0206fc6b25cf7ef72b14cf1bfff3324d6c9c3f079194c2c0f27db9 Boleto https://www.em4patas.com.br/module/fkpagseguroct/index.php?controller=order-confirmation&id_cart=20&id_module=69&id_order=9&key=2de101ad29ae36c225fcb58e02732b0d&cod_status=1&cod_transacao=AE7A2E66-5169-4C76-9F61-F5CE9E74B405&link_boleto=https://pagseguro.uol.com.br/checkout/payment/booklet/print.jhtml?c=7348be634c88847921b4ef9ce6943e55f6bc50b58035c819ba4f36d93785da00da0746b69bdba55f&link_transf= My confg in prestashop: ssl on; ssl_certificate /home/admin/conf/web/ssl.em4patas.com.br.pem; ssl_certificate_key /home/admin/conf/web/ssl.em4patas.com.br.key; rewrite ^/api/?(.*)$ /webservice/dispatcher.php?url=$1 last; rewrite ^/([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$1$2.jpg last; rewrite ^/([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$1$2$3.jpg last; rewrite ^/([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$1$2$3$4.jpg last; rewrite ^/([0-9])([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$4/$1$2$3$4$5.jpg last; rewrite ^/([0-9])([0-9])([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$4/$5/$1$2$3$4$5$6.jpg last; rewrite ^/([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$4/$5/$6/$1$2$3$4$5$6$7.jpg last; rewrite ^/([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$4/$5/$6/$7/$1$2$3$4$5$6$7$8.jpg last; rewrite ^/([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])([0-9])(-[_a-zA-Z0-9-]*)?(-[0-9]+)?/.+\.jpg$ /img/p/$1/$2/$3/$4/$5/$6/$7/$8/$1$2$3$4$5$6$7$8$9.jpg last; rewrite ^/c/([0-9]+)(-[_a-zA-Z0-9-]*)(-[0-9]+)?/.+\.jpg$ /img/c/$1$2.jpg last; rewrite ^/c/([a-zA-Z-]+)(-[0-9]+)?/.+\.jpg$ /img/c/$1.jpg last; rewrite ^/([0-9]+)(-[_a-zA-Z0-9-]*)(-[0-9]+)?/.+\.jpg$ /img/c/$1$2.jpg last; try_files $uri $uri/ /index.php?$args; rewrite ^/page-not-found$ /index.php?controller=404 last; rewrite ^/address$ /index.php?controller=address last; rewrite ^/addresses$ /index.php?controller=addresses last; rewrite ^/authentication$ /index.php?controller=authentication last; rewrite ^/best-sales$ /index.php?controller=best-sales last; rewrite ^/cart$ /index.php?controller=cart last; rewrite ^/contact-us$ /index.php?controller=contact-form last; rewrite ^/discount$ /index.php?controller=discount last; rewrite ^/guest-tracking$ /index.php?controller=guest-tracking last; rewrite ^/order-history$ /index.php?controller=history last; rewrite ^/identity$ /index.php?controller=identity last; rewrite ^/manufacturers$ /index.php?controller=manufacturer last; rewrite ^/my-account$ /?controller=authentication&back=my-account last; rewrite ^/new-products$ /index.php?controller=new-products last; rewrite ^/order$ /index.php?controller=order last; rewrite ^/order-follow$ /index.php?controller=order-follow last; rewrite ^/quick-order$ /index.php?controller=order-opc last; rewrite ^/order-slip$ /index.php?controller=order-slip last; rewrite ^/password-recovery$ /index.php?controller=password last; rewrite ^/prices-drop$ /index.php?controller=prices-drop last; rewrite 
^/search$ /index.php?controller=search last; rewrite ^/sitemap$ /index.php?controller=sitemap last; rewrite ^/stores$ /index.php?controller=stores last; rewrite ^/supplier$ /index.php?controller=supplier last; rewrite "^/module/([_a-zA-Z0-9-]*)/([_a-zA-Z0-9-]*)$" /index.php?fc=module&module=$1&controller=$2 last; location ~* ^.+\.(jpeg|jpg|png|gif|bmp|ico|svg|css|js)$ { expires max; } location ~ [^/]\.php(/|$) { fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_pass 127.0.0.1:9001; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } error_page 403 /error/404.html; error_page 404 /error/404.html; error_page 500 502 503 504 /error/50x.html; location /error/ { alias /home/admin/web/em4patas.com.br/document_errors/; } location ~* "/\.(htaccess|htpasswd)$" { deny all; return 404; } location /vstats/ { alias /home/admin/web/em4patas.com.br/stats/; include /home/admin/web/em4patas.com.br/stats/auth.conf*; } include /etc/nginx/conf.d/phpmyadmin.inc*; include /etc/nginx/conf.d/phppgadmin.inc*; include /etc/nginx/conf.d/webmail.inc*; include /home/admin/conf/web/snginx.em4patas.com.br.conf*; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271465,271465#msg-271465 From mdounin at mdounin.ru Tue Dec 13 15:33:22 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2016 18:33:22 +0300 Subject: nginx-1.11.7 Message-ID: <20161213153322.GY18639@mdounin.ru> Changes with nginx 1.11.7 13 Dec 2016 *) Change: now in case of a client certificate verification error the $ssl_client_verify variable contains a string with the failure reason, for example, "FAILED:certificate has expired". *) Feature: the $ssl_ciphers, $ssl_curves, $ssl_client_v_start, $ssl_client_v_end, and $ssl_client_v_remain variables. *) Feature: the "volatile" parameter of the "map" directive. *) Bugfix: dependencies specified for a module were ignored while building dynamic modules. *) Bugfix: when using HTTP/2 and the "limit_req" or "auth_request" directives client request body might be corrupted; the bug had appeared in 1.11.0. *) Bugfix: a segmentation fault might occur in a worker process when using HTTP/2; the bug had appeared in 1.11.3. *) Bugfix: in the ngx_http_mp4_module. Thanks to Congcong Hu. *) Bugfix: in the ngx_http_perl_module. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Dec 13 15:58:27 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 13 Dec 2016 10:58:27 -0500 Subject: [nginx-announce] nginx-1.11.7 In-Reply-To: <20161213153328.GZ18639@mdounin.ru> References: <20161213153328.GZ18639@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.7 for Windows https://kevinworthington.com/nginxwin1117 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 13, 2016 at 10:33 AM, Maxim Dounin wrote: > Changes with nginx 1.11.7 13 Dec > 2016 > > *) Change: now in case of a client certificate verification error the > $ssl_client_verify variable contains a string with the failure > reason, for example, "FAILED:certificate has expired". > > *) Feature: the $ssl_ciphers, $ssl_curves, $ssl_client_v_start, > $ssl_client_v_end, and $ssl_client_v_remain variables. > > *) Feature: the "volatile" parameter of the "map" directive. > > *) Bugfix: dependencies specified for a module were ignored while > building dynamic modules. > > *) Bugfix: when using HTTP/2 and the "limit_req" or "auth_request" > directives client request body might be corrupted; the bug had > appeared in 1.11.0. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2; the bug had appeared in 1.11.3. > > *) Bugfix: in the ngx_http_mp4_module. > Thanks to Congcong Hu. > > *) Bugfix: in the ngx_http_perl_module. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis.norton at gmail.com Tue Dec 13 17:05:20 2016 From: francis.norton at gmail.com (Francis Norton) Date: Tue, 13 Dec 2016 17:05:20 +0000 Subject: HTTP 403 when using proxy_pass with upstream server block Message-ID: When I reverse proxy to a public echo service, I can make this work using the server name directly, but I get an HTTP 403 if I use an upstream server block pointing to the same domain. Simple Scenario: ``` location / { proxy_pass http://rve.org.uk/utils/echo-nocache.cgi/freg/; } ``` Upstream scenario: ``` upstream app_server { server rve.org.uk:80 ; } server { [...] location / { proxy_pass http://app_server/utils/echo-nocache.cgi/freg/; } ``` Following a rejected defect report (see https://trac.nginx.org/ng inx/ticket/1155#comment:1 for full nginx.conf) I have tried using: proxy_set_header Host $proxy_host; proxy_set_header Host $http_host; proxy_set_header Host $host; proxy_set_header Host rev.org.uk; None of them work in the upstream scenario, only the first, $proxy_host works for the simple scenario. I have tested this on both nginx for Windows and the default Docker Hub image (nginx/1.11.6), and had identical results. Can anyone please tell me what I am doing wrong? Thanks - Francis. -- *Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation* - Daniel Levitin, The Organised Mind -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis.norton at gmail.com Tue Dec 13 17:15:53 2016 From: francis.norton at gmail.com (Francis Norton) Date: Tue, 13 Dec 2016 17:15:53 +0000 Subject: HTTP 403 when using proxy_pass with upstream server block Message-ID: When I reverse proxy to a public echo service, I can make this work using the server name directly, but I get an HTTP 403 if I use an upstream server block pointing to the same domain. 
Simple Scenario: ``` location / { proxy_pass http://rve.org.uk/utils/echo-nocache.cgi/freg/; } ``` Upstream scenario: ``` upstream app_server { server rve.org.uk:80 ; } server { [...] location / { proxy_pass http://app_server/utils/echo-nocache.cgi/freg/; } ``` Following a rejected defect report (see https://trac.nginx.org/ng inx/ticket/1155#comment:1 for full nginx.conf) I have tried using: proxy_set_header Host $proxy_host; proxy_set_header Host $http_host; proxy_set_header Host $host; proxy_set_header Host rev.org.uk; None of them work in the upstream scenario, only the first, $proxy_host works for the simple scenario. I have tested this on both nginx for Windows and the default Docker Hub image (nginx/1.11.6), and had identical results. Can anyone please tell me what I am doing wrong? Thanks - Francis. -- *Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation* - Daniel Levitin, The Organised Mind -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 13 18:29:17 2016 From: nginx-forum at forum.nginx.org (Joergi) Date: Tue, 13 Dec 2016 13:29:17 -0500 Subject: How do I rewrite files, but only, if they are in one special folder? In-Reply-To: <20161212211801.GI2958@daoine.org> References: <20161212211801.GI2958@daoine.org> Message-ID: Hi Francis, thanks for your answer! > You don't say what you want to happen to those other files, so I will > leave it at "nothing special". Yes, that is right. Files in other folders, also if the names of the files end on php5, should not be rewritten. > That is: no-slash, or /wiki/ then no-slash. > > rewrite ^/([^/]*)\.php5 /$1.php permanent; > rewrite ^(/wiki/[^/]*\.php)5 $1 permanent; I am currently on the way understanding what you are doing here. Pretty sophisticated and way shorter than what I would have tried. In my tests this is working as expected. Thanks for the input! :-) > You may want to restrict these to the locations that match their > prefixes, depending on what else is happening. What do you mean? The "main folder", which you are influencing with your first rule, is the web root: /home/$username/www/. The foder wiki is the subfolder in there. Any recommendation what location I should add around the two rewrites or if I should add one? Cheers! Joerg Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271450,271475#msg-271475 From francis at daoine.org Tue Dec 13 20:22:25 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 13 Dec 2016 20:22:25 +0000 Subject: Host public folder in nginx In-Reply-To: References: Message-ID: <20161213202225.GJ2958@daoine.org> On Tue, Dec 13, 2016 at 01:18:02PM +0500, Syed Hamid Rasool via nginx wrote: Hi there, > The config files are in /usr/local/nginx/conf while landing and stream > pages are in '/usr/local/nginx/html' and '/usr/local/nginx/html/stream' and > can be accessed via http:// and http:///stream > respectively. > > I want to upload some files to be accessible so I uploaded them to > '/usr/local/nginx/html/content' directory and made changes to nginx.conf by > adding a location block within the server block in nginx.conf with root > pointed to 'html/content' as the first location block is set to root html; > I have reloaded the conf but I am getting 403 Not found error. If you look in the error log, it may show you why nginx is raising an error. 
Anyway: if you remove the extra bit you added to nginx.conf, that may cause things to start working for you. If you want a separate location{} for your /content urls, you should still keep "root html" active in it -- http://nginx.org/r/root for the documentation. f -- Francis Daly francis at daoine.org From emailgrant at gmail.com Tue Dec 13 22:01:34 2016 From: emailgrant at gmail.com (Grant) Date: Tue, 13 Dec 2016 14:01:34 -0800 Subject: limit_req per subnet? Message-ID: I recently suffered DoS from a series of 10 sequential IP addresses. limit_req would have dealt with the problem if a single IP address had been used. Can it be made to work in a situation like this where a series of sequential IP addresses are in play? Maybe per subnet? - Grant From lists at lazygranch.com Wed Dec 14 00:42:14 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 13 Dec 2016 16:42:14 -0800 Subject: limit_req per subnet? In-Reply-To: References: Message-ID: <20161214004214.5435470.59907.18363@lazygranch.com> That attack wasn't very distributed. ;-) Did you see if the IPs were from an ISP? If not, I'd ban the service using the Hurricane Electric BGP as a guide. ?At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them. If the attack was from an ISP, I can visualize a fail2ban scheme blocking the last quad not being too hard to implement . That is block xxx.xxx.xxx.0/24. ? Or maybe just let a typical fail2ban set up do your limiting and don't get fancy about the IP range. I try "traffic management" at the firewall first. As I discovered with "deny" ?in nginx, much CPU work is still done prior to ignoring the request. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway.? ? Original Message ? From: Grant Sent: Tuesday, December 13, 2016 2:01 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: limit_req per subnet? I recently suffered DoS from a series of 10 sequential IP addresses. limit_req would have dealt with the problem if a single IP address had been used. Can it be made to work in a situation like this where a series of sequential IP addresses are in play? Maybe per subnet? - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Dec 14 06:09:32 2016 From: nginx-forum at forum.nginx.org (toffs.hl) Date: Wed, 14 Dec 2016 01:09:32 -0500 Subject: setting up a forward proxy for a few specific website only, and block the rest Message-ID: <3f0f51cb7b56872941141b9c7732214a.NginxMailingListEnglish@forum.nginx.org> hi All, Newbie to nginx, and been trying to search high and low for this particular way of configuration. Here is what I plan to do 1) Setup a nginx forward proxy, and this particular proxy server will only accept the proxy connection based on destination website, for example, I want to setup this nginx to proxy for 5 website, eg , lets call this proxy server PROXY_AAA a) www.nginx.com b) www.nginx.org c) www.freebsd.org d) www.php.net e) www.mariadb.org 2) I will setup this proxy server in cloud server provider 3) I will need to create a PAC file, and let my users to use this particular proxy PAC file for traffic re-direction, user will have to configure their browser to use proxy PAC file. 
4) Whenever my users (that are using the PAC file) trying to access to the above 5 website, regardless of using HTTP or HTTPS, the proxy PAC file will get the traffic flow through my PROYX_AAA server, any other website that the user access, the traffic will go direct via exiting connection (meaning it will not send through my PROXY_AAA). 5) I also need to configure the PROXY_AAA to proxy for the above 5 website only, and block any other website or refused the connection request to access any other website, as I want this proxy server will only proxy for the domain that I configure/allow, not any other website. This is also to avoid other users to force their traffic through my proxy server. 6) Proxy connection based on source IP address is not possible, as the users IP is dynamic and changes over time. My proxy will accept any source IP, and will proxy only for the few website that i configure. The proxy PAC file help to decide the traffic should send to my proxy or direct connection. So would like to ask anyone has configure such config in nginx before ? How do I configure the nginx as forward proxy, to block all proxy request, and allow only the few website that I want to proxy ? HL. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271492,271492#msg-271492 From nginx-forum at forum.nginx.org Wed Dec 14 07:36:08 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 14 Dec 2016 02:36:08 -0500 Subject: limit_req per subnet? In-Reply-To: <20161214004214.5435470.59907.18363@lazygranch.com> References: <20161214004214.5435470.59907.18363@lazygranch.com> Message-ID: <61baf7048833e343ca49bbcb6430eb51.NginxMailingListEnglish@forum.nginx.org> I am curious what is the request uri they was hitting. Was it a dynamic page or file or a static one. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271494#msg-271494 From nginx-forum at forum.nginx.org Wed Dec 14 11:36:14 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Wed, 14 Dec 2016 06:36:14 -0500 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: References: Message-ID: <22a34a01c7c44c33323c2a7525dc2d26.NginxMailingListEnglish@forum.nginx.org> Hello ! steveh Wrote: ------------------------------------------------------- > listen 443 default_server accept_filter=httpready ssl; > listen 80 default_server accept_filter=httpready; Not related to your problem: I think you'll want "accept_filter=dataready" for your SSL configuration. Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271346,271496#msg-271496 From alexandr.porunov at gmail.com Wed Dec 14 12:08:12 2016 From: alexandr.porunov at gmail.com (Alexandr Porunov) Date: Wed, 14 Dec 2016 14:08:12 +0200 Subject: How to use Nginx as a proxy for S3 compatible storage with version 4 signature? Message-ID: Hello, I want to use Nginx as a proxy for private S3 compatible storage (i.e. It isn't s3.amazon.com but has the exactly the same API). I am novice in Nginx so I am not sure if I will explain correctly but I will try. I have 3 nodes: mydomain.com - node with nginx s3storage - private storage with S3 API client - client which wants to use S3 storage through Nginx. This client can work only with version 4 signature of S3 So, "client" sends requests to Nginx like this: https://.mydomain.com/ Now I need to proxy that requests to "s3storage" but I don't know how. The problem is that I need to send requests with version 4 signature which must use bucket name as a part of url and it is confusing. 
It would be easy to proxy requests like this: https://mydomain.com// but with version4 we need to send requests like: https://.mydomain.com/ The problem is that s3storage is a private node which hasn't a public domain. Only Nginx (which is a public node) can see s3storage. Does somebody know how to properly proxy such requests? Please, tell me if I missed something or the question isn't clearly asked Sincerely, Alexandr -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Dec 14 13:54:09 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 14 Dec 2016 15:54:09 +0200 Subject: How to use Nginx as a proxy for S3 compatible storage with version 4 signature? In-Reply-To: References: Message-ID: <1379159492F34C4A95C2DCC124B8B3F1@MasterPC> > It would be easy to proxy requests like this: > https://mydomain.com// > but with version4 we need to send requests like: > https://.mydomain.com/ > The problem is that s3storage is a private node which hasn't a public domain. Only Nginx (which is a public node) can see s3storage. > Does somebody know how to properly proxy such requests? If you allready have a previous working configuration for the above setup then changing the hostname which nginx uses for the backend is kind of simple ? you just need to pass a Host header which works for the S3 backend (by default nginx uses whatever is in the proxy_pass directive either IP or the name from upstream {} block). It wasn?t exactly clear (to me) how the client interacts with nginx (which is the correct url) I mean if it sends the request using https://.mydomain.com/ to nginx you can just use a simple config: A generic example (optionally the backend can be defined in seperate upstream {} block): server { server_name .mydomain.com; location / { proxy_set_header Host .mydomain.com; proxy_pass https://yours3backendhostname; } } rr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 14 14:31:49 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Dec 2016 17:31:49 +0300 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <22a34a01c7c44c33323c2a7525dc2d26.NginxMailingListEnglish@forum.nginx.org> References: <22a34a01c7c44c33323c2a7525dc2d26.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161214143126.GI18639@mdounin.ru> Hello! On Wed, Dec 14, 2016 at 06:36:14AM -0500, Alt wrote: > steveh Wrote: > ------------------------------------------------------- > > > listen 443 default_server accept_filter=httpready ssl; > > listen 80 default_server accept_filter=httpready; > > Not related to your problem: I think you'll want "accept_filter=dataready" > for your SSL configuration. Yes, but there isn't much difference: as long as httpready sees something different from a HTTP request, it just passes the connection to nginx. Quoting accf_http(9): If something other than a HTTP/1.0 or HTTP/1.1 HEAD or GET request is received the kernel will allow the application to receive the connection descriptor via accept(). So the only difference is an additional check in the kernel. -- Maxim Dounin http://nginx.org/ From emailgrant at gmail.com Wed Dec 14 18:30:01 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 14 Dec 2016 10:30:01 -0800 Subject: limit_req per subnet? In-Reply-To: <20161214004214.5435470.59907.18363@lazygranch.com> References: <20161214004214.5435470.59907.18363@lazygranch.com> Message-ID: > Did you see if the IPs were from an ISP? 
If not, I'd ban the service using the Hurricane Electric BGP as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them. What sort of sites run into problems after doing that? I'm sure some sites need to allow cloud services to access them. A startup search engine could be run from such a service. > If the attack was from an ISP, I can visualize a fail2ban scheme blocking the last quad not being too hard to implement . That is block xxx.xxx.xxx.0/24. ? Or maybe just let a typical fail2ban set up do your limiting and don't get fancy about the IP range. > > I try "traffic management" at the firewall first. As I discovered with "deny" ?in nginx, much CPU work is still done prior to ignoring the request. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway. It sounds like limit_req in nginx does not have any way to do this. How would you accomplish this in fail2ban? - Grant > I recently suffered DoS from a series of 10 sequential IP addresses. > limit_req would have dealt with the problem if a single IP address had > been used. Can it be made to work in a situation like this where a > series of sequential IP addresses are in play? Maybe per subnet? From emailgrant at gmail.com Wed Dec 14 18:30:48 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 14 Dec 2016 10:30:48 -0800 Subject: limit_req per subnet? In-Reply-To: <61baf7048833e343ca49bbcb6430eb51.NginxMailingListEnglish@forum.nginx.org> References: <20161214004214.5435470.59907.18363@lazygranch.com> <61baf7048833e343ca49bbcb6430eb51.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I am curious what is the request uri they was hitting. Was it a dynamic page > or file or a static one. It was semrush and it was all manner of dynamic pages. - Grant From nginx-forum at forum.nginx.org Wed Dec 14 19:13:20 2016 From: nginx-forum at forum.nginx.org (kms-pt) Date: Wed, 14 Dec 2016 14:13:20 -0500 Subject: access_logging in the stream block Message-ID: <7f4e7854240a65e7c80b656721491f04.NginxMailingListEnglish@forum.nginx.org> Hello, Just wondering if anyone knows if access_logs are able to be configured in the stream block. We are looking to implement TCP stream which works but also have the requirement of logging the connections, transactions, etc. I know error_log can be enabled but I have found no documentation stating access_log will work. We have confirmed connections via nginx will work and can connect to the backend service but no actions are logged. Sample config: stream { server { listen 12345; proxy_pass servername:12345; } } I have tried adding access_log but only get the error: Starting nginx: nginx: [emerg] "access_log" directive is not allowed here in /etc/nginx/nginx.conf:22 I also tried adding a log_format section in the event that was required. Any advice/suggestions welcome. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271518,271518#msg-271518 From lists at lazygranch.com Wed Dec 14 19:15:44 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 14 Dec 2016 11:15:44 -0800 Subject: limit_req per subnet? In-Reply-To: References: <20161214004214.5435470.59907.18363@lazygranch.com> <61baf7048833e343ca49bbcb6430eb51.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161214191544.5435470.31405.18410@lazygranch.com> ?They claim to obey robots.txt. 
They also claim to use consecutive IP addresses. https://www.semrush.com/bot/ Some dated posts (2011) indicate semrush uses AWS. I block all of AWS IP space and can say I've never seen a semrush bot. So that might be a solution. I got the AWS IP space from some Amazon Web page. I get a bit of kickback about blocking things that are not eyeballs, like colos and VPS, but it works for me. I only block after seeing a hacking attempt(s) in my logs. Original Message From: Grant Sent: Wednesday, December 14, 2016 10:31 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit_req per subnet? > I am curious what is the request uri they was hitting. Was it a dynamic page > or file or a static one. It was semrush and it was all manner of dynamic pages. - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Wed Dec 14 19:49:04 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 14 Dec 2016 11:49:04 -0800 Subject: limit_req per subnet? In-Reply-To: References: <20161214004214.5435470.59907.18363@lazygranch.com> Message-ID: <20161214194904.5435470.10265.18412@lazygranch.com> I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But my other post indicates semrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS since no major search engine uses them. Regarding search engines, the reality is only Google matters. Just look at your logs. That said, I allow Google, Yahoo, and Bing. But Yahoo/Bing isn't even 5% of Google traffic. Everything else I block. Majestic (MJ12) is just ridiculous. I allow the anti-virus companies to poke around, though I can't figure out what exactly their probes accomplish. Often Intel/McAfee just pings the server, perhaps to survey the hosting software and revision. Good advertising for nginx! Original Message From: Grant Sent: Wednesday, December 14, 2016 10:30 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit_req per subnet? > Did you see if the IPs were from an ISP? If not, I'd ban the service using the Hurricane Electric BGP as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them. What sort of sites run into problems after doing that? I'm sure some > sites need to allow cloud services to access them. A startup search > engine could be run from such a service.
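For anyone who wants to try the fail2ban route mentioned above, here is a rough, untested sketch of banning the whole /24 at the firewall once nginx's limit_req starts logging excesses. The jail thresholds and the action file name are made up for illustration; the nginx-limit-req filter ships with recent fail2ban versions and matches the "limiting requests" lines in the error log:

    # /etc/fail2ban/jail.local
    [nginx-limit-req]
    enabled  = true
    filter   = nginx-limit-req
    action   = iptables-subnet24[name=ReqLimit]
    logpath  = /var/log/nginx/error.log
    findtime = 600
    maxretry = 20
    bantime  = 3600

    # /etc/fail2ban/action.d/iptables-subnet24.conf (hypothetical action,
    # derived from the stock iptables action)
    [Definition]
    actionstart = iptables -N f2b-<name>
                  iptables -A f2b-<name> -j RETURN
                  iptables -I INPUT -p tcp -j f2b-<name>
    actionstop  = iptables -D INPUT -p tcp -j f2b-<name>
                  iptables -F f2b-<name>
                  iptables -X f2b-<name>
    actionban   = iptables -I f2b-<name> 1 -s <ip>/24 -j DROP
    actionunban = iptables -D f2b-<name> -s <ip>/24 -j DROP

The only real change from the stock action is the "/24" on the ban/unban lines, which masks the offending address to its whole quad, so ten sequential attackers end up behind one firewall rule.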
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Dec 14 20:22:53 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 14 Dec 2016 15:22:53 -0500 Subject: limit_req per subnet? In-Reply-To: References: Message-ID: <8454c2b3afba9262876d20a0e51406e5.NginxMailingListEnglish@forum.nginx.org> I rate limit them using the user-agent Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271524#msg-271524 From nginx-forum at forum.nginx.org Wed Dec 14 21:26:18 2016 From: nginx-forum at forum.nginx.org (hemendra26) Date: Wed, 14 Dec 2016 16:26:18 -0500 Subject: nginx x-accel-redirect request method named location In-Reply-To: References: Message-ID: <1137976cee0d14c5bf077dfd5eb2bb27.NginxMailingListEnglish@forum.nginx.org> hemendra26 Wrote: ------------------------------------------------------- > I was using nginx x-accel-redirect as an authentication frontend for > an external db resource. > > In my python code I would do the following: > > /getresource/ > > def view(self, req, resp): > name = get_dbname(req.user.id) > resp.set_header('X-Accel-Redirect', '/resource/%s/' %name ) > > This would forward the HTTP method as well until nginx 1.10 > Since nginx 1.10 all x-accel-redirects are forwarded as GET methods. > > From this thread: > https://forum.nginx.org/read.php?2,271372,271380#msg-271380 > > I understand that the correct way to forward the HTTP method is to use > named location. > I am unable to find documentation on how this should be done. > I tried the following: > > def view(self, req, resp): > name = get_dbname(req.user.id) > resp.set_header('X-Accel-Redirect', '@resource' ) > > but this redirects to @resource / > I would like to redirect to @resource /name Edit: Posting configs for nginx, hope someone can help. location /getresource { proxy_pass http://127.0.0.1:8000; } location /resource { internal; proxy_pass http://127.0.0.1:8888; } location @resource { internal; proxy_pass http://127.0.0.1:8888; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271448,271528#msg-271528 From emailgrant at gmail.com Wed Dec 14 21:58:06 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 14 Dec 2016 13:58:06 -0800 Subject: limit_req per subnet? In-Reply-To: <20161214194904.5435470.10265.18412@lazygranch.com> References: <20161214004214.5435470.59907.18363@lazygranch.com> <20161214194904.5435470.10265.18412@lazygranch.com> Message-ID: > I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But my other post indicates semrush resides on AWS, so just block AWS. I doubt there is any harm in blocking AWS since no major search engine uses them. > > Regarding search engines, the reality is only Google matters. Just look at your logs. That said, I allow Google, yahoo, and Bing. But yahoo/bing isn't even 5% of Google traffic. Everything else I block. Majestic (MJ12) is just ridiculous. I allow the anti-virus companies to poke around, though I can't figure out what exactly their probes accomplish. Often Intel/McAfee just pings the server, perhaps to survey hosting software and revision. Good advertising for nginx! I would really prefer not to block cloud services. It sounds like an admin headache down the road. nginx limit_req works great for a single IP attacker, but all it takes is 3 IPs for an attacker to triple his allowable rate, even from sequential IPs? I'm surprised there's no way to combat this. 
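For what it's worth, limit_req itself can also be keyed on something coarser than the full client address. A rough, untested sketch that groups IPv4 clients by their /24 before the limit is applied -- the zone name, zone size, rate and burst are placeholders, and IPv6 clients simply fall back to per-address limiting:

    # http{} context
    map $remote_addr $addr_block {
        default                           $remote_addr;
        "~^(?P<net>\d+\.\d+\.\d+)\.\d+$"  $net;
    }

    limit_req_zone $addr_block zone=per24:10m rate=10r/s;

    server {
        location / {
            limit_req zone=per24 burst=20 nodelay;
            # ...
        }
    }

With a key like that, ten sequential addresses from the same /24 share one bucket instead of getting ten times the allowance.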
- Grant >> Did you see if the IPs were from an ISP? If not, I'd ban the service using the Hurricane Electric BGP as a guide. At a minimum, you should be blocking the major cloud services, especially OVH. They offer free trial accounts, so of course the hackers abuse them. > > > What sort of sites run into problems after doing that? I'm sure some > sites need to allow cloud services to access them. A startup search > engine could be run from such a service. > > >> If the attack was from an ISP, I can visualize a fail2ban scheme blocking the last quad not being too hard to implement . That is block xxx.xxx.xxx.0/24. ? Or maybe just let a typical fail2ban set up do your limiting and don't get fancy about the IP range. >> >> I try "traffic management" at the firewall first. As I discovered with "deny" ?in nginx, much CPU work is still done prior to ignoring the request. (I don't recall the details exactly, but there is a thread I started on the topic in this list.) Better to block via the firewall since you will be running one anyway. > > > It sounds like limit_req in nginx does not have any way to do this. > How would you accomplish this in fail2ban? > > >> I recently suffered DoS from a series of 10 sequential IP addresses. >> limit_req would have dealt with the problem if a single IP address had >> been used. Can it be made to work in a situation like this where a >> series of sequential IP addresses are in play? Maybe per subnet? From emailgrant at gmail.com Wed Dec 14 22:01:27 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 14 Dec 2016 14:01:27 -0800 Subject: limit_req per subnet? In-Reply-To: <8454c2b3afba9262876d20a0e51406e5.NginxMailingListEnglish@forum.nginx.org> References: <8454c2b3afba9262876d20a0e51406e5.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I rate limit them using the user-agent Maybe this is the best solution, although of course it doesn't rate limit real attackers. Is there a good method for monitoring which UAs request pages above a certain rate so I can write a limit for them? - Grant From emailgrant at gmail.com Wed Dec 14 22:15:16 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 14 Dec 2016 14:15:16 -0800 Subject: limit_req per subnet? In-Reply-To: References: <8454c2b3afba9262876d20a0e51406e5.NginxMailingListEnglish@forum.nginx.org> Message-ID: >> I rate limit them using the user-agent > > > Maybe this is the best solution, although of course it doesn't rate > limit real attackers. Is there a good method for monitoring which UAs > request pages above a certain rate so I can write a limit for them? Actually, is there a way to limit rate by UA on the fly? If so, can I do that and somehow avoid limiting multiple legitimate browsers with the same UA? - Grant From lists at lazygranch.com Thu Dec 15 00:06:44 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 14 Dec 2016 16:06:44 -0800 Subject: limit_req per subnet? In-Reply-To: References: <8454c2b3afba9262876d20a0e51406e5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161215000644.5435470.54023.18430@lazygranch.com> By the time you get to UA, nginx has done a lot of work.? You could 444 based on UA, then read that code in the log file with fail2ban or a clever script. ?That way you can block them at the firewall. It won't help immediately with the sequential number, but that really won't be a problem.? ? Original Message ? From: Grant Sent: Wednesday, December 14, 2016 2:15 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit_req per subnet? 
>> I rate limit them using the user-agent > > > Maybe this is the best solution, although of course it doesn't rate > limit real attackers. Is there a good method for monitoring which UAs > request pages above a certain rate so I can write a limit for them? Actually, is there a way to limit rate by UA on the fly? If so, can I do that and somehow avoid limiting multiple legitimate browsers with the same UA? - Grant _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Dec 15 01:24:33 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 14 Dec 2016 20:24:33 -0500 Subject: limit_req per subnet? In-Reply-To: References: Message-ID: <5473e3a6441c289581cc374872521b02.NginxMailingListEnglish@forum.nginx.org> I've inplemented something based on https://community.centminmod.com/threads/blocking-bad-or-aggressive-bots.6433/ Works perfectly fine for me. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271535#msg-271535 From nginx-forum at forum.nginx.org Thu Dec 15 04:04:12 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 14 Dec 2016 23:04:12 -0500 Subject: limit_req per subnet? In-Reply-To: References: Message-ID: <0a6401fdb3707f9b0b3da6feecc6c620.NginxMailingListEnglish@forum.nginx.org> proxy_cache / fastcgi_cache the pages output will help. Flood all you want Nginx handles flooding and lots of connections fine your back end is your weakness / bottleneck that is allowing them to be successful in effecting your service. You could also use the secure_link module to help on your index.php or .html what ever it is you have going on that generates the link they are attacking, You can generate a unique hash that expires for that IP only. There are allot of solutions. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271537#msg-271537 From lists at lazygranch.com Thu Dec 15 08:11:41 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 15 Dec 2016 00:11:41 -0800 Subject: limit_req per subnet? In-Reply-To: <5473e3a6441c289581cc374872521b02.NginxMailingListEnglish@forum.nginx.org> References: <5473e3a6441c289581cc374872521b02.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161215081141.5435470.75362.18442@lazygranch.com> This is an interesting bit of code. However if you are being ddos-ed, this just eliminates nginx from replying. It isn't like nginx is isolated from the attack. I would still rather block the IP at the firewall and prevent nginx fr?om doing any action.? The use of $bot_agent opens up a lot of possibilities of the value can be fed to the log file. ? Original Message ? From: shiz Sent: Wednesday, December 14, 2016 5:24 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit_req per subnet? I've inplemented something based on https://community.centminmod.com/threads/blocking-bad-or-aggressive-bots.6433/ Works perfectly fine for me. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271535#msg-271535 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From maxim at nginx.com Thu Dec 15 09:28:08 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 15 Dec 2016 12:28:08 +0300 Subject: access_logging in the stream block In-Reply-To: <7f4e7854240a65e7c80b656721491f04.NginxMailingListEnglish@forum.nginx.org> References: <7f4e7854240a65e7c80b656721491f04.NginxMailingListEnglish@forum.nginx.org> Message-ID: <97ffdec3-0e9d-c8a0-8946-e2758dd27697@nginx.com> Hello, On 12/14/16 10:13 PM, kms-pt wrote: > Hello, > > Just wondering if anyone knows if access_logs are able to be configured in > the stream block. We are looking to implement TCP stream which works but > also have the requirement of logging the connections, transactions, etc. I > know error_log can be enabled but I have found no documentation stating > access_log will work. > > We have confirmed connections via nginx will work and can connect to the > backend service but no actions are logged. > > > Sample config: > > stream { > server { > listen 12345; > proxy_pass servername:12345; > > } > } > > I have tried adding access_log but only get the error: > Starting nginx: nginx: [emerg] "access_log" directive is not allowed here in > /etc/nginx/nginx.conf:22 > > I also tried adding a log_format section in the event that was required. Any > advice/suggestions welcome. > You are probably using an old version of nginx -- ngx_stream_log_module was added in nginx-1.11.4. -- Maxim Konovalov From nginx-forum at forum.nginx.org Thu Dec 15 09:38:18 2016 From: nginx-forum at forum.nginx.org (miracle.max) Date: Thu, 15 Dec 2016 04:38:18 -0500 Subject: cache worker stops evicting assets Message-ID: Hello there! We currently have this issue: when we restart nginx, the cache zone's disk consumption rises constantly until we reach 84h after the restart; at that point nginx locks up and starts deleting, and after 15-30m everything starts working as usual and the cache worker behaves as expected until we do another restart. Our current configuration is proxy_cache_path /var/cache/nginx/assets levels=2:2 keys_zone=assets:512m inactive=84h max_size=81920m use_temp_path=off loader_files=1000 loader_sleep=50ms loader_threshold=300ms; We currently have ~2 million objects consuming ~40G. Could it be that the cache loader worker can't keep up with all those objects after the restart? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271544,271544#msg-271544 From nginx-forum at forum.nginx.org Thu Dec 15 10:04:33 2016 From: nginx-forum at forum.nginx.org (evgeny.morokin) Date: Thu, 15 Dec 2016 05:04:33 -0500 Subject: Nginx to Nginx TCP Fast Open Message-ID: <88361a0a9eaf076754d6ecbd2baa0f5d.NginxMailingListEnglish@forum.nginx.org> Hi, can someone clarify - if TFO is properly enabled on both systems, the Nginx reverse proxy and the Nginx upstream, will they use it when communicating with each other or not? Have a great day, Evgeny Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271545,271545#msg-271545 From nginx-forum at forum.nginx.org Thu Dec 15 10:23:15 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 15 Dec 2016 05:23:15 -0500 Subject: Re: limit_req per subnet?
In-Reply-To: <20161215081141.5435470.75362.18442@lazygranch.com> References: <20161215081141.5435470.75362.18442@lazygranch.com> Message-ID: <2a56ad507715de7e1001596647f25437.NginxMailingListEnglish@forum.nginx.org> gariac Wrote: ------------------------------------------------------- > This is an interesting bit of code. However if you are being ddos-ed, > this just eliminates nginx from replying. It isn't like nginx is > isolated from the attack. I would still rather block the IP at the > firewall and prevent nginx from doing any action. > > The use of $bot_agent opens up a lot of possibilities of the value can > be fed to the log file. > Original Message > From: shiz > Sent: Wednesday, December 14, 2016 5:24 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: limit_req per subnet? > > I've inplemented something based on > https://community.centminmod.com/threads/blocking-bad-or-aggressive-bo > ts.6433/ > > Works perfectly fine for me. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,271483,271535#msg-271535 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Any layer 7 attack that Nginx starts struggling to accept connections under is a successful one, and at that point it should be blocked at the router level. But Nginx handles a lot of connections very well -- that is why the limit_conn and limit_req modules exist -- and for the majority of layer 7 attacks Nginx won't have a problem denying them itself. The bottlenecks are backend processes like MySQL, PHP and Python. If they clog up while accepting traffic, Nginx will run out of connections available to keep serving other requests for different files / paths on the server. See http://nginx.org/en/docs/ngx_core_module.html#worker_connections -- that is the cause of your entire Nginx server going slow / unresponsive; at that point even the 503 and 500x errors won't display, all connections begin to time out, and that is when you should block the IPs exhausting Nginx's connections at the router level, since Nginx can no longer cope. Nginx has a small footprint in the resources it uses, so for layer 7 attacks you should only start blocking at the router level when Nginx can no longer handle them fine on its own and begins timing out due to worker_connections getting exhausted. But it is rare that an attack is large enough to exhaust those, and you can increase worker_connections and decrease timeout values to fix that easily. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271546#msg-271546 From nginx-forum at forum.nginx.org Thu Dec 15 10:36:43 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Thu, 15 Dec 2016 05:36:43 -0500 Subject: Re: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <20161214143126.GI18639@mdounin.ru> References: <20161214143126.GI18639@mdounin.ru> Message-ID: <33e144cc943d69c840da07da48fa83f8.NginxMailingListEnglish@forum.nginx.org> Hello :-) Maxim Dounin Wrote: ------------------------------------------------------- > Yes, but there isn't much difference: as long as httpready sees > something different from a HTTP request, it just passes the > connection to nginx.
> > Quoting accf_http(9): > > If something other than a HTTP/1.0 or HTTP/1.1 HEAD or GET > request is > received the kernel will allow the application to receive the > connection > descriptor via accept(). > > So the only difference is an additional check in the kernel. Thanks Maxim! Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271346,271548#msg-271548 From ruz at sports.ru Thu Dec 15 11:21:21 2016 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Thu, 15 Dec 2016 14:21:21 +0300 Subject: nginx x-accel-redirect request method named location In-Reply-To: References: Message-ID: On Sat, Dec 10, 2016 at 9:08 PM, hemendra26 wrote: > I was using nginx x-accel-redirect as an authentication frontend for an > external db resource. > > In my python code I would do the following: > > /getresource/ > > def view(self, req, resp): > name = get_dbname(req.user.id) > resp.set_header('X-Accel-Redirect', '/resource/%s/' %name ) > > This would forward the HTTP method as well until nginx 1.10 > Since nginx 1.10 all x-accel-redirects are forwarded as GET methods. > > From this thread: > https://forum.nginx.org/read.php?2,271372,271380#msg-271380 > > I understand that the correct way to forward the HTTP method is to use > named > location. > I am unable to find documentation on how this should be done. > I tried the following: > > def view(self, req, resp): > name = get_dbname(req.user.id) > resp.set_header('X-Accel-Redirect', '@resource' ) > > but this redirects to @resource / > I would like to redirect to @resource /name > Hi, Here what you do. As you can not use X-Accel-Redirect to set different location, you should set other header with location and in nginx config do something like this: location @resources { set $stored_real_location $upstream_http_x_real_location; proxy_pass http://resources-backend$stored_real_location; } In example above Python code should set the following headers: X-Accel-Redirect: @resources X-Real-Location: /some/other/path... Does it help? > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271448,271448#msg-271448 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruz at sports.ru Thu Dec 15 11:30:17 2016 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Thu, 15 Dec 2016 14:30:17 +0300 Subject: rewrite cycle Message-ID: Hi, Below is default foswiki config that falls into "rewrite or internal redirection cycle while processing "/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/Main/WebHome"". This is Nginx 1.11.6. Any ideas? 
location = / { root $foswiki_root; rewrite .* /Main/WebHome; } location ~ ^/([A-Z_].*)$ { rewrite ^/(.*)$ /bin/view/$1; } location ~ ^/bin/([a-z]+) { fastcgi_param SCRIPT_NAME $1; gzip off; #fastcgi_pass unix:/var/run/nginx/foswiki.sock; fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(/bin/\w+)(.*); fastcgi_param SCRIPT_FILENAME $foswiki_root/$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } Debug log: 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: "/" 2016/12/15 14:24:02 [debug] 15695#0: *6 using configuration "=/" 2016/12/15 14:24:02 [debug] 15695#0: *6 http cl:-1 max:104857600 2016/12/15 14:24:02 [debug] 15695#0: *6 rewrite phase: 2 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex: ".*" 2016/12/15 14:24:02 [notice] 15695#0: *6 ".*" matches "/", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", host: " wiki.sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script copy: "/Main/WebHome" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex end 2016/12/15 14:24:02 [notice] 15695#0: *6 rewritten data: "/Main/WebHome", args: "", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", hos t: "wiki.sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 post rewrite phase: 3 2016/12/15 14:24:02 [debug] 15695#0: *6 uri changes: 11 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: "/" 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: ~ "^/([A-Z_].*)$" 2016/12/15 14:24:02 [debug] 15695#0: *6 using configuration "^/([A-Z_].*)$" 2016/12/15 14:24:02 [debug] 15695#0: *6 http cl:-1 max:104857600 2016/12/15 14:24:02 [debug] 15695#0: *6 rewrite phase: 2 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex: "^/(.*)$" 2016/12/15 14:24:02 [notice] 15695#0: *6 "^/(.*)$" matches "/Main/WebHome", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", host: "wiki .sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script copy: "/bin/view/" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script capture: "Main/WebHome" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex end 2016/12/15 14:24:02 [notice] 15695#0: *6 rewritten data: "/bin/view/Main/WebHome", args: "", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", host: "wiki.sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 post rewrite phase: 3 2016/12/15 14:24:02 [debug] 15695#0: *6 uri changes: 10 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: "/" 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: ~ "^/([A-Z_].*)$" 2016/12/15 14:24:02 [debug] 15695#0: *6 using configuration "^/([A-Z_].*)$" 2016/12/15 14:24:02 [debug] 15695#0: *6 http cl:-1 max:104857600 2016/12/15 14:24:02 [debug] 15695#0: *6 rewrite phase: 2 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex: "^/(.*)$" 2016/12/15 14:24:02 [notice] 15695#0: *6 "^/(.*)$" matches "/bin/view/Main/WebHome", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", host: "wiki.sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script copy: "/bin/view/" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script capture: "bin/view/Main/WebHome" 2016/12/15 14:24:02 [debug] 15695#0: *6 http script regex end 2016/12/15 14:24:02 [notice] 15695#0: *6 rewritten data: "/bin/view/bin/view/Main/WebHome", args: "", client: 127.0.0.1, server: wiki.sports.ru, request: "GET / HTTP/1.1", host: "wiki.sports.ru" 2016/12/15 14:24:02 [debug] 15695#0: *6 post rewrite phase: 3 -- ?????? ??????? 
???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Dec 15 12:11:52 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 15 Dec 2016 17:11:52 +0500 Subject: Supernet issues in nginx geo !! Message-ID: Hi, We're using nginx geo module to redirect requests from specific subnets/supernets. If request is coming from following ips, it should be redirected towards caching node : geo $qwilt_user { default 0; 182.184.25.66/32 1; 103.28.152.0/22 1; 203.135.0.0/18 1; 203.99.0.0/16 1; 116.71.0.0/16 1; 59.103.0.0/16 1; 119.152.0.0/13 1; *39.32.0.0/11 1;* The critical problem here now is that if request is coming from ip 39.45.111.1 its not redirecting but you can clearly see from subnet bit above which is /11 , means subnets from 39.32.X.X all the way to 39.63.X.X are summed up within subnet 39.32.0.0/11 & supposed to be redirect. Is this limitation of the geo module that it is not supporting supernetting ? Thanks in advance !! Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Dec 15 12:35:28 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 15 Dec 2016 17:35:28 +0500 Subject: Supernet issues in nginx geo !! In-Reply-To: References: Message-ID: Here is the break down of subnet 39.32.0.0/11 : http://prntscr.com/djq88m According to this, geo policy should be apply to 39.45.X.X as well but its not . On Thu, Dec 15, 2016 at 5:11 PM, shahzaib mushtaq wrote: > Hi, > > We're using nginx geo module to redirect requests from specific > subnets/supernets. If request is coming from following ips, it should be > redirected towards caching node : > > geo $qwilt_user { > default 0; > 182.184.25.66/32 1; > 103.28.152.0/22 1; > 203.135.0.0/18 1; > 203.99.0.0/16 1; > 116.71.0.0/16 1; > 59.103.0.0/16 1; > 119.152.0.0/13 1; > *39.32.0.0/11 1;* > > The critical problem here now is that if request is coming from ip > 39.45.111.1 its not redirecting but you can clearly see from subnet bit > above which is /11 , means subnets from 39.32.X.X all the way to 39.63.X.X > are summed up within subnet 39.32.0.0/11 & supposed to be redirect. > > Is this limitation of the geo module that it is not supporting > supernetting ? > > Thanks in advance !! > > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Dec 15 13:06:38 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 15 Dec 2016 18:06:38 +0500 Subject: Supernet issues in nginx geo !! In-Reply-To: References: Message-ID: Hi, Sorry guys it was my bad. We have policy to only redirect HTTPS requests, though the request coming from this ip was with HTTP hence no map policy on it. In short, things are working as expected :) Shahzaib On Thu, Dec 15, 2016 at 5:35 PM, shahzaib mushtaq wrote: > Here is the break down of subnet 39.32.0.0/11 : > > http://prntscr.com/djq88m > > According to this, geo policy should be apply to 39.45.X.X as well but its > not . > > On Thu, Dec 15, 2016 at 5:11 PM, shahzaib mushtaq > wrote: > >> Hi, >> >> We're using nginx geo module to redirect requests from specific >> subnets/supernets. 
If request is coming from following ips, it should be >> redirected towards caching node : >> >> geo $qwilt_user { >> default 0; >> 182.184.25.66/32 1; >> 103.28.152.0/22 1; >> 203.135.0.0/18 1; >> 203.99.0.0/16 1; >> 116.71.0.0/16 1; >> 59.103.0.0/16 1; >> 119.152.0.0/13 1; >> *39.32.0.0/11 1;* >> >> The critical problem here now is that if request is coming from ip >> 39.45.111.1 its not redirecting but you can clearly see from subnet bit >> above which is /11 , means subnets from 39.32.X.X all the way to 39.63.X.X >> are summed up within subnet 39.32.0.0/11 & supposed to be redirect. >> >> Is this limitation of the geo module that it is not supporting >> supernetting ? >> >> Thanks in advance !! >> >> Shahzaib >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Dec 15 13:18:55 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 15 Dec 2016 13:18:55 +0000 Subject: rewrite cycle In-Reply-To: References: Message-ID: <20161215131855.GK2958@daoine.org> On Thu, Dec 15, 2016 at 02:30:17PM +0300, ?????? ??????? wrote: Hi there, > Below is default foswiki config that falls into "rewrite or internal > redirection cycle while processing > "/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/Main/WebHome"". > > This is Nginx 1.11.6. > > Any ideas? It seems to do what it is configured to do. What do you want it to do instead? > location = / { > root $foswiki_root; > rewrite .* /Main/WebHome; > } > location ~ ^/([A-Z_].*)$ { > rewrite ^/(.*)$ /bin/view/$1; > } One possibility it to change that location regex, which is currently "everything", to instead be only what is wanted. > location ~ ^/bin/([a-z]+) { Another possibility is to move this location above the "regex everything" one, so that most things that start with /bin/ will use this instead of the other. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Dec 15 13:48:42 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2016 16:48:42 +0300 Subject: cache worker stops evicting assets In-Reply-To: References: Message-ID: <20161215134841.GQ18639@mdounin.ru> Hello! On Thu, Dec 15, 2016 at 04:38:18AM -0500, miracle.max wrote: > Hello there! we currently have this issue when we restart nginx, the cache > zone disk consume rise constantly until we reach the 84h after the restart, > here nginx locks and start deleting, after 15-30m everything starts working > as usual and the cache worker behaves as expected until we do another > restart. > > Our current configuration is > > proxy_cache_path /var/cache/nginx/assets levels=2:2 keys_zone=assets:512m > inactive=84h max_size=81920m use_temp_path=off > loader_files=1000 loader_sleep=50ms > loader_threshold=300ms; > > We currently have ?2 million object consuming ?40G. > Could be that cache loader worker cant keep up with all those objects after > the restart? Inactive times are only recorded in memory and therefore are lost on restart. As a result nothing will be deleted from cache based on inactive time for 84 hours after a restart (note "inactive=84h" in your configuration). Cache manager will still monitor cache size and remove oldest items based on max_size configured, but it doesn't look like you reach it. After 84 hours cache manager will start removing items that were not accessed since restart. 
If this implies significant load in your setup, consider looking at manager_files, manager_sleep, and manager_threshold parameters of the proxy_cache_path directive as introduced in nginx 1.11.5. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 15 13:53:16 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2016 16:53:16 +0300 Subject: Nginx to Nginx TCP Fast Open In-Reply-To: <88361a0a9eaf076754d6ecbd2baa0f5d.NginxMailingListEnglish@forum.nginx.org> References: <88361a0a9eaf076754d6ecbd2baa0f5d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161215135316.GR18639@mdounin.ru> Hello! On Thu, Dec 15, 2016 at 05:04:33AM -0500, evgeny.morokin wrote: > Hi, can someone clarify - If TFO is properly enabled on both systems Nginx > reverse-proxy and Nginx upstream, will both use it in communication between > each other or not. No. nginx is able to handle requests with TFO (if configured with the "fastopen" parameter of the "listen" directive, http://nginx.org/r/listen), but it doesn't try to use TFO in requests to upstream servers. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 15 14:08:39 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2016 17:08:39 +0300 Subject: rewrite cycle In-Reply-To: References: Message-ID: <20161215140839.GS18639@mdounin.ru> Hello! On Thu, Dec 15, 2016 at 02:30:17PM +0300, ?????? ??????? wrote: > Below is default foswiki config that falls into "rewrite or internal > redirection cycle while processing > "/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/bin/view/Main/WebHome"". > > This is Nginx 1.11.6. > > Any ideas? > > location = / { > root $foswiki_root; > rewrite .* /Main/WebHome; > } > location ~ ^/([A-Z_].*)$ { > rewrite ^/(.*)$ /bin/view/$1; > } > location ~ ^/bin/([a-z]+) { [...] > 2016/12/15 14:24:02 [notice] 15695#0: *6 rewritten data: "/bin/view/Main/WebHome", ... [...] > 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: ~ "^/([A-Z_].*)$" > 2016/12/15 14:24:02 [debug] 15695#0: *6 using configuration "^/([A-Z_].*)$" The configuration in question relies on case-sensitive location matching and won't work correctly with case-insensitive location matching nginx uses on Windows and macOS. As per the debug log, it looks like case-insensitive location matching is used in your case. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Dec 15 15:00:03 2016 From: nginx-forum at forum.nginx.org (evgeny.morokin) Date: Thu, 15 Dec 2016 10:00:03 -0500 Subject: Nginx to Nginx TCP Fast Open In-Reply-To: <20161215135316.GR18639@mdounin.ru> References: <20161215135316.GR18639@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > No. nginx is able to handle requests with TFO (if configured > with the "fastopen" parameter of the "listen" directive, > http://nginx.org/r/listen), but it doesn't try to use TFO in > requests to upstream servers. Maxim, thank you for the exact answer, are you planning to add this feature in the future? Best regards, Evgeny Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271545,271560#msg-271560 From ruz at sports.ru Thu Dec 15 15:09:54 2016 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Thu, 15 Dec 2016 18:09:54 +0300 Subject: rewrite cycle In-Reply-To: <20161215140839.GS18639@mdounin.ru> References: <20161215140839.GS18639@mdounin.ru> Message-ID: On Thu, Dec 15, 2016 at 5:08 PM, Maxim Dounin wrote: > [...] 
> > > 2016/12/15 14:24:02 [notice] 15695#0: *6 rewritten data: > "/bin/view/Main/WebHome", ... > > [...] > > > 2016/12/15 14:24:02 [debug] 15695#0: *6 test location: ~ "^/([A-Z_].*)$" > > 2016/12/15 14:24:02 [debug] 15695#0: *6 using configuration > "^/([A-Z_].*)$" > > The configuration in question relies on case-sensitive location > matching and won't work correctly with case-insensitive location > matching nginx uses on Windows and macOS. As per the debug log, > it looks like case-insensitive location matching is used in your > case. > Yep, it's Mac OS. Solved by turning off case insensitivity with (?-i). location ~ ^(?-i)/[A-Z_].*$ { ... } Didn't know about this aspect of nginx on Mac and Win. -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Dec 15 15:32:08 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2016 18:32:08 +0300 Subject: Nginx to Nginx TCP Fast Open In-Reply-To: References: <20161215135316.GR18639@mdounin.ru> Message-ID: <20161215153207.GU18639@mdounin.ru> Hello! On Thu, Dec 15, 2016 at 10:00:03AM -0500, evgeny.morokin wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > No. nginx is able to handle requests with TFO (if configured > > with the "fastopen" parameter of the "listen" directive, > > http://nginx.org/r/listen), but it doesn't try to use TFO in > > requests to upstream servers. > > Maxim, thank you for the exact answer, are you planning to add this feature > in the future? No, there are no such plans. -- Maxim Dounin http://nginx.org/ From thomas at glanzmann.de Thu Dec 15 16:08:41 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 15 Dec 2016 17:08:41 +0100 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect Message-ID: <20161215160841.GA21203@glanzmann.de> Hello, I would like to use ngx_stream_ssl_preread_module to multiplex between a squid, nginx webserver and ocserv (ssl vpn). I setup nginx the following way: stream { upstream webserver { server 127.0.0.1:443; } upstream squidtls { server 127.0.0.1:8081; } upstream ocserv { server 88.198.249.254:4443; } map $ssl_preread_server_name $name { proxy.glanzmann.de squidtls; vpn.gmvl.de ocserv; default webserver; } server { proxy_protocol on; listen 88.198.249.254:443; listen [2a01:4f8:b0:2fff::2]:443; proxy_pass $name; ssl_preread on; } } For the webserver and squid it works like a charm. However when I connect using 'openconnect' I get the ssl certificate of the webserver, but should get the ssl certificate of the ocserv. I verified using tcpdump and wireshark that openconnect sets the servername correctly. How can I debug this? Is it possible to tell nginx to be more verbose so that I can see if it extracts the SNI string of openconnect correctly or see that maybe nginx is unable to conenct to the ocserv and falls back to the default? 
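One way to surface what ssl_preread actually extracted is to log it from the stream block. A sketch only -- access_log and log_format inside stream{} need nginx 1.11.4 or newer, the debug level needs a --with-debug build, and the log paths and the format name here are invented ($name is the variable from the map above):

    stream {
        log_format sni '$remote_addr [$time_local] sni="$ssl_preread_server_name" upstream="$name"';
        access_log /var/log/nginx/stream-access.log sni;
        error_log  /var/log/nginx/stream-error.log debug;  # the debug log contains the "ssl preread:" messages

        # ... existing upstream, map and server blocks ...
    }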
Cheers, Thomas From arut at nginx.com Thu Dec 15 16:20:19 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 15 Dec 2016 19:20:19 +0300 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215160841.GA21203@glanzmann.de> References: <20161215160841.GA21203@glanzmann.de> Message-ID: <20161215162019.GP35892@Romans-MacBook-Air.local> Hi Thomas, On Thu, Dec 15, 2016 at 05:08:41PM +0100, Thomas Glanzmann wrote: > Hello, > I would like to use ngx_stream_ssl_preread_module to multiplex between a > squid, nginx webserver and ocserv (ssl vpn). I setup nginx the following > way: > > stream { > upstream webserver { > server 127.0.0.1:443; > } > > upstream squidtls { > server 127.0.0.1:8081; > } > > upstream ocserv { > server 88.198.249.254:4443; > } > > map $ssl_preread_server_name $name { > proxy.glanzmann.de squidtls; > vpn.gmvl.de ocserv; > default webserver; > } > > server { > proxy_protocol on; > listen 88.198.249.254:443; > listen [2a01:4f8:b0:2fff::2]:443; > > proxy_pass $name; > ssl_preread on; > } > } > > For the webserver and squid it works like a charm. However when I connect using > 'openconnect' I get the ssl certificate of the webserver, but should get the ssl > certificate of the ocserv. I verified using tcpdump and wireshark that > openconnect sets the servername correctly. How can I debug this? > > Is it possible to tell nginx to be more verbose so that I can see if it > extracts the SNI string of openconnect correctly or see that maybe nginx > is unable to conenct to the ocserv and falls back to the default? You can try logging $ssl_preread_server_name in access_log. And it can be a good idea to watch the debug log for ssl preread messages. -- Roman Arutyunyan From thomas at glanzmann.de Thu Dec 15 16:22:00 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 15 Dec 2016 17:22:00 +0100 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215160841.GA21203@glanzmann.de> References: <20161215160841.GA21203@glanzmann.de> Message-ID: <20161215162200.GB21203@glanzmann.de> Hello, > How can someone debug ngx_stream_ssl_preread_module? put the following line in the stream section: error_log /var/log/nginx/sni_error.log debug; Once done I found out that 2016/12/15 17:09:00 [error] 21043#0: *7426 recv() failed (104: Connection reset by peer) while proxying connection, client: 17.198.249.166, server: 88.198.249.254:443, upstream: "88.198.249.254:4443", bytes from/to client:0/0, bytes from/to upstream:0/316 And in my syslog I found out: daemon:Dec 15 17:09:00 infra ocserv[21622]: worker: worker-proxyproto.c:156: proxy-hdr: invalid v2 header daemon:Dec 15 17:09:00 infra ocserv[21622]: worker: worker-vpn.c:560: could not parse proxy protocol header; discarding connection daemon:Dec 15 17:09:00 infra ocserv[18385]: main: 88.198.249.254:55976 user disconnected (reason: unspecified, rx: 0, tx: 0) So it seems that the problem is that ocserv can't parse nginx proxy protocol header. I'll dig deeper and report back once a solution is found. 
Cheers, Thomas From thomas at glanzmann.de Thu Dec 15 16:50:48 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 15 Dec 2016 17:50:48 +0100 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215162019.GP35892@Romans-MacBook-Air.local> References: <20161215160841.GA21203@glanzmann.de> <20161215162019.GP35892@Romans-MacBook-Air.local> Message-ID: <20161215165048.GA23728@glanzmann.de> Hello Roman, > You can try logging $ssl_preread_server_name in access_log. thank you. It seems that nginx is not able to extract the server_name from openconnect correctly: 2a01:598:8181:37ef:95e1:682:4c98:449e - [15/Dec/2016:17:45:57 +0100] "" When I connect with a browser: 2a01:598:8181:37ef:95e1:682:4c98:449e - [15/Dec/2016:17:46:20 +0100] "vpn.gmvl.de" This seems to be one problem. And another problem seems that backend communication between nginx and ocserv using the proxy protocol. Here is tcpdump of the openconnect ssl handshake with nginx: https://thomas.glanzmann.de/tmp/openconnect_sni.pcap I'm using the command line 'openconnect vpn.gmvl.de'. Cheers, Thomas From arut at nginx.com Thu Dec 15 17:22:16 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 15 Dec 2016 20:22:16 +0300 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215165048.GA23728@glanzmann.de> References: <20161215160841.GA21203@glanzmann.de> <20161215162019.GP35892@Romans-MacBook-Air.local> <20161215165048.GA23728@glanzmann.de> Message-ID: <20161215172216.GQ35892@Romans-MacBook-Air.local> Hi Thomas, On Thu, Dec 15, 2016 at 05:50:48PM +0100, Thomas Glanzmann wrote: > Hello Roman, > > > You can try logging $ssl_preread_server_name in access_log. > > thank you. It seems that nginx is not able to extract the server_name > from openconnect correctly: > > 2a01:598:8181:37ef:95e1:682:4c98:449e - [15/Dec/2016:17:45:57 +0100] "" > > When I connect with a browser: > > 2a01:598:8181:37ef:95e1:682:4c98:449e - [15/Dec/2016:17:46:20 +0100] "vpn.gmvl.de" > > This seems to be one problem. And another problem seems that backend > communication between nginx and ocserv using the proxy protocol. > > Here is tcpdump of the openconnect ssl handshake with nginx: > > https://thomas.glanzmann.de/tmp/openconnect_sni.pcap > > I'm using the command line 'openconnect vpn.gmvl.de'. Please try the attached patch. -- Roman Arutyunyan -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1481822378 -10800 # Thu Dec 15 20:19:38 2016 +0300 # Node ID 424e4b3b9c861df69360d2bf7d7efce495c27ea7 # Parent da5604455090c04fbdc2114b9de46a3bb9b30e78 Stream ssl_preread: relaxed SSL version check. SSL version 3.0 can be specified by the client at the record level for compatibility reasons. Previously, ssl_preread module rejected such connections, presuming they don't have SNI. Now SSL 3.0 is allowed at record level. 
diff --git a/src/stream/ngx_stream_ssl_preread_module.c b/src/stream/ngx_stream_ssl_preread_module.c --- a/src/stream/ngx_stream_ssl_preread_module.c +++ b/src/stream/ngx_stream_ssl_preread_module.c @@ -142,7 +142,7 @@ ngx_stream_ssl_preread_handler(ngx_strea return NGX_DECLINED; } - if (p[1] != 3 || p[2] == 0) { + if (p[1] != 3) { ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0, "ssl preread: unsupported SSL version"); return NGX_DECLINED; From thomas at glanzmann.de Thu Dec 15 21:26:29 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 15 Dec 2016 22:26:29 +0100 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215172216.GQ35892@Romans-MacBook-Air.local> References: <20161215160841.GA21203@glanzmann.de> <20161215162019.GP35892@Romans-MacBook-Air.local> <20161215165048.GA23728@glanzmann.de> <20161215172216.GQ35892@Romans-MacBook-Air.local> Message-ID: <20161215212629.GB23728@glanzmann.de> Hello Roman, > Please try the attached patch. thank you for the patch. The patch solves my SNI problem: 185.46.137.5 - [15/Dec/2016:22:25:00 +0100] "vpn.gmvl.de" Cheers, Thomas From thomas at glanzmann.de Thu Dec 15 21:36:37 2016 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 15 Dec 2016 22:36:37 +0100 Subject: Use nginx ngx_stream_ssl_preread_module to connect to ocserv using proxy protocol v2 In-Reply-To: References: <20161215163203.GA23192@glanzmann.de> Message-ID: <20161215213637.GC23728@glanzmann.de> Hello Nikos, > Are you sure that the nginx module you are using supports the proxy > protocol version 2? you're probably right. Nginx seems to support only version 1 of the proxy protocol because I can't see the binary header preamble. Can someone confirm? https://thomas.glanzmann.de/tmp/nginx.pcap Cheers, Thomas From lists at lazygranch.com Thu Dec 15 23:03:19 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 15 Dec 2016 15:03:19 -0800 Subject: limit_req per subnet? In-Reply-To: <2a56ad507715de7e1001596647f25437.NginxMailingListEnglish@forum.nginx.org> References: <20161215081141.5435470.75362.18442@lazygranch.com> <2a56ad507715de7e1001596647f25437.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161215230319.5435470.43081.18510@lazygranch.com> Here is my philosophy. A packet arrives at your server. This can be broken down into two parts: who are you and what do you want. The firewall does a fine job of stopping the hacker at the who are you point.? When the packet reaches Nginx, the what do you want part comes into play. Most likely nginx will reject it. But all software has bugs, and thus there will be zero days. Thus I rather stop the bad actor at the firewall. ? Original Message ? From: c0nw0nk Sent: Thursday, December 15, 2016 2:23 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: limit_req per subnet? gariac Wrote: ------------------------------------------------------- > This is an interesting bit of code. However if you are being ddos-ed, > this just eliminates nginx from replying. It isn't like nginx is > isolated from the attack. I would still rather block the IP at the > firewall and prevent nginx fr?om doing any action.? > > The use of $bot_agent opens up a lot of possibilities of the value can > be fed to the log file. > ? Original Message ? > From: shiz > Sent: Wednesday, December 14, 2016 5:24 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: limit_req per subnet? 
> > I've inplemented something based on > https://community.centminmod.com/threads/blocking-bad-or-aggressive-bo > ts.6433/ > > Works perfectly fine for me. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,271483,271535#msg-271535 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Any layer 7 attack that Nginx begins struggling to accept connections is a successful one and at that point should be blocked at a router level. But Nginx handles allot of connections very well hence why the limit_conn and limit_req modules exist because the majority of layer 7 attacks Nginx won't have a problem denying them itself. The bottle necks are backend processes like MySQL, PHP, Python, If they clog up accepting traffic Nginx will run out of connections available to keep serving other requests for different files / paths on the server. http://nginx.org/en/docs/ngx_core_module.html#worker_connections that is the cause to your entire Nginx server going slow / unresponsive at that point even the 503 error and 500x errors won't display, all connections begin to time out and at this point you should block those IP's exhausting Nginx's server connections at a router level since Nginx can no longer cope. Nginx has small footprint in resources used layer 7 based attacks you should only start blocking at a router level when Nginx can no longer handle them fine on its own and begins timing out due to worker_connections getting exhausted. But it is rare that a attack is large enough to exhaust those and you can increase worker_connections and decrease timeout values to fix that easily. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271546#msg-271546 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From emailgrant at gmail.com Thu Dec 15 23:50:54 2016 From: emailgrant at gmail.com (Grant) Date: Thu, 15 Dec 2016 15:50:54 -0800 Subject: limit_req per subnet? In-Reply-To: <0a6401fdb3707f9b0b3da6feecc6c620.NginxMailingListEnglish@forum.nginx.org> References: <0a6401fdb3707f9b0b3da6feecc6c620.NginxMailingListEnglish@forum.nginx.org> Message-ID: > proxy_cache / fastcgi_cache the pages output will help. Flood all you want > Nginx handles flooding and lots of connections fine your back end is your > weakness / bottleneck that is allowing them to be successful in effecting > your service. Definitely. My backend is of course the bottleneck so I'd like nginx to refrain from passing a request on to the backend if it is deemed to be part of a group of requests that should be rate limited. But there doesn't seem to be a good way to do that if the group should contain more than one IP. I think any method that groups requests by UA will require too much human monitoring. - Grant From nginx-forum at forum.nginx.org Fri Dec 16 05:03:00 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 16 Dec 2016 00:03:00 -0500 Subject: limit_req per subnet? In-Reply-To: References: Message-ID: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> That is why you cache the request. 
A DoS, or in your case a DDoS since multiple addresses are involved, is handled the same way: caching backend responses and having Nginx serve the cached copy, even if that copy is only valid for one second, will save your day. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271483,271580#msg-271580 From nginx-forum at forum.nginx.org Fri Dec 16 05:57:10 2016 From: nginx-forum at forum.nginx.org (xstation) Date: Fri, 16 Dec 2016 00:57:10 -0500 Subject: nginx.conf Message-ID: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> I entered this in the conf file under http: SetEnvIfNoCase User-Agent "^Baiduspider" block_bot Order Allow,Deny Allow from All Deny from env=block_bot but on restart I got an error message: Job for nginx.service failed. See 'systemctl status nginx.service' and 'journalctl -xn' for details. root at mail:~# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271581#msg-271581 From nginx-forum at forum.nginx.org Fri Dec 16 06:05:36 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 16 Dec 2016 01:05:36 -0500 Subject: nginx.conf In-Reply-To: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> xstation Wrote: ------------------------------------------------------- > I entered this in the conf file under http: > > SetEnvIfNoCase User-Agent "^Baiduspider" block_bot > Order Allow,Deny > Allow from All > Deny from env=block_bot > > > but on restart I got an error message: > > Job for nginx.service failed. See 'systemctl status nginx.service' and > 'journalctl -xn' for details. > root at mail:~# That is a configuration for the Apache web server. On Nginx you want to do this instead, placed inside a location {} or server {} block: if ($http_user_agent ~ "^Baiduspider") { return 403; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271582#msg-271582 From nginx-forum at forum.nginx.org Fri Dec 16 06:17:43 2016 From: nginx-forum at forum.nginx.org (xstation) Date: Fri, 16 Dec 2016 01:17:43 -0500 Subject: nginx.conf In-Reply-To: <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> Message-ID: <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> Thanks for the fast reply. Here is what I get: root at mail:~# nginx -t -c /etc/nginx/nginx.conf nginx: [emerg] "if" directive is not allowed here in /etc/nginx/nginx.conf:82 nginx: configuration file /etc/nginx/nginx.conf test failed So should the 'if' be deleted? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271583#msg-271583 From nginx-forum at forum.nginx.org Fri Dec 16 06:34:47 2016 From: nginx-forum at forum.nginx.org (xstation) Date: Fri, 16 Dec 2016 01:34:47 -0500 Subject: nginx.conf In-Reply-To: <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> If I delete the if,
I get an error root at mail:~# nginx -t -c /etc/nginx/nginx.conf nginx: [emerg] unknown directive "($http_user_agent" in /etc/nginx/nginx.conf:82 nginx: configuration file /etc/nginx/nginx.conf test failed Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271585#msg-271585 From lists at lazygranch.com Fri Dec 16 06:51:05 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 15 Dec 2016 22:51:05 -0800 Subject: nginx.conf In-Reply-To: <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161216065105.5435470.16416.18525@lazygranch.com> ?Take a look at this: ?http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html Personally, I would use the map feature since eventually there will be other user agents to block. I use three maps. I block based on requests, referrals, and ?user agents. The user agent is kind of obvious. Unwanted referrals is a personal thing. I find some websites linking to me that are pure crap like stumbleupon. I don't want their traffic. Yeah sometimes stumbleupon has a relevant link, but most of the time their links make no sense. Some sites will link to your website for SEO. Some linking is just freakin out there, like when Hamas linked to my site. (Humus I like...Hamas not so much. ) Blocking requests is useful if you want to get the IPs of hackers. I find many requests for the directory "backup."? I even have the Chinese equivalent to backup in my bad request trap. Rather than let them 404, I 444 them, and then check the IP to see if it goes to a hosting company, VPS, VPN, etc. You can't block enough IPs at the firewall in my opinion. Every IP you block that isn't an eyeball, even if harmless today, might be harmful in the future. No eyeballs, no need to view the site. ? Original Message ? From: xstation Sent: Thursday, December 15, 2016 10:35 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: nginx.conf If I delete the if! I get an error root at mail:~# nginx -t -c /etc/nginx/nginx.conf nginx: [emerg] unknown directive "($http_user_agent" in /etc/nginx/nginx.conf:82 nginx: configuration file /etc/nginx/nginx.conf test failed Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271585#msg-271585 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Dec 16 06:55:07 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 16 Dec 2016 01:55:07 -0500 Subject: nginx.conf In-Reply-To: <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <29a3fcbaf3a1477f4473963432781b40.NginxMailingListEnglish@forum.nginx.org> Provide your full config please. Also this error log. 
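For reference, the map-based blocking suggested above might look roughly like this; the map has to sit at http{} level, and the bot patterns listed are purely illustrative:

http {
    # 1 = blocked, 0 = allowed; ~* makes the match case-insensitive
    map $http_user_agent $bad_agent {
        default          0;
        ~*baiduspider    1;
        ~*someotherbot   1;   # hypothetical additional entry
    }

    server {
        listen 80;
        # "if" is valid at server{} or location{} level
        if ($bad_agent) {
            return 444;   # drop the connection without sending a response
        }
    }
}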
[emerg] "if" directive is not allowed here That means you put the code I provided in a invalid area I would assume not between location {} or server {} tags as I said. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271586#msg-271586 From forsunyq at gmail.com Fri Dec 16 07:09:14 2016 From: forsunyq at gmail.com (yanqun sun) Date: Fri, 16 Dec 2016 15:09:14 +0800 Subject: What's the meaning of Nginx variables of "$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space" Message-ID: Hi, all: I want to get the network latency between the users and my Nginx servers on tcp layer. I searched about this and found several Nginx variables bellow: $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space information about the client TCP connection; available on systems that support the TCP_INFO socket option So does anybody can make an explanation for those variables? I think *$tcpinfo_rtt* probably means the round trip time from the client to the server on tcp layer. and what about the rttvar, snd_cwnd and rcv_space? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Fri Dec 16 08:11:35 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 16 Dec 2016 11:11:35 +0300 Subject: What's the meaning of Nginx variables of "$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space" In-Reply-To: References: Message-ID: Hello, On 12/16/16 10:09 AM, yanqun sun wrote: > Hi, all: > I want to get the network latency between the users and my Nginx > servers on tcp layer. I searched about this and found several Nginx > variables bellow: > > |$tcpinfo_rtt|, |$tcpinfo_rttvar|, |$tcpinfo_snd_cwnd|, |$tcpinfo_rcv_space| > information about the client TCP connection; available on > systems that support the |TCP_INFO |socket option > > > So does anybody can make an explanation for those variables? > I think *$tcpinfo_rtt* probably means the round trip time from the > client to the server on tcp layer. > and what about the rttvar, snd_cwnd and rcv_space? > I would suggest to check any tcp 101c-like book for the explanation. -- Maxim Konovalov From maxim at nginx.com Fri Dec 16 08:13:15 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 16 Dec 2016 11:13:15 +0300 Subject: Use nginx ngx_stream_ssl_preread_module to connect to ocserv using proxy protocol v2 In-Reply-To: <20161215213637.GC23728@glanzmann.de> References: <20161215163203.GA23192@glanzmann.de> <20161215213637.GC23728@glanzmann.de> Message-ID: <4fdf57e3-0475-49f4-3b0c-d9cea903efb9@nginx.com> On 12/16/16 12:36 AM, Thomas Glanzmann wrote: > Hello Nikos, > >> Are you sure that the nginx module you are using supports the proxy >> protocol version 2? > > you're probably right. Nginx seems to support only version 1 of the > proxy protocol because I can't see the binary header preamble. Can > someone confirm? > > https://thomas.glanzmann.de/tmp/nginx.pcap > Yes, that's right -- no support for the proxy proto v2. -- Maxim Konovalov From ru at nginx.com Fri Dec 16 08:15:50 2016 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 16 Dec 2016 11:15:50 +0300 Subject: What's the meaning of Nginx variables of "$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space" In-Reply-To: References: Message-ID: <20161216081550.GR63131@lo0.su> On Fri, Dec 16, 2016 at 03:09:14PM +0800, yanqun sun wrote: > Hi, all: > I want to get the network latency between the users and my Nginx servers > on tcp layer. 
I searched about this and found several Nginx variables > bellow: > > $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space > information about the client TCP connection; available on systems that > support the TCP_INFO socket option > > > So does anybody can make an explanation for those variables? > I think *$tcpinfo_rtt* probably means the round trip time from the client > to the server on tcp layer. > and what about the rttvar, snd_cwnd and rcv_space? http://linuxgazette.net/136/pfeiffer.html From nginx-forum at forum.nginx.org Fri Dec 16 08:45:18 2016 From: nginx-forum at forum.nginx.org (xstation) Date: Fri, 16 Dec 2016 03:45:18 -0500 Subject: nginx.conf In-Reply-To: <29a3fcbaf3a1477f4473963432781b40.NginxMailingListEnglish@forum.nginx.org> References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> <3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> <29a3fcbaf3a1477f4473963432781b40.NginxMailingListEnglish@forum.nginx.org> Message-ID: Here is full conf user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; ($http_user_agent ~ "^Baiduspider") { return 403; } # } # # server { # listen localhost:143; # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; } # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271598#msg-271598 From lists at lazygranch.com Fri Dec 16 09:38:32 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 16 Dec 2016 01:38:32 -0800 Subject: nginx.conf In-Reply-To: References: <1d0756abaa2ed9f323e998c88e254b94.NginxMailingListEnglish@forum.nginx.org> <799daa8322d5cec5afe2e71f2e6e7a67.NginxMailingListEnglish@forum.nginx.org> <15271df44d686b35ae2feb33a01dceb8.NginxMailingListEnglish@forum.nginx.org> 
<3b16539d963753fb569685a2c97eaa1d.NginxMailingListEnglish@forum.nginx.org> <29a3fcbaf3a1477f4473963432781b40.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161216093832.5435470.77772.18539@lazygranch.com> Are you trying to block baiduspider from your html email?? I think you should review the commented out lines. Very old school, but you may want to just print your conf file and line up curly braces. Perhaps copy the conf file, delete commented lines, and then see if it makes sense. ?It looks to me like the conf file can't be parsed due to mismatches. ? Original Message ? From: xstation Sent: Friday, December 16, 2016 12:45 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: nginx.conf Here is full conf user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; ($http_user_agent ~ "^Baiduspider") { return 403; } # } # # server { # listen localhost:143; # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; } # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271598#msg-271598 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From forsunyq at gmail.com Fri Dec 16 09:38:54 2016 From: forsunyq at gmail.com (yanqun sun) Date: Fri, 16 Dec 2016 17:38:54 +0800 Subject: What's the meaning of Nginx variables of "$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space" In-Reply-To: <20161216081550.GR63131@lo0.su> References: <20161216081550.GR63131@lo0.su> Message-ID: Hi, I will read this article and get understand what tcpinfo_rtt is. Thank you very much! 2016-12-16 16:15 GMT+08:00 Ruslan Ermilov : > On Fri, Dec 16, 2016 at 03:09:14PM +0800, yanqun sun wrote: > > Hi, all: > > I want to get the network latency between the users and my Nginx > servers > > on tcp layer. 
I searched about this and found several Nginx variables > > bellow: > > > > $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space > > information about the client TCP connection; available on systems that > > support the TCP_INFO socket option > > > > > > So does anybody can make an explanation for those variables? > > I think *$tcpinfo_rtt* probably means the round trip time from the > client > > to the server on tcp layer. > > and what about the rttvar, snd_cwnd and rcv_space? > > http://linuxgazette.net/136/pfeiffer.html > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Dec 17 13:21:18 2016 From: nginx-forum at forum.nginx.org (xstation) Date: Sat, 17 Dec 2016 08:21:18 -0500 Subject: nginx.conf In-Reply-To: <20161216093832.5435470.77772.18539@lazygranch.com> References: <20161216093832.5435470.77772.18539@lazygranch.com> Message-ID: <73a4ac294383a4c5638bdc4f44d2dde1.NginxMailingListEnglish@forum.nginx.org> thanks for your reply seems to have problems but just have to leave it for time being just comment out the lines refering to spider etc and hopes it restarts. many thanks for sugestions try to brush up on my knowledage base Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271581,271628#msg-271628 From arut at nginx.com Mon Dec 19 11:13:07 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 19 Dec 2016 14:13:07 +0300 Subject: ngx_stream_ssl_preread_module does not seem to extract the server_name when connecting with openconnect In-Reply-To: <20161215212629.GB23728@glanzmann.de> References: <20161215160841.GA21203@glanzmann.de> <20161215162019.GP35892@Romans-MacBook-Air.local> <20161215165048.GA23728@glanzmann.de> <20161215172216.GQ35892@Romans-MacBook-Air.local> <20161215212629.GB23728@glanzmann.de> Message-ID: <20161219111307.GA7572@Romans-MacBook-Air.local> Hi Thomas, On Thu, Dec 15, 2016 at 10:26:29PM +0100, Thomas Glanzmann wrote: > Hello Roman, > > > Please try the attached patch. > > thank you for the patch. The patch solves my SNI problem: > > 185.46.137.5 - [15/Dec/2016:22:25:00 +0100] "vpn.gmvl.de" Committed, thanks. http://hg.nginx.org/nginx/rev/01adb18a5d23 -- Roman Arutyunyan From tjlp at sina.com Tue Dec 20 09:21:42 2016 From: tjlp at sina.com (tjlp at sina.com) Date: Tue, 20 Dec 2016 17:21:42 +0800 Subject: How to config nginx to write the log to log file and standard output? Message-ID: <20161220092142.80EC4B000CB@webmail.sinamail.sina.com.cn> Hi, Usually nginx writes log to access.log and error.log. How can I config nginx to write the log to these 2 log files and standard output? Thanks Liu Peng -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.mauri at sap.com Tue Dec 20 18:31:46 2016 From: richard.mauri at sap.com (Mauri, Richard) Date: Tue, 20 Dec 2016 18:31:46 +0000 Subject: nginx timeout aborting subsequent proxying from upstream block In-Reply-To: References: Message-ID: <9876EED6-E487-4D88-B084-07B6FD00BCB3@sap.com> This is question about configuring nginx so that when you have multiple servers in an upstream block and the first one selected to handle a request "times out" (at default 60s) that we can skip the forwarding to subseqnt servers in the upstream block. We see the case where the upstream_response_time in nginx log shows like 60s,60s. 
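Two logging questions from this digest (writing logs to both files and standard output, and surfacing the $tcpinfo_* variables) can be sketched together. The format name and log paths below are hypothetical:

http {
    # custom format recording the kernel's TCP timing estimates for the client connection
    log_format timing '$remote_addr "$request" $status '
                      'rtt=$tcpinfo_rtt rttvar=$tcpinfo_rttvar '
                      'cwnd=$tcpinfo_snd_cwnd rcv_space=$tcpinfo_rcv_space';

    # access_log may be given several times: one copy to a file, one to stdout
    access_log /var/log/nginx/access.log timing;
    access_log /dev/stdout timing;

    # error_log also accepts several destinations, one per directive
    error_log /var/log/nginx/error.log warn;
    error_log /dev/stderr warn;
}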
We have a client that aborts the request/connection at 20sec and other clients that ultimately fail because the server responded with failure with the 60s,60s upstream_response time. We want to institutionalize the aggregate 20s round trip client request time at the server side if possible. In other words we don't want to configure the proxy read and write timeout settings to 20 as this might result in total of 40s as observed by the client. We don't want to set the seetings to like 10s as that may not give the server enough time to complete processing. Rather we want the proxy read write timeouts to be 20s and the entire request to fail immediately without going to the next server in the upstream block. How is nginx configured so if the first upstream server times-out; then subsequent servers are not consulted and the server effectively timeout at 20? I hope this makes sense Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 20 19:50:49 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Dec 2016 22:50:49 +0300 Subject: nginx timeout aborting subsequent proxying from upstream block In-Reply-To: <9876EED6-E487-4D88-B084-07B6FD00BCB3@sap.com> References: <9876EED6-E487-4D88-B084-07B6FD00BCB3@sap.com> Message-ID: <20161220195049.GS18639@mdounin.ru> Hello! On Tue, Dec 20, 2016 at 06:31:46PM +0000, Mauri, Richard wrote: > > This is question about configuring nginx so that when you have > multiple servers in an upstream block and the first one selected > to handle a request "times out" (at default 60s) that we can > skip the forwarding to subseqnt servers in the upstream block. > > We see the case where the upstream_response_time in nginx log > shows like 60s,60s. > > We have a client that aborts the request/connection at 20sec and > other clients that ultimately fail because the server responded > with failure with the 60s,60s upstream_response time. > > We want to institutionalize the aggregate 20s round trip client > request time at the server side if possible. > > In other words we don't want to configure the proxy read and > write timeout settings to 20 as this might result in total of > 40s as observed by the client. > > We don't want to set the seetings to like 10s as that may not > give the server enough time to complete processing. > > Rather we want the proxy read write timeouts to be 20s and the > entire request to fail immediately without going to the next > server in the upstream block. > > How is nginx configured so if the first upstream server > times-out; then subsequent servers are not consulted and the > server effectively timeout at 20? Try this: proxy_next_upstream_timeout 20s; See http://nginx.org/r/proxy_next_upstream_timeout for additional details. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Dec 21 07:11:18 2016 From: nginx-forum at forum.nginx.org (HipHopServers) Date: Wed, 21 Dec 2016 02:11:18 -0500 Subject: Issue playing back multiple RTMP live streams In-Reply-To: <005a01d1de7f$8c254280$a46fc780$@matechco.com> References: <005a01d1de7f$8c254280$a46fc780$@matechco.com> Message-ID: <6d317083072f7f125f0f03f12b0edda5.NginxMailingListEnglish@forum.nginx.org> You seem to have gotten NGiNX module to work with RTSP but I have not been able to replicate your results on a single stream. Can you post a complete copy of your configuration file without any secure information. Did you manage to resolve the issue with the multiple streams issue as you described. 
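Returning to the upstream-timeout question answered above, a sketch of capping the total time spent trying upstream servers at 20 seconds; the addresses and upstream name are hypothetical:

upstream app_backend {
    server 10.0.0.1:8080;   # hypothetical
    server 10.0.0.2:8080;   # hypothetical
}

server {
    listen 80;
    location / {
        proxy_read_timeout 20s;
        proxy_send_timeout 20s;
        proxy_next_upstream_timeout 20s;   # stop trying further servers after 20s in total
        proxy_next_upstream_tries 2;       # optionally also cap the number of servers tried
        proxy_pass http://app_backend;
    }
}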
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268298,271664#msg-271664 From nginx-forum at forum.nginx.org Wed Dec 21 10:19:55 2016 From: nginx-forum at forum.nginx.org (tmuesele) Date: Wed, 21 Dec 2016 05:19:55 -0500 Subject: Nginx authentication based on parameterized url Message-ID: <6bfd15a430160511c3e1831387cd0bb8.NginxMailingListEnglish@forum.nginx.org> Hi there, I need an authentication based on a parameterized class call in a url. For example the url: https://sample.com/index.php?cl=accesstestprivate should be access-able by IP address 192.168.1.1, if the request doesnt come from this IP, a basic auth should be invoked. All other / pages e.g. index.php, index.php?start=1 should be access-able by public. I was trying to use the map function. But in this case, the site is not available from public. map $arg_cl $auth_type { default ?off"; "accesstestprivate? "closed"; } location / { satisfy any; allow 192.168.1.1; auth_basic $auth_type; auth_basic_user_file conf/htpasswd; proxy_pass http://devserver; } Any ideas? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271665,271665#msg-271665 From nginx-forum at forum.nginx.org Wed Dec 21 11:21:58 2016 From: nginx-forum at forum.nginx.org (eessaouira) Date: Wed, 21 Dec 2016 06:21:58 -0500 Subject: Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) In-Reply-To: <8ade3052a301c32ef13284d025a40502.NginxMailingListEnglish@forum.nginx.org> References: <8ade3052a301c32ef13284d025a40502.NginxMailingListEnglish@forum.nginx.org> Message-ID: did you find anhy answer ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,127854,271666#msg-271666 From moskovcak at gmail.com Wed Dec 21 15:23:33 2016 From: moskovcak at gmail.com (=?UTF-8?B?SmnFmcOtIE1vc2tvdsSNw6Fr?=) Date: Wed, 21 Dec 2016 15:23:33 +0000 Subject: What exactly worker_connections are not enough means? Message-ID: Hi, I'm trying to use nginx as a udp loadbalancer. I got to ~40k pkt/s with my experiments, but occasionally seeing alert message in the log saying: XXXX worker_connections are not enough. Atm I'm using 65000 worker_connections and still seeing the message - roughly 1 message per minute. netstat doesn't show any UDP recv or send errors so it seems like there is no packet loss related to that log message, but I'd like someone to help me understand what that means and if I can tune something to get rid of it. Thank you, Jirka -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Dec 22 11:32:56 2016 From: nginx-forum at forum.nginx.org (hemendra26) Date: Thu, 22 Dec 2016 06:32:56 -0500 Subject: nginx x-accel-redirect request method named location In-Reply-To: References: Message-ID: Hi Ruslan (not sure), This works great.. It will do for my use case. Thanks a lot. Regards, Hemendra Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271448,271675#msg-271675 From nginx-forum at forum.nginx.org Thu Dec 22 13:24:13 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Thu, 22 Dec 2016 08:24:13 -0500 Subject: Allow caching of *some* filetypes? Message-ID: <202a2fd311ae969baab3b6dd180e9aa3.NginxMailingListEnglish@forum.nginx.org> For security purposes, we utilize the Cache-Control "no-cache, no-store, must-revalidate" add_header parameter in our root location block. 
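On the worker_connections question above (UDP load balancing): each in-flight UDP session typically holds a connection slot until proxy_timeout expires or the expected responses arrive, which is usually what exhausts the limit. A hedged sketch with hypothetical ports and addresses:

worker_processes auto;
worker_rlimit_nofile 200000;

events {
    worker_connections 100000;   # per worker process
}

stream {
    upstream udp_backend {
        server 192.0.2.10:5000;   # hypothetical
        server 192.0.2.11:5000;   # hypothetical
    }

    server {
        listen 5000 udp;
        proxy_responses 1;   # one reply datagram ends the session
        proxy_timeout 1s;    # release the slot quickly for fire-and-forget traffic
        proxy_pass udp_backend;
    }
}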
However, I'd like to tweak this to allow the following file types to be cached: jpg|jpeg|png|gif|ico|js|css|html I added this above my root location / block, but it breaks all images and our css as well. location ~* \.(jpg|jpeg|png|gif|ico|js|css|html)$ { expires 7d; } Here is our root location block also: location / { add_header X-Frame-Options SAMEORIGIN; add_header Strict-Transport-Security max-age=63072000; add_header Cache-Control "no-cache, no-store, must-revalidate"; add_header Pragma "no-cache"; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_read_timeout 120s; proxy_next_upstream error timeout invalid_header http_404 http_500; proxy_intercept_errors on; proxy_pass http://my_proxy; } Am I doing this correctly? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271676,271676#msg-271676 From nginx-forum at forum.nginx.org Thu Dec 22 13:51:42 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Thu, 22 Dec 2016 08:51:42 -0500 Subject: Allow caching of *some* filetypes? In-Reply-To: <202a2fd311ae969baab3b6dd180e9aa3.NginxMailingListEnglish@forum.nginx.org> References: <202a2fd311ae969baab3b6dd180e9aa3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4106d04009ac16697aec167573b353a6.NginxMailingListEnglish@forum.nginx.org> I figured it out. I just needed to add the proxy+pass in the new location block. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271676,271677#msg-271677 From ru at nginx.com Thu Dec 22 15:06:55 2016 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 22 Dec 2016 18:06:55 +0300 Subject: disable 301 redirect for directory / use relative redirect / change scheme In-Reply-To: References: Message-ID: <20161222150655.GE54250@lo0.su> On Wed, Aug 26, 2015 at 05:30:37PM +0200, Etienne Champetier wrote: > Hi, > > I have this setup > browser -> ssl proxy -> nginx > browser to ssl proxy is https only > ssl proxy to nginx is http only > > now i browse to "https://exemple.com/aaa", where aaa is a directory, > so nginx send back a 301 redirect with "Location: http://exemple.com/aaa/" > > Is it possible to send https instead of http (in Location), > or send a relative header, like "Location: /aaa/", > or just disable this redirection (in my case it's ok) JFYI, we've just implemented relative redirects support: http://hg.nginx.org/nginx/rev/d15172ebb400 It'll be available in the upcoming 1.11.8 release. Its use is controlled by the new directive "absolute_redirect". Stay tuned. From nginx-forum at forum.nginx.org Thu Dec 22 16:55:00 2016 From: nginx-forum at forum.nginx.org (ahamilton9) Date: Thu, 22 Dec 2016 11:55:00 -0500 Subject: Fastcgi_pass, resolver, and validating functionality. In-Reply-To: <20161208135743.GG18639@mdounin.ru> References: <20161208135743.GG18639@mdounin.ru> Message-ID: Almost. It was static group in the respect that it was dynamically grabbed on load, but the resolver just wasn't triggering at all after that. We did not have an explicit group defined, "php" literally referred to the DNS name of the load balancer it was contacting. The rest of the domain was being taken care of by the local resolver (search domain). It looks like the code changed in nginx to completely rely on it's own DNS checking at some point rather than what the local system was reporting, which would explain why it suddenly stopped working after a version change when nothing else had been modified. 
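Returning to the filetype-caching thread above: the fix the poster describes (giving the new regex location its own proxy_pass) might look roughly like this, reusing the my_proxy upstream from the example; the Cache-Control value added here is illustrative:

location ~* \.(jpg|jpeg|png|gif|ico|js|css|html)$ {
    expires 7d;
    add_header Cache-Control "public";
    # a regex location does not fall through to "location /", so it needs
    # its own proxy settings; note that once add_header is used here, the
    # add_header directives from other levels are no longer inherited
    proxy_set_header Host $host;
    proxy_pass http://my_proxy;
}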
We changed the line from "php:9000" to the FQDN like "php.internal.ourdomain.com:9000", and the nginx resolver started triggering. To anyone who isn't using an FQDN and stumbles upon this, try that first. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271373,271681#msg-271681 From mdounin at mdounin.ru Thu Dec 22 17:51:10 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 22 Dec 2016 20:51:10 +0300 Subject: Fastcgi_pass, resolver, and validating functionality. In-Reply-To: References: <20161208135743.GG18639@mdounin.ru> Message-ID: <20161222175109.GH18639@mdounin.ru> Hello! On Thu, Dec 22, 2016 at 11:55:00AM -0500, ahamilton9 wrote: > Almost. It was static group in the respect that it was dynamically grabbed > on load, but the resolver just wasn't triggering at all after that. > > We did not have an explicit group defined, "php" literally referred to the > DNS name of the load balancer it was contacting. The rest of the domain was > being taken care of by the local resolver (search domain).It looks like the > code changed in nginx to completely rely on it's own DNS checking at some > point rather than what the local system was reporting, which would explain > why it suddenly stopped working after a version change when nothing else had > been modified. System name resolution is only used when nginx parses configuration on startup / reconfiguration. All name resolution during runtime uses DNS servers configured with the "resolver" directive. And this is how it worked always, because system name resolution is blocking and can't be used by nginx during runtime. > We changed the line from "php:9000" to the FQDN like > "php.internal.ourdomain.com:9000", and the nginx resolver started > triggering. To anyone who isn't using an FQDN and stumbles upon this, try > that first. Changing a name to a different one just makes sure that no other implicitly configured upstream group will interfere with the name used. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Dec 24 00:26:17 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 23 Dec 2016 19:26:17 -0500 Subject: Naxsi Nginx High performance WAF Message-ID: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> So I recently got hooked on Naxsi and I am loving it to bits <3 thanks to itpp2012 :) https://github.com/nbs-system/naxsi I found the following Rule sets here. http://spike.nginx-goodies.com/rules/ But I am curious does anyone have Naxsi written rules that would be the same as/on Cloudflare's WAF ? These to be exact : Package: OWASP ModSecurity Core Rule Set : Covers OWASP Top 10 vulnerabilities, and more. Package: Cloudflare Rule Set : Contains rules to stop attacks commonly seen on Cloudflare's network and attacks against popular applications. Love to have a Naxsi version of their WAF rules to add in to the naxsi_core.rules file. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271695#msg-271695 From rpaprocki at fearnothingproductions.net Sat Dec 24 05:47:45 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 23 Dec 2016 21:47:45 -0800 Subject: Naxsi Nginx High performance WAF In-Reply-To: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> References: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3DDDB4C3-DB32-481B-B1CE-C4087E241692@fearnothingproductions.net> Naxsi and ModSecurity are... very different. 
They have distinct (and largely incomparable) backgrounds, philosophies, goals, implementation details, and, most importantly for this context, vastly different DSLs that support their operations. A 1-1 translation of the OWASP CRS (particularly v3, just recently released) from ModSecurity's rule language to Naxsi rule syntax just isn't possible. ModSecurity provides a number of features that are either unsupported or impossible in Naxsi, and given that the CRS was written explicitly for ModSec, taking advantage of some implantation-specific features... well, good luck ;) (and at this point you might as well use libmodsecurity or an openresty alternative like lua-resty-waf, as Naxsi is probably never going to support the operators and feature sets needed for the CRS). As for CFs rules, I'm not 100% sure, but that essentially sounds like asking for access to CFs internal data pipeline. I doubt you'll find a published version of this, as it's data that powers their commercial WAF. > On Dec 23, 2016, at 16:26, c0nw0nk wrote: > > So I recently got hooked on Naxsi and I am loving it to bits <3 thanks to > itpp2012 :) > > https://github.com/nbs-system/naxsi > > I found the following Rule sets here. > > http://spike.nginx-goodies.com/rules/ > > But I am curious does anyone have Naxsi written rules that would be the same > as/on Cloudflare's WAF ? > > These to be exact : > Package: > OWASP ModSecurity Core Rule Set : Covers OWASP Top 10 vulnerabilities, and > more. > Package: > Cloudflare Rule Set : Contains rules to stop attacks commonly seen on > Cloudflare's network and attacks against popular applications. > > > Love to have a Naxsi version of their WAF rules to add in to the > naxsi_core.rules file. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271695#msg-271695 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Dec 24 08:09:02 2016 From: nginx-forum at forum.nginx.org (mex) Date: Sat, 24 Dec 2016 03:09:02 -0500 Subject: Naxsi Nginx High performance WAF In-Reply-To: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> References: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7e41bde8363de4e9f8b52e2c7c1916c6.NginxMailingListEnglish@forum.nginx.org> Hi c0nw0nk, mex here, inital creator of http://spike.nginx-goodies.com/rules/ and maintainer of Doxi-Rules https://bitbucket.org/lazy_dogtown/doxi-rules/overview (this us where the rules live we create with spike :) the doxi-rules in its current state are inspired by emerging threats rules, and not by the CRS-System because: - mod_security can hook into any phase of a request, while naxsi only works in access_phase - naxsi has a very slim but yet powerfull core-ruleset - naxsi doesnt hold state of an actor thus, it would not be possible to re-create the CRS onto naxsi, instead, we have a very slim but very fast core-ruleset that does not change very often, and ontop of this, if wanted a wider ruleset that protect against common classes of attacks like XXE or generel Object-Injections http://spike.nginx-goodies.com/rules/view/42000341 http://spike.nginx-goodies.com/rules/view/42000343 i learned from my gurus @emerging threats ti write signatures against vulnerabilities, not exploits before naxsi i used mod_security with CRS as well and it was more tha just PITA becaause of False Positives and performance-issues as well. 
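For readers who want to see what the naxsi side of this looks like in practice, a minimal hedged sketch of a per-location setup; the paths, whitelist and thresholds are illustrative, not taken from the thread:

http {
    include /etc/nginx/naxsi_core.rules;   # the core rule set discussed above

    server {
        listen 80;

        location / {
            SecRulesEnabled;            # switch to LearningMode; while building whitelists
            DeniedUrl "/RequestDenied";
            CheckRule "$SQL >= 8" BLOCK;
            CheckRule "$XSS >= 8" BLOCK;
            CheckRule "$RFI >= 8" BLOCK;
            CheckRule "$TRAVERSAL >= 4" BLOCK;
            CheckRule "$EVADE >= 4" BLOCK;
            BasicRule wl:1000 "mz:$ARGS_VAR:q";   # example whitelist generated from learning-mode logs

            proxy_pass http://backend;   # hypothetical backend
        }

        location /RequestDenied {
            return 403;
        }
    }
}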
with naxsi's learning mode and whitelist creation, using a WAF is fun again. If you have detailed questions about naxsi, there is a naxsi-discuss mailing list as well. Cheers, mex c0nw0nk Wrote: ------------------------------------------------------- > So I recently got hooked on Naxsi and I am loving it to bits <3 thanks > to itpp2012 :) > > https://github.com/nbs-system/naxsi > > I found the following Rule sets here. > > http://spike.nginx-goodies.com/rules/ > > But I am curious does anyone have Naxsi written rules that would be > the same as/on Cloudflare's WAF ? > > These to be exact : > Package: > OWASP ModSecurity Core Rule Set : Covers OWASP Top 10 vulnerabilities, > and more. > Package: > Cloudflare Rule Set : Contains rules to stop attacks commonly seen on > Cloudflare's network and attacks against popular applications. > > > Love to have a Naxsi version of their WAF rules to add in to the > naxsi_core.rules file. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271697#msg-271697 From nginx-forum at forum.nginx.org Mon Dec 26 23:37:52 2016 From: nginx-forum at forum.nginx.org (pdh0710) Date: Mon, 26 Dec 2016 18:37:52 -0500 Subject: Cannot set cookies when using error_page directive, why? Message-ID: error_page 400 401 402 403 404 500 502 503 504 /err.html; location = /err.html { root /var/www; add_header Set-Cookie "error_response=${status}; path=/;"; } ========== (Please excuse my English) The above is part of my 'nginx.conf'. My intent is that if an error occurs, the client browser gets 'err.html' together with an error_response=$status cookie. The 'err.html' page has JavaScript code that reads the error_response cookie and displays the related error message. However, when I tried "http://test.domain.com/not_exist.html" and other URLs that produce errors, the client browser got 'err.html' without the error_response cookie.
> But when I tried "http://test.domain.com/err.html" directly, client > browser > got error_response > cookie successfully. > So, I concluded Nginx does not pass cookies when using 'error_page' > directive. > > Is it a Nginx bug? > Or intentionally blocked? Why? > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271704,271704#msg-271704 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Dec 27 12:11:00 2016 From: nginx-forum at forum.nginx.org (heijiu) Date: Tue, 27 Dec 2016 07:11:00 -0500 Subject: Nginx Core dump(corrupted double-linked list) Message-ID: <151c7341b0254dc747d268bd220a6b09.NginxMailingListEnglish@forum.nginx.org> gdb /opt/nginx/sbin/nginx /tmp/core.29382 GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7 Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /opt/nginx/sbin/nginx...done. [New LWP 29382] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `nginx: worker process'. Program terminated with signal 6, Aborted. #0 0x00007f72fa7f41d7 in __GI_raise (sig=sig at entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 56 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig); Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.13.2-12.el7_2.x86_64 libcom_err-1.42.9-7.el7.x86_64 libgcc-4.8.5-4.el7.x86_64 libselinux-2.5-6.el7.x86_64 nss-softokn-freebl-3.16.2.3-14.2.el7_2.x86_64 openssl-libs-1.0.1e-51.el7_2.7.x86_64 pcre-8.32-15.el7_2.1.x86_64 zlib-1.2.7-15.el7.x86_64 (gdb) backtrace full #0 0x00007f72fa7f41d7 in __GI_raise (sig=sig at entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 resultvar = 0 pid = 29382 selftid = 29382 #1 0x00007f72fa7f58c8 in __GI_abort () at abort.c:90 save_stage = 2 act = {__sigaction_handler = {sa_handler = 0x7fff690e197a, sa_sigaction = 0x7fff690e197a}, sa_mask = {__val = {6, 140131806992327, 2, 140734955919758, 2, 140131806983591, 1, 140131806992323, 3, 140734955919732, 12, 140131806992327, 2, 140734955920544, 140734955920544, 140734955922304}}, sa_flags = 23, sa_restorer = 0x7fff690e1ea0} sigs = {__val = {32, 0 }} #2 0x00007f72fa833f07 in __libc_message (do_abort=do_abort at entry=2, fmt=fmt at entry=0x7f72fa93eb48 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/unix/sysv/linux/libc_fatal.c:196 ap = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7fff690e2390, reg_save_area = 0x7fff690e22a0}} ap_copy = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 0x7fff690e2390, reg_save_area = 0x7fff690e22a0}} fd = 2 on_2 = list = nlist = cp = written = #3 0x00007f72fa839da4 in malloc_printerr (action=, str=0x7f72fa93c1f7 "corrupted double-linked list", ptr=, ar_ptr=) at malloc.c:5013 buf = "0000000001604b80" cp = ar_ptr = ptr = str = 0x7f72fa93c1f7 "corrupted double-linked list" action = #4 0x00007f72fa83b595 in _int_free (av=0x7f72fab79760 , p=0x1602b70, have_lock=0) at malloc.c:3993 size = 8208 fb = nextchunk = 0x1604b80 nextsize = 608 nextinuse = prevsize = 
---Type to continue, or q to quit--- bck = fwd = errstr = 0x0 locked = #5 0x0000000000413fa5 in ngx_destroy_pool (pool=0x169a730) at src/core/ngx_palloc.c:90 p = 0x1602b80 n = 0x0 l = 0x0 c = 0x0 #6 0x0000000000470260 in ngx_http_free_request (r=0x169a780, rc=0) at src/http/ngx_http_request.c:3494 log = 0x1605420 pool = 0x169a730 linger = {l_onoff = 1762534672, l_linger = 32767} cln = 0x0 ctx = 0x16054c8 clcf = 0x16044b0 #7 0x000000000046edfa in ngx_http_set_keepalive (r=0x169a780) at src/http/ngx_http_request.c:2896 tcp_nodelay = 32767 i = 4830762 b = 0x153b120 f = 0x7fff690e2540 rev = 0x15c9d70 wev = 0x169a780 c = 0x1596de0 hc = 0x1605480 cscf = 0x0 clcf = 0x155f2f0 #8 0x000000000046e2a9 in ngx_http_finalize_connection (r=0x169a780) at src/http/ngx_http_request.c:2545 clcf = 0x155f2f0 #9 0x000000000046def5 in ngx_http_finalize_request (r=0x169a780, rc=0) at src/http/ngx_http_request.c:2441 c = 0x1596de0 pr = 0x1 ---Type to continue, or q to quit--- clcf = 0x0 #10 0x000000000048b2d2 in ngx_http_upstream_finalize_request (r=0x169a780, u=0x169b8f0, rc=0) at src/http/ngx_http_upstream.c:4217 flush = 0 #11 0x000000000048a360 in ngx_http_upstream_process_request (r=0x169a780, u=0x169b8f0) at src/http/ngx_http_upstream.c:3806 tf = 0x15cac10 p = 0x169bd60 #12 0x000000000048a09e in ngx_http_upstream_process_upstream (r=0x169a780, u=0x169b8f0) at src/http/ngx_http_upstream.c:3733 rev = 0x15cac10 p = 0x169bd60 c = 0x1599000 #13 0x00000000004887d4 in ngx_http_upstream_send_response (r=0x169a780, u=0x169b8f0) at src/http/ngx_http_upstream.c:3001 tcp_nodelay = 0 n = 7382200 rc = 0 p = 0x169bd60 c = 0x1596de0 clcf = 0x155f2f0 #14 0x00000000004868f1 in ngx_http_upstream_process_header (r=0x169a780, u=0x169b8f0) at src/http/ngx_http_upstream.c:2190 n = 2168 rc = 0 c = 0x1599000 #15 0x0000000000484566 in ngx_http_upstream_handler (ev=0x15cac10) at src/http/ngx_http_upstream.c:1117 c = 0x1596de0 r = 0x169a780 u = 0x169b8f0 #16 0x000000000044d9fe in ngx_epoll_process_events (cycle=0x1536c30, timer=500, flags=1) at src/event/modules/ngx_epoll_module.c:822 events = 1 revents = 8197 instance = 0 i = 0 level = 4462210 err = 0 ---Type to continue, or q to quit--- rev = 0x15cac10 wev = 0x15e1cc0 queue = 0x717f80 c = 0x1599000 #17 0x000000000043db8d in ngx_process_events_and_timers (cycle=0x1536c30) at src/event/ngx_event.c:242 flags = 1 timer = 500 delta = 1482827685782 #18 0x000000000044b547 in ngx_worker_process_cycle (cycle=0x1536c30, data=0x1) at src/os/unix/ngx_process_cycle.c:753 worker = 1 #19 0x0000000000447e43 in ngx_spawn_process (cycle=0x1536c30, proc=0x44b452 , data=0x1, name=0x4e2ef3 "worker process", respawn=1) at src/os/unix/ngx_process.c:198 on = 1 pid = 0 s = 1 #20 0x000000000044aff3 in ngx_reap_children (cycle=0x1536c30) at src/os/unix/ngx_process_cycle.c:621 i = 1 n = 8 live = 1 ch = {command = 2, pid = 22219, slot = 1, fd = -1} ccf = 0x716d24 #21 0x0000000000449c10 in ngx_master_process_cycle (cycle=0x1536c30) at src/os/unix/ngx_process_cycle.c:174 title = 0x158bd1c "master process /opt/nginx/sbin/nginx" p = 0x158bd40 "" size = 37 i = 1 n = 140734955924112 sigio = 0 set = {__val = {0 }} itv = {it_interval = {tv_sec = 140734955924112, tv_usec = 140131829646774}, it_value = {tv_sec = 14, tv_usec = 25}} live = 1 delay = 0 ls = 0x0 ---Type to continue, or q to quit--- ccf = 0x1538738 #22 0x00000000004101d9 in main (argc=1, argv=0x7fff690e2df8) at src/core/nginx.c:367 b = 0x0 log = 0x715880 i = 0 cycle = 0x1536c30 init_cycle = {conf_ctx = 0x0, pool = 0x1535f50, log = 0x715880 , new_log = {log_level 
= 0, file = 0x0, connection = 0, disk_full_time = 0, handler = 0x0, data = 0x0, writer = 0x0, wdata = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0, files = 0x0, free_connections = 0x0, free_connection_n = 0, modules = 0x0, modules_n = 0, modules_used = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, config_dump = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 26, data = 0x1535fa0 "\002"}, conf_param = {len = 0, data = 0x0}, conf_prefix = { len = 16, data = 0x1535fa0 "\002"}, prefix = {len = 11, data = 0x4de686 "/opt/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}} cd = 0x4dda40 <__libc_csu_init> ccf = 0x1538738 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271720,271720#msg-271720 From mdounin at mdounin.ru Tue Dec 27 13:20:43 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Dec 2016 16:20:43 +0300 Subject: Nginx Core dump(corrupted double-linked list) In-Reply-To: <151c7341b0254dc747d268bd220a6b09.NginxMailingListEnglish@forum.nginx.org> References: <151c7341b0254dc747d268bd220a6b09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161227132043.GS18639@mdounin.ru> Hello! On Tue, Dec 27, 2016 at 07:11:00AM -0500, heijiu wrote: > gdb /opt/nginx/sbin/nginx /tmp/core.29382 > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7 > Copyright (C) 2013 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later > > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-redhat-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /opt/nginx/sbin/nginx...done. > [New LWP 29382] > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > Core was generated by `nginx: worker process'. > Program terminated with signal 6, Aborted. [...] What "nginx -V" shows? What's in the configuration? Hint: the very first step is to make sure the problem is not introduced by a 3rd party module. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Dec 27 14:38:14 2016 From: nginx-forum at forum.nginx.org (pdh0710) Date: Tue, 27 Dec 2016 09:38:14 -0500 Subject: Cannot set cookies when using error_page directive, why? In-Reply-To: References: Message-ID: Thank you Richard. I fixed the problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271704,271745#msg-271745 From mdounin at mdounin.ru Tue Dec 27 14:39:58 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Dec 2016 17:39:58 +0300 Subject: nginx-1.11.8 Message-ID: <20161227143957.GW18639@mdounin.ru> Changes with nginx 1.11.8 27 Dec 2016 *) Feature: the "absolute_redirect" directive. *) Feature: the "escape" parameter of the "log_format" directive. *) Feature: client SSL certificates verification in the stream module. 
*) Feature: the "ssl_session_ticket_key" directive supports AES256 encryption of TLS session tickets when used with 80-byte keys. *) Feature: vim-commentary support in vim scripts. Thanks to Armin Grodon. *) Bugfix: recursion when evaluating variables was not limited. *) Bugfix: in the ngx_stream_ssl_preread_module. *) Bugfix: if a server in an upstream in the stream module failed, it was considered alive only when a test connection sent to it after fail_timeout was closed; now a successfully established connection is enough. *) Bugfix: nginx/Windows could not be built with 64-bit Visual Studio. *) Bugfix: nginx/Windows could not be built with OpenSSL 1.1.0. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Dec 28 04:37:14 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 27 Dec 2016 23:37:14 -0500 Subject: nginx-1.11.8 In-Reply-To: <20161227143957.GW18639@mdounin.ru> References: <20161227143957.GW18639@mdounin.ru> Message-ID: thanks Maxim working nicely here ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271747,271754#msg-271754 From nginx-forum at forum.nginx.org Wed Dec 28 10:25:49 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 28 Dec 2016 05:25:49 -0500 Subject: cache file is too small Message-ID: Hi, I am using nginx 1.10.2. We are getting continuous error as 'cache file is too small': 2016/12/28 15:49:38 [crit] 55253#55253: cache file "/cache/12003/2/c3/4ab93b7d3126d7f7f79487c6dc9dbc32.0579642579" is too small Below is the config line set for 12003 : proxy_cache_path /cache/12003 keys_zone=a12003:200m levels=1:2 max_size=700g inactive=10d; Kindly let me know how can we eliminate this error. We are using nginx as web service to handle media traffic. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271756,271756#msg-271756 From mdounin at mdounin.ru Wed Dec 28 13:05:50 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Dec 2016 16:05:50 +0300 Subject: cache file is too small In-Reply-To: References: Message-ID: <20161228130549.GD18639@mdounin.ru> Hello! On Wed, Dec 28, 2016 at 05:25:49AM -0500, omkar_jadhav_20 wrote: > Hi, > > I am using nginx 1.10.2. We are getting continuous error as 'cache file is > too small': > 2016/12/28 15:49:38 [crit] 55253#55253: cache file > "/cache/12003/2/c3/4ab93b7d3126d7f7f79487c6dc9dbc32.0579642579" is too > small > > Below is the config line set for 12003 : > proxy_cache_path /cache/12003 keys_zone=a12003:200m levels=1:2 > max_size=700g inactive=10d; > > Kindly let me know how can we eliminate this error. > We are using nginx as web service to handle media traffic. The message suggests that you are trying to use cache directory from nginx 1.11.6+ with use_temp_path=off. There are temporary files left in the cache directories, and the older version doesn't recognize them and complains. -- Maxim Dounin http://nginx.org/ From emailgrant at gmail.com Thu Dec 29 00:16:06 2016 From: emailgrant at gmail.com (Grant) Date: Wed, 28 Dec 2016 16:16:06 -0800 Subject: limit_req per subnet? In-Reply-To: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: > That is why you cache the request. DoS or in your case DDoS since multiple > are involved Caching backend responses and having Nginx serve a cached > response even for 1 second that cached response can be valid for it will > save your day. 
That would be a big project because it would mean rewriting some of the functionality of my backend. I'm looking for something that can be implemented independently of the backend, but that doesn't seem to exist in nginx. - Grant From nginx-forum at forum.nginx.org Thu Dec 29 05:50:36 2016 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Thu, 29 Dec 2016 00:50:36 -0500 Subject: cache file is too small In-Reply-To: <20161228130549.GD18639@mdounin.ru> References: <20161228130549.GD18639@mdounin.ru> Message-ID: Hi , I am using nginx running with version 1.10.2 Also could you please suggest what is the permanent solution for this also what does exactly use_temp_path does? Should we keep it explicitly on , we have not set this directive in our nginx.conf. Below is sample nginx.conf for your reference , kindly suggest wherever modification is required : worker_processes auto; events { worker_connections 4096; use epoll; multi_accept on; } worker_rlimit_nofile 100001; http { include mime.types; default_type video/mp4; proxy_buffering on; proxy_buffer_size 4096k; proxy_buffers 5 4096k; sendfile on; keepalive_timeout 30; tcp_nodelay on; tcp_nopush on; reset_timedout_connection on; gzip off; server_tokens off; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271756,271762#msg-271762 From francis at daoine.org Thu Dec 29 10:47:01 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Dec 2016 10:47:01 +0000 Subject: How do I rewrite files, but only, if they are in one special folder? In-Reply-To: References: <20161212211801.GI2958@daoine.org> Message-ID: <20161229104701.GL2958@daoine.org> On Tue, Dec 13, 2016 at 01:29:17PM -0500, Joergi wrote: Hi there, > > rewrite ^/([^/]*)\.php5 /$1.php permanent; > > rewrite ^(/wiki/[^/]*\.php)5 $1 permanent; > > You may want to restrict these to the locations that match their > > prefixes, depending on what else is happening. > > What do you mean? The "main folder", which you are influencing with your > first rule, is the web root: /home/$username/www/. The foder wiki is the > subfolder in there. Any recommendation what location I should add around the > two rewrites or if I should add one? Depending on what else is in your config, it may be useful to put the second rewrite within a "location ^~ /wiki/ {}" block. But if you don't measure a difference when putting it there, in a "location ~ \.php5 {}" block, or at server{] level, then it does not matter in your deployment. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Dec 29 10:56:12 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Dec 2016 10:56:12 +0000 Subject: setting up a forward proxy for a few specific website only, and block the rest In-Reply-To: <3f0f51cb7b56872941141b9c7732214a.NginxMailingListEnglish@forum.nginx.org> References: <3f0f51cb7b56872941141b9c7732214a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161229105612.GM2958@daoine.org> On Wed, Dec 14, 2016 at 01:09:32AM -0500, toffs.hl wrote: Hi there, > 1) Setup a nginx forward proxy, nginx is not a (forward) proxy. If you want to make it be one, you will have significant coding to do. The rest of what you want sounds like it should be straightforwardly available in any reasonable web proxy server. So you'll probably be much happier starting with a proxy server, and then configuring it to do what you want. > and this particular proxy server will only > accept the proxy connection based on destination website That should be in the proxy server config. 
> 2) I will setup this proxy server in cloud server provider

That is up to you.

> 3) I will need to create a PAC file, and let my users to use this particular
> proxy PAC file for traffic re-direction, user will have to configure their
> browser to use proxy PAC file.

That is up to the browser configuration.

> 4) Whenever my users (that are using the PAC file) trying to access to the
> above 5 website, regardless of using HTTP or HTTPS, the proxy PAC file will
> get the traffic flow through my PROXY_AAA server

That is up to the browser to handle the PAC file contents correctly.

> 5) I also need to configure the PROXY_AAA to proxy for the above 5 website
> only

That is the same as point 1), and is the proxy server configuration.

> 6) Proxy connection based on source IP address is not possible, as the users
> IP is dynamic

That is also the proxy server configuration; although "do not limit by
source IP" is probably the default configuration.

> So would like to ask anyone has configure such config in nginx before ? How
> do I configure the nginx as forward proxy, to block all proxy request, and
> allow only the few website that I want to proxy ?

Probably not; you don't, because nginx is not a proxy server.

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Thu Dec 29 10:59:20 2016
From: francis at daoine.org (Francis Daly)
Date: Thu, 29 Dec 2016 10:59:20 +0000
Subject: rewrite cycle
In-Reply-To: <20161215131855.GK2958@daoine.org>
References: <20161215131855.GK2958@daoine.org>
Message-ID: <20161229105920.GN2958@daoine.org>

On Thu, Dec 15, 2016 at 01:18:55PM +0000, Francis Daly wrote:
> On Thu, Dec 15, 2016 at 02:30:17PM +0300, ?????? ??????? wrote:

Hi there,

You got the right answer from Maxim. I had missed that

> > location ~ ^/([A-Z_].*)$ {
> > rewrite ^/(.*)$ /bin/view/$1;
> > }

that location was intended to skip "starts with lower case letter".

Sorry about that.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Thu Dec 29 11:18:36 2016
From: francis at daoine.org (Francis Daly)
Date: Thu, 29 Dec 2016 11:18:36 +0000
Subject: limit_req per subnet?
In-Reply-To: 
References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161229111836.GO2958@daoine.org>

On Wed, Dec 28, 2016 at 04:16:06PM -0800, Grant wrote:

Hi there,

> I'm looking for something that can
> be implemented independently of the backend, but that doesn't seem to
> exist in nginx.

http://nginx.org/r/limit_req_zone

You can define the "key" any way that you want.

Perhaps you can create something using "geo". Perhaps you want "the first
three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
address, rounded down to a multiple of 8". Perhaps you want something
else.

The exact thing that you want, probably does not exist.

The tools that are needed to create it, probably do exist.

All that seems to be missing is the incentive for someone to actually
do the work to build a thing that you would like to exist.
f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org  Thu Dec 29 11:44:38 2016
From: francis at daoine.org (Francis Daly)
Date: Thu, 29 Dec 2016 11:44:38 +0000
Subject: Nginx authentication based on parameterized url
In-Reply-To: <6bfd15a430160511c3e1831387cd0bb8.NginxMailingListEnglish@forum.nginx.org>
References: <6bfd15a430160511c3e1831387cd0bb8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20161229114438.GP2958@daoine.org>

On Wed, Dec 21, 2016 at 05:19:55AM -0500, tmuesele wrote:

Hi there,

> I need an authentication based on a parameterized class call in a url. For
> example the url:
>
> https://sample.com/index.php?cl=accesstestprivate
>
> should be accessible by IP address 192.168.1.1; if the request doesn't come
> from this IP, a basic auth should be invoked.
>
> All other / pages, e.g. index.php, index.php?start=1, should be accessible
> by the public.

Your sample config makes it look like:

* 192.168.1.1 can access anything

* any other address can access anything unless it has cl=accesstestprivate
in the query string

* if the request has cl=accesstestprivate in the query string, then most
clients are challenged for basic authentication

> I was trying to use the map function. But in this case, the site is not
> available from public.

It seems to work for me, when I make sure to only use the " double-quote
character in nginx.conf.

> map $arg_cl $auth_type {
>     default “off";
>     "accesstestprivate” "closed";
> }
>
> location / {
>     satisfy any;
>     allow 192.168.1.1;
>     auth_basic $auth_type;
>     auth_basic_user_file conf/htpasswd;
>     proxy_pass http://devserver;
> }
>
> Any ideas?

What failure do you see? As in: what request do you make, what response
do you get, what response do you want instead?

Is there anything in the error log?

(I did see "open() "/usr/local/nginx/conf/conf/htpasswd" failed (2: No
such file or directory)" in my error log, until I changed the
auth_basic_user_file directive. But perhaps you have the matching
directory structure already.)

f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru  Thu Dec 29 14:05:16 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 29 Dec 2016 17:05:16 +0300
Subject: cache file is too small
In-Reply-To: 
References: <20161228130549.GD18639@mdounin.ru>
Message-ID: <20161229140516.GG18639@mdounin.ru>

Hello!

On Thu, Dec 29, 2016 at 12:50:36AM -0500, omkar_jadhav_20 wrote:

> I am using nginx version 1.10.2. Also, could you please suggest a
> permanent solution for this?

The message suggests the cache directory was used with nginx 1.11.6+.
If this is not something you did intentionally in the past, you may want
to investigate how this happened.

The easiest solution is to remove all the cache contents and let nginx
re-populate the cache correctly. Also make sure no other nginx instances
are using the same cache directory.

> And what exactly does use_temp_path do? Should we set it explicitly to
> on? We have not set this directive in our nginx.conf.

The "use_temp_path" parameter controls how temporary files are stored.
You don't need to touch it unless you understand it is beneficial in
your particular setup.

-- 
Maxim Dounin
http://nginx.org/

From emailgrant at gmail.com  Thu Dec 29 16:09:33 2016
From: emailgrant at gmail.com (Grant)
Date: Thu, 29 Dec 2016 08:09:33 -0800
Subject: limit_req per subnet?
In-Reply-To: <20161229111836.GO2958@daoine.org>
References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> <20161229111836.GO2958@daoine.org>
Message-ID: 

>> I'm looking for something that can
>> be implemented independently of the backend, but that doesn't seem to
>> exist in nginx.
>
> http://nginx.org/r/limit_req_zone
>
> You can define the "key" any way that you want.
>
> Perhaps you can create something using "geo". Perhaps you want "the first
> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
> address, rounded down to a multiple of 8". Perhaps you want something
> else.

So I'm sure I understand, none of the functionality described above
exists currently?

- Grant

> The exact thing that you want, probably does not exist.
>
> The tools that are needed to create it, probably do exist.
>
> All that seems to be missing is the incentive for someone to actually
> do the work to build a thing that you would like to exist.

From emailgrant at gmail.com  Fri Dec 30 12:30:22 2016
From: emailgrant at gmail.com (Grant)
Date: Fri, 30 Dec 2016 04:30:22 -0800
Subject: limit_req per subnet?
In-Reply-To: 
References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> <20161229111836.GO2958@daoine.org>
Message-ID: 

>>> I'm looking for something that can
>>> be implemented independently of the backend, but that doesn't seem to
>>> exist in nginx.
>>
>> http://nginx.org/r/limit_req_zone
>>
>> You can define the "key" any way that you want.
>>
>> Perhaps you can create something using "geo". Perhaps you want "the first
>> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>> address, rounded down to a multiple of 8". Perhaps you want something
>> else.
>
> So I'm sure I understand, none of the functionality described above
> exists currently?

Or can it be configured without hacking the nginx core?

- Grant

>> The exact thing that you want, probably does not exist.
>>
>> The tools that are needed to create it, probably do exist.
>>
>> All that seems to be missing is the incentive for someone to actually
>> do the work to build a thing that you would like to exist.

From kworthington at gmail.com  Fri Dec 30 13:29:17 2016
From: kworthington at gmail.com (Kevin Worthington)
Date: Fri, 30 Dec 2016 08:29:17 -0500
Subject: [nginx-announce] nginx-1.11.8
In-Reply-To: <20161227144004.GX18639@mdounin.ru>
References: <20161227144004.GX18639@mdounin.ru>
Message-ID: 

Hello Nginx users,

Now available: Nginx 1.11.8 for Windows
https://kevinworthington.com/nginxwin1118 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are
at nginx.org.

Announcements are also available here:

Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Dec 27, 2016 at 9:40 AM, Maxim Dounin wrote:

> Changes with nginx 1.11.8                                     27 Dec 2016
>
> *) Feature: the "absolute_redirect" directive.
>
> *) Feature: the "escape" parameter of the "log_format" directive.
>
> *) Feature: client SSL certificates verification in the stream module.
>
> *) Feature: the "ssl_session_ticket_key" directive supports AES256
>    encryption of TLS session tickets when used with 80-byte keys.
>
> *) Feature: vim-commentary support in vim scripts.
>    Thanks to Armin Grodon.
>
> *) Bugfix: recursion when evaluating variables was not limited.
>
> *) Bugfix: in the ngx_stream_ssl_preread_module.
>
> *) Bugfix: if a server in an upstream in the stream module failed, it
>    was considered alive only when a test connection sent to it after
>    fail_timeout was closed; now a successfully established connection is
>    enough.
>
> *) Bugfix: nginx/Windows could not be built with 64-bit Visual Studio.
>
> *) Bugfix: nginx/Windows could not be built with OpenSSL 1.1.0.
>
> -- 
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Sat Dec 31 10:36:51 2016
From: francis at daoine.org (Francis Daly)
Date: Sat, 31 Dec 2016 10:36:51 +0000
Subject: limit_req per subnet?
In-Reply-To: 
References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> <20161229111836.GO2958@daoine.org>
Message-ID: <20161231103651.GQ2958@daoine.org>

On Thu, Dec 29, 2016 at 08:09:33AM -0800, Grant wrote:

Hi there,

> >> I'm looking for something that can
> >> be implemented independently of the backend, but that doesn't seem to
> >> exist in nginx.
> >
> > http://nginx.org/r/limit_req_zone
> >
> > You can define the "key" any way that you want.
> >
> > Perhaps you can create something using "geo". Perhaps you want "the first
> > three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
> > address, rounded down to a multiple of 8". Perhaps you want something
> > else.
>
> So I'm sure I understand, none of the functionality described above
> exists currently?

A variable with exactly the value that you want it to have, probably
does not exist currently in the stock nginx code.

The code that allows you to create a variable with exactly the value
that you want it to have, probably does exist in the stock nginx code.

You can use "geo", "map", "set", or (probably) any of the extension
languages to give the variable the value that you want it to have.

For example:

  map $binary_remote_addr $bin_slash16 {
    "~^(?P<a>..)..$" "$a";
  }

will probably come close to making $bin_slash16 hold a binary
representation of the first two octets of the connecting ip address.

(You'll want to confirm whether "dot" matches "any byte" in your regex
engine; or whether you can make it match "any byte" (specifically
including the byte that normally represents newline); before you trust
that fully, of course.)

If you don't like map with regex, you can use "geo" with a (long) list
of networks, to set your new variable to whatever value you want.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org
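(To make the above concrete: one way such a variable could be wired into
the request-limiting machinery is sketched below. The zone name, zone
size, rate and burst values are only illustrative assumptions; the map is
the one suggested in the message above:

    map $binary_remote_addr $bin_slash16 {
        "~^(?P<a>..)..$" "$a";
    }

    # one shared counter per value of $bin_slash16, i.e. per IPv4 /16
    limit_req_zone $bin_slash16 zone=per_subnet:10m rate=10r/s;

    server {
        location / {
            limit_req zone=per_subnet burst=20;
        }
    }

Note that requests with an empty key are not accounted, so any address
whose $binary_remote_addr does not match the regex, for example an IPv6
client, would be left unthrottled by this sketch.)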