From dewanggaba at xtremenitro.org Mon Aug 1 08:20:54 2016
From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam)
Date: Mon, 1 Aug 2016 15:20:54 +0700
Subject: Load balancing algorithm
Message-ID: <753a2a7b-fb96-250f-8c83-169d3d9dade2@xtremenitro.org>

Hello!

I got curious about load balancing algorithms; I have a scenario like this.

I have 3 Galera clusters; each cluster has 3 nodes, and load balancing for them is solved with the stream module.

Cluster1:
Node1-Cluster1: 192.168.11.1
Node2-Cluster1: 192.168.11.2
Node3-Cluster1: 192.168.11.3

Cluster2:
Node1-Cluster2: 192.168.12.1
Node2-Cluster2: 192.168.12.2
Node3-Cluster2: 192.168.12.3

Cluster3:
Node1-Cluster3: 192.168.13.1
Node2-Cluster3: 192.168.13.2
Node3-Cluster3: 192.168.13.3

Cluster4:
Node1-Cluster1: 192.168.11.1
Node1-Cluster4: 192.168.14.1

But I want to build another cluster that won't sync with the Galera cluster. Is there any alternative to round-robin, least_conn and ip_hash?

I want an algorithm that connects to the backends at the same time, looking like:

upstream Cluster4 {
    server 192.168.11.1:3306 max_fails=0;
    server 192.168.14.1:3306 max_fails=0;
}

Since it's not synced, the query to upstream "Cluster4" should not go round-robin or asynchronously to one backend; it should go to both simultaneously.

Is it possible? If yes, which method should be picked?

From ahutchings at nginx.com Mon Aug 1 09:19:59 2016
From: ahutchings at nginx.com (Andrew Hutchings)
Date: Mon, 1 Aug 2016 10:19:59 +0100
Subject: Load balancing algorithm
In-Reply-To: <753a2a7b-fb96-250f-8c83-169d3d9dade2@xtremenitro.org>
References: <753a2a7b-fb96-250f-8c83-169d3d9dade2@xtremenitro.org>
Message-ID: <466ead26-3765-789a-4208-df02be48b110@nginx.com>

Hi Dewangga,

I'm not quite sure what your desired outcome would be by connecting to two servers at the same time for a single client, but it won't work the way you might think it will.

I think what you might be looking for is the 'backup' keyword.
With this NGINX will only fall back to that server if the primary servers are unavailable. Using 'backup' is typically recommended for a Galera cluster anyway to avoid deadlocks at the commit sync due to the same table in multiple servers being written to at the same time. See this blog post for more information: https://www.nginx.com/blog/advanced-mysql-load-balancing-with-nginx-plus/ Kind Regards Andrew On 01/08/16 09:20, Dewangga Bachrul Alam wrote: > Hello! > > I got curios with load balancing algorithm, I got scenarios like this. > > I have 3 galera cluster, each cluster have 3 node and it was solved with > stream module. > > Cluster1: > Node1-Cluster1: 192.168.11.1 > Node2-Cluster1: 192.168.11.2 > Node3-Cluster1: 192.168.11.3 > > Cluster2: > Node1-Cluster2: 192.168.12.1 > Node2-Cluster2: 192.168.12.2 > Node3-Cluster2: 192.168.12.3 > > Cluster3: > Node1-Cluster3: 192.168.13.1 > Node2-Cluster3: 192.168.13.2 > Node3-Cluster3: 192.168.13.3 > > Cluster4: > Node1-Cluster1: 192.168.11.1 > Node1-Cluster4: 192.168.14.1 > > But, I want to build another cluster and wont sync with galera cluster. > Is there any alternative instead of roundrobind, least_connected and > ip_hash? > > I want the algorithm is connect to backend at the sametime, looks like: > > upstream Cluster4{ > server 192.168.11.1:3306 max_fails=0; > server 192.168.14.1:3306 max_fails=0; > } > > Since it's not synced, the query to upstream "Cluster4" should be not > round robin or asynchronous to backend, it should be simultaneously. > > Is it possible? If yes, which method should be picked up? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. 
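[Editor's sketch: Andrew's 'backup' suggestion, applied to one of the clusters from the thread, might look roughly like the configuration below. The upstream name and addresses are taken from the original message, but this exact layout is illustrative only — check that your nginx build's stream module accepts the `backup` parameter before relying on it.]

```nginx
# Sketch only: one primary node per Galera cluster, the rest as 'backup'.
# nginx sends all traffic to the primary and fails over only when it is
# unavailable, avoiding the multi-writer commit deadlocks Andrew mentions.
stream {
    upstream cluster1 {
        server 192.168.11.1:3306;
        server 192.168.11.2:3306 backup;
        server 192.168.11.3:3306 backup;
    }

    server {
        listen 3306;
        proxy_pass cluster1;
    }
}
```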
From dewanggaba at xtremenitro.org Mon Aug 1 09:36:54 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 1 Aug 2016 16:36:54 +0700 Subject: Load balancing algorithm In-Reply-To: <466ead26-3765-789a-4208-df02be48b110@nginx.com> References: <753a2a7b-fb96-250f-8c83-169d3d9dade2@xtremenitro.org> <466ead26-3765-789a-4208-df02be48b110@nginx.com> Message-ID: <2aa1d236-6799-0127-24ac-731aea11e6cc@xtremenitro.org> Hello! On 08/01/2016 04:19 PM, Andrew Hutchings wrote: > Hi Dewangga, > > I'm not quite sure what your desired outcome would be by connecting to > two servers at the same time for a single client but it won't work the > way you might think it will. My main goal is, want to build real-time backup mysql server, so the client will be sql insert to different machine at the same time. Got it? So the scenario will be like this, if anything goes wrong at the cluster, I can check it in cluster4-node1 and pull the data then restore to the cluster. > > I think what you might be looking for is the 'backup' keyword. With this > NGINX will only fall back to that server if the primary servers are > unavailable. > > Using 'backup' is typically recommended for a Galera cluster anyway to > avoid deadlocks at the commit sync due to the same table in multiple > servers being written to at the same time. See this blog post for more > information: > > https://www.nginx.com/blog/advanced-mysql-load-balancing-with-nginx-plus/ > > Kind Regards > Andrew > > On 01/08/16 09:20, Dewangga Bachrul Alam wrote: >> Hello! >> >> I got curios with load balancing algorithm, I got scenarios like this. >> >> I have 3 galera cluster, each cluster have 3 node and it was solved with >> stream module. 
>> >> Cluster1: >> Node1-Cluster1: 192.168.11.1 >> Node2-Cluster1: 192.168.11.2 >> Node3-Cluster1: 192.168.11.3 >> >> Cluster2: >> Node1-Cluster2: 192.168.12.1 >> Node2-Cluster2: 192.168.12.2 >> Node3-Cluster2: 192.168.12.3 >> >> Cluster3: >> Node1-Cluster3: 192.168.13.1 >> Node2-Cluster3: 192.168.13.2 >> Node3-Cluster3: 192.168.13.3 >> >> Cluster4: >> Node1-Cluster1: 192.168.11.1 >> Node1-Cluster4: 192.168.14.1 >> >> But, I want to build another cluster and wont sync with galera cluster. >> Is there any alternative instead of roundrobind, least_connected and >> ip_hash? >> >> I want the algorithm is connect to backend at the sametime, looks like: >> >> upstream Cluster4{ >> server 192.168.11.1:3306 max_fails=0; >> server 192.168.14.1:3306 max_fails=0; >> } >> >> Since it's not synced, the query to upstream "Cluster4" should be not >> round robin or asynchronous to backend, it should be simultaneously. >> >> Is it possible? If yes, which method should be picked up? >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > From ahutchings at nginx.com Mon Aug 1 11:41:40 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Mon, 1 Aug 2016 12:41:40 +0100 Subject: Load balancing algorithm In-Reply-To: <2aa1d236-6799-0127-24ac-731aea11e6cc@xtremenitro.org> References: <753a2a7b-fb96-250f-8c83-169d3d9dade2@xtremenitro.org> <466ead26-3765-789a-4208-df02be48b110@nginx.com> <2aa1d236-6799-0127-24ac-731aea11e6cc@xtremenitro.org> Message-ID: Hi Dewangga, On 01/08/16 10:36, Dewangga Bachrul Alam wrote: > Hello! > > On 08/01/2016 04:19 PM, Andrew Hutchings wrote: >> Hi Dewangga, >> >> I'm not quite sure what your desired outcome would be by connecting to >> two servers at the same time for a single client but it won't work the >> way you might think it will. 
> > My main goal is, want to build real-time backup mysql server, so the > client will be sql insert to different machine at the same time. Got it? > So the scenario will be like this, if anything goes wrong at the > cluster, I can check it in cluster4-node1 and pull the data then restore > to the cluster. For this to happen NGINX would need to be able to understand MySQL's binary protocol rather than relaying packets and there are several conditions that would make this very complex (such as an insert succeeding on one server and failing on another). It would be much better to have this logic in your application layer or in the database class your application layer uses. Alternatively active-passive clustering with async or semi-sync replication and keepalived may fit your use case. Or you could use Galera for this cluster too. Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Mon Aug 1 13:42:21 2016 From: nginx-forum at forum.nginx.org (piotrek84) Date: Mon, 01 Aug 2016 09:42:21 -0400 Subject: reponse 500 content Message-ID: Hello, I have a reverse proxy setup in which nginx serves as load balancer for 2 simmiliar java backends. The java applications are supposed to return a XML based content. The problem is, that when an error occurs in the java application, it returns a response with with error code 500 and some XML payload with stack trace, but nginx doesn't forward the stack trace. It just returns to the client: 500 Internal Server Error

<html>
<head><title>500 Internal Server Error</title></head>
<body bgcolor="white">
<center><h1>500 Internal Server Error</h1></center>
<hr><center>nginx/1.10.0</center>
</body>
</html>
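[A hedged note on this symptom: when `http_500` is listed in `proxy_next_upstream`, nginx treats an upstream 500 response as a failed attempt and retries the next server; once every server has been tried, nginx finalizes the request with its own error page instead of relaying the upstream's body. If that is what is happening here — this is a guess at the cause, not a confirmed diagnosis — dropping `http_500` from the retry conditions should let the application's XML error payload pass through:]

```nginx
location / {
    proxy_pass http://upstream-dc;

    # Retry only on connection-level problems; a 500 with a body is a
    # real response from the app, so let it pass through to the client.
    proxy_next_upstream error timeout invalid_header;
}
```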
Is there something I can do to forward the rest of the app response to the client? There is a requirement to do so. nginx configuration below:

user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;
    proxy_intercept_errors off;

    include /etc/nginx/sites-enabled/*;
}

and included file:

upstream upstream-dc {
    server xxx:9001;
    server xxx:9001;
}

server {
    listen 9001;
    server_name domain;

    access_log /var/log/nginx/access-drools.log;
    error_log /var/log/nginx/error-drools.log;

    location / {
        proxy_pass http://upstream-dc;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        proxy_send_timeout 120;
        client_max_body_size 25m;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268678,268678#msg-268678

From mikydevel at yahoo.fr Mon Aug 1 23:25:46 2016
From: mikydevel at yahoo.fr (Mik J)
Date: Mon, 1 Aug 2016 23:25:46 +0000 (UTC)
Subject: Can't log x-forwarded-for
References: <699552618.13896500.1470093946293.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <699552618.13896500.1470093946293.JavaMail.yahoo@mail.yahoo.com>

nginx version: nginx/1.9.10

Hello, I'm trying to log the client IP within the x-forwarded-for field.
http://nginx.org/en/docs/http/ngx_http_log_module.html

The only problem is that I think I followed the instructions correctly, but nginx won't start:

# nginx
nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/default:8

in nginx.conf I have:

http {
    include       mime.types;
    include       /etc/nginx/sites-enabled/*;
    include       /etc/nginx/conf.d/*;
    default_type  application/octet-stream;
    index         index.html index.htm;

    log_format   main    '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent"';
...

In sites-available/default I have:

access_log /var/log/nginx/default.access.log main; <= This is line 8
error_log /var/log/nginx/default.error.log main;

I think I did things right, but it's like the line in nginx.conf is not taken into account.

Do you know why ?

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steve at greengecko.co.nz Tue Aug 2 06:10:45 2016
From: steve at greengecko.co.nz (steve)
Date: Tue, 2 Aug 2016 18:10:45 +1200
Subject: transferring site from apache....
Message-ID: <57A03965.9090906@greengecko.co.nz>

Hi folks.

This is a new one on me. I'm migrating a site from apache, and it's got some minification so that js and css files are presented as:  as an example...

Anyone seen this before, and have an idea how to address it to get it working? Unsurprisingly I'm getting a 404 at the moment.

Cheers,

Steve
-- 
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

From francis at daoine.org Tue Aug 2 08:00:34 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 2 Aug 2016 09:00:34 +0100
Subject: Can't log x-forwarded-for
In-Reply-To: <699552618.13896500.1470093946293.JavaMail.yahoo@mail.yahoo.com>
References: <699552618.13896500.1470093946293.JavaMail.yahoo.ref@mail.yahoo.com> <699552618.13896500.1470093946293.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <20160802080034.GE12280@daoine.org>

On Mon, Aug 01, 2016 at 11:25:46PM +0000, Mik J wrote:

Hi there,

> nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/default:8
>
> in nginx.conf I have:
>
> http {
>     include       mime.types;
>     include       /etc/nginx/sites-enabled/*;
>     include       /etc/nginx/conf.d/*;
>     default_type  application/octet-stream;
>     index         index.html index.htm;
>
>     log_format   main    '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] '
>                          '"$request" $status $body_bytes_sent '
>                          '"$http_referer" "$http_user_agent"';
> ...

Look at the order of config lines there. You have, effectively,

  http {
    server {
      access_log /tmp/out.log main;
    }
    log_format main '$remote_addr $request';
  }

and nginx complains that "main" is not defined at the "access_log" line.

> I think I did things right, but it's like the line in nginx.conf is not taken into account.
> Do you know why ?

If you put the "log_format" line before the "server" block (i.e., before the appropriate "include" line), nginx will probably be happier.

Arguably, since "log_format" is at http-level only, and you can't repeat "name", nginx *could* be able to handle things out of order. But right now it doesn't, so the quick fix is to re-order things yourself.

Cheers,

	f
-- 
Francis Daly        francis at daoine.org

From chino.aureus at gmail.com Tue Aug 2 09:36:26 2016
From: chino.aureus at gmail.com (Chino Aureus)
Date: Tue, 2 Aug 2016 17:36:26 +0800
Subject: Proxy Buffer Sizing
Message-ID: 

Hi NGINX Community,

Seeking your help on two items related to proxy buffers:

1) I would like to ask for guidance on how to properly size the following:

proxy_buffers
proxy_buffer_size
proxy_busy_buffers_size

The reason I'm setting this up is an error wherein NGINX complains about large headers. I searched for guidelines / recommended practice to compute optimal values for the above directives, but so far I haven't found any.

2) How soon are the responses in the proxy buffer sent to the client? Will it cause delay, for example if it's waiting for the buffer to get full?

Thanks,
Chino
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mikydevel at yahoo.fr Tue Aug 2 11:23:46 2016
From: mikydevel at yahoo.fr (Mik J)
Date: Tue, 2 Aug 2016 11:23:46 +0000 (UTC)
Subject: Can't log x-forwarded-for
In-Reply-To: <20160802080034.GE12280@daoine.org>
References: <699552618.13896500.1470093946293.JavaMail.yahoo.ref@mail.yahoo.com> <699552618.13896500.1470093946293.JavaMail.yahoo@mail.yahoo.com> <20160802080034.GE12280@daoine.org>
Message-ID: <761049654.439119.1470137026155.JavaMail.yahoo@mail.yahoo.com>

Hello Francis,

Thank you very much, it works better. However, it works only for access_log for some reason.

nginx.conf:

http {
    log_format   main    '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent"';

sites-available/default:

server {
...
access_log /var/log/nginx/default.access.log main;
error_log /var/log/nginx/default.error.log;

With

error_log /var/log/nginx/default.error.log main;

nginx fails to start:

# nginx
nginx: [emerg] invalid log level "main" in /etc/nginx/sites-enabled/default:13

The documentation only talks about access_log. Do you think it's normal ?

On Tuesday 2 August 2016 at 10:00, Francis Daly wrote:

On Mon, Aug 01, 2016 at 11:25:46PM +0000, Mik J wrote:

Hi there,

> nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/default:8
>
> in nginx.conf I have:
>
> http {
>     include       mime.types;
>     include       /etc/nginx/sites-enabled/*;
>     include       /etc/nginx/conf.d/*;
>     default_type  application/octet-stream;
>     index         index.html index.htm;
>
>     log_format   main    '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] '
>                          '"$request" $status $body_bytes_sent '
>                          '"$http_referer" "$http_user_agent"';
> ...

Look at the order of config lines there. You have, effectively,

  http {
    server {
      access_log /tmp/out.log main;
    }
    log_format main '$remote_addr $request';
  }

and nginx complains that "main" is not defined at the "access_log" line.

> I think I did things right, but it's like the line in nginx.conf is not taken into account.
> Do you know why ?

If you put the "log_format" line before the "server" block (i.e., before the appropriate "include" line), nginx will probably be happier.

Arguably, since "log_format" is at http-level only, and you can't repeat "name", nginx *could* be able to handle things out of order. But right now it doesn't, so the quick fix is to re-order things yourself.

Cheers,

	f
-- 
Francis Daly        francis at daoine.org

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Tue Aug 2 12:27:52 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 2 Aug 2016 13:27:52 +0100
Subject: Can't log x-forwarded-for
In-Reply-To: <761049654.439119.1470137026155.JavaMail.yahoo@mail.yahoo.com>
References: <699552618.13896500.1470093946293.JavaMail.yahoo.ref@mail.yahoo.com> <699552618.13896500.1470093946293.JavaMail.yahoo@mail.yahoo.com> <20160802080034.GE12280@daoine.org> <761049654.439119.1470137026155.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <20160802122752.GF12280@daoine.org>

On Tue, Aug 02, 2016 at 11:23:46AM +0000, Mik J wrote:

Hi there,

> Thank you very much, it works better. However, it works only for access_log for some reason.

log_format defines a format that access_log uses.

http://nginx.org/r/access_log

> nginx.conf:
>
> http {
>     log_format   main
>         '$remote_addr forwarded for $http_x_real_ip - $remote_user [$time_local] '
>         '"$request" $status $body_bytes_sent '
>         '"$http_referer" "$http_user_agent"';
>
> sites-available/default:
>
> server {
> ...
> access_log /var/log/nginx/default.access.log main;
> error_log /var/log/nginx/default.error.log;
>
> With
>
> error_log /var/log/nginx/default.error.log main;

http://nginx.org/r/error_log

error_log takes an optional "level", but does not take a "format".

> nginx fails to start:
>
> # nginx
> nginx: [emerg] invalid log level "main" in /etc/nginx/sites-enabled/default:13
>
> The documentation only talks about access_log. Do you think it's normal ?

Yes, that piece is acting as it should.

Cheers,

	f
-- 
Francis Daly        francis at daoine.org

From idefix at fechner.net Tue Aug 2 15:55:13 2016
From: idefix at fechner.net (Matthias Fechner)
Date: Tue, 2 Aug 2016 17:55:13 +0200
Subject: Auth_digest not working
In-Reply-To: <20160731225343.GC57459@mdounin.ru>
References: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net> <20160731225343.GC57459@mdounin.ru>
Message-ID: <3e466571-f8a4-343d-da8b-ffdd5c15bcb6@fechner.net>

On 01.08.2016 at 00:53, Maxim Dounin wrote:
> The auth_digest module is a 3rd party one. And the message
> suggests there is a bug in it, or it's not compatible with the
> current version of nginx.
>
> You may consider using an official module instead, auth_basic.
> See here for details:

Thanks, with auth_basic it is working.

But as auth_basic transfers the password, I would really prefer to have digest running, which is much more secure. Are there plans to get auth-digest into the nginx core?

Thanks,
Matthias

-- 

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning."
-- Rich Cook From nginx-forum at forum.nginx.org Tue Aug 2 18:49:48 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 02 Aug 2016 14:49:48 -0400 Subject: transferring site from apache.... In-Reply-To: <57A03965.9090906@greengecko.co.nz> References: <57A03965.9090906@greengecko.co.nz> Message-ID: Sounds like this add-on: https://github.com/alibaba/nginx-http-concat Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268683,268699#msg-268699 From aamte at petabi.com Tue Aug 2 18:52:49 2016 From: aamte at petabi.com (Amita Shirish Amte) Date: Tue, 2 Aug 2016 11:52:49 -0700 Subject: Fwd: Adding dynamic library to nginx module References: <3A5268F4-7B7E-428C-9504-A6079515AE2E@petabi.com> Message-ID: > Begin forwarded message: > > From: Amita Shirish Amte > Subject: Adding dynamic library to nginx module > Date: August 2, 2016 at 11:19:24 AM PDT > To: nginx at nginx.org > > Hi, > > My name is Amita and I am newbie in using nginx. I am writing a dynamic nginx http module which needs to link to a dynamic library. Currently, I have the following config file : > > ngx_addon_name=ngx_http_remake_module > CORE_LIBS="$CORE_LIBS -L /usr/local/lib/libtest_web.dylib" > if test -n "$ngx_module_link"; then > ngx_module_type=HTTP > ngx_module_name=$ngx_addon_name > ngx_module_incs= > ngx_module_deps= > ngx_module_srcs="$ngx_addon_dir/ngx_http_remake_module.c" > ngx_module_libs= > . auto/module > else > HTTP_MODULES="$HTTP_MODULES ngx_http_remake_module" > NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_remake_module.c" > fi > > When I run sudo make, I get the following clang error: clang: error: no such file or directory: ?libtest_web.dylib?, kindly let me know how exactly should I load the dynamic library or where should it be located so that nginx will automatically link it to the module. > > Thanks for the help and time. > > Regards, > Amita -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From al-nginx at none.at Tue Aug 2 22:20:22 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 03 Aug 2016 00:20:22 +0200 Subject: transferring site from apache.... In-Reply-To: References: <57A03965.9090906@greengecko.co.nz> Message-ID: <409619991b61bf0e23e9aab034b3fbf7@none.at> Hi. Am 02-08-2016 20:49, schrieb itpp2012: > Sounds like this add-on: https://github.com/alibaba/nginx-http-concat Really! @OT: Why not just use one line per css file?! > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,268683,268699#msg-268699 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Aug 3 12:31:05 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Aug 2016 15:31:05 +0300 Subject: Auth_digest not working In-Reply-To: <3e466571-f8a4-343d-da8b-ffdd5c15bcb6@fechner.net> References: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net> <20160731225343.GC57459@mdounin.ru> <3e466571-f8a4-343d-da8b-ffdd5c15bcb6@fechner.net> Message-ID: <20160803123105.GL57459@mdounin.ru> Hello! On Tue, Aug 02, 2016 at 05:55:13PM +0200, Matthias Fechner wrote: > Am 01.08.2016 um 00:53 schrieb Maxim Dounin: > > The auth_digest module is a 3rd party one. And the message > > suggests there is a bug in it, or it's not compatible with the > > current version of nginx. > > > > You may consider using an official module instead, auth_basic. > > See here for details: > > thanks, with auth_basic it is working. > > But as auth_basic transfers the password I would really prefer to have > digest running which is much more secure. To protect passwords in transit consider using SSL/TLS instead. > Are there plans the get the auth-digest into nginx core? No. 
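[Maxim's suggestion — basic auth carried inside TLS rather than digest auth — could be sketched roughly as below. The certificate paths, realm string, and htpasswd location are placeholders for illustration, not values from the thread:]

```nginx
server {
    listen 443 ssl;

    # Placeholder paths; substitute your own certificate and key.
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # The basic-auth credentials now travel inside the TLS tunnel,
        # which addresses the plaintext-password concern.
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```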
-- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Wed Aug 3 15:38:36 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 3 Aug 2016 20:38:36 +0500 Subject: NGINX http-secure-link 403 !! Message-ID: Hi, We've configured nginx --with-http_secure_link_module to secure the mp4 links. Currently we're testing it with very basic settings. Following is brief explanation of our lab : A test.mp4 file is located under directory /tunefiles/files/test.mp4 . Our objective is to access this file over secure link such as http://192.168.1.192/files/test.mp4?md5=XXXXXX&expire=2314444 . Here is the config of http-secure-link : http://pastebin.com/N41WQASj We've constructed the md5 & expire using following commands : #expiry date is 31st december $date -d "2016-12-31 23:59" +%s 1470240179 #md5 $echo -n '1470240179/files/test.mp4 secret' | openssl md5 -binary | openssl base64 | tr +/ -_ | tr -d = fY8Iyuqah9coPxTDk-UvVg Once everything is constructed, we loaded the following URL into the browser & encountered the error 403: http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 This is what we got in error-log : 2016/08/03 19:58:27 [error] 1227#1227: *1 open() "/etc/nginx/html/favicon.ico" failed (2: No such file or directory), client: 192.168.1.12, server: _, request: "GET /favicon.ico HTTP/1.1", host: "192.168.1.192", referrer: " http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 " ====================================================================== We're unable to get rid of this 403 so far & need help on where we're doing wrong ? Thanks for help in advance !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Aug 3 16:21:43 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Aug 2016 19:21:43 +0300 Subject: NGINX http-secure-link 403 !! 
In-Reply-To: References: Message-ID: <20160803162143.GT57459@mdounin.ru> Hello! On Wed, Aug 03, 2016 at 08:38:36PM +0500, shahzaib mushtaq wrote: > Hi, > > We've configured nginx --with-http_secure_link_module to secure the mp4 > links. Currently we're testing it with very basic settings. Following is > brief explanation of our lab : > > A test.mp4 file is located under directory /tunefiles/files/test.mp4 . Our > objective is to access this file over secure link such as > > http://192.168.1.192/files/test.mp4?md5=XXXXXX&expire=2314444 . > > Here is the config of http-secure-link : > > http://pastebin.com/N41WQASj > > We've constructed the md5 & expire using following commands : > > #expiry date is 31st december > $date -d "2016-12-31 23:59" +%s > 1470240179 > > #md5 > $echo -n '1470240179/files/test.mp4 secret' | openssl md5 -binary | openssl > base64 | tr +/ -_ | tr -d = > fY8Iyuqah9coPxTDk-UvVg > > > Once everything is constructed, we loaded the following URL into the > browser & encountered the error 403: > > http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 You are using "expire=" in the request, but "$arg_expires" in the configuration. Note the trailing "s". -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Wed Aug 3 16:30:33 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 3 Aug 2016 21:30:33 +0500 Subject: NGINX http-secure-link 403 !! In-Reply-To: <20160803162143.GT57459@mdounin.ru> References: <20160803162143.GT57459@mdounin.ru> Message-ID: Hi, Thanks for response though i already had fixed this mistake, during copy/paste the commands on this forum made a typo. Here you can see i've created date + md5 but still 403 error : http://prntscr.com/c1690d On Wed, Aug 3, 2016 at 9:21 PM, Maxim Dounin wrote: > Hello! > > On Wed, Aug 03, 2016 at 08:38:36PM +0500, shahzaib mushtaq wrote: > > > Hi, > > > > We've configured nginx --with-http_secure_link_module to secure the mp4 > > links. 
Currently we're testing it with very basic settings. Following is > > brief explanation of our lab : > > > > A test.mp4 file is located under directory /tunefiles/files/test.mp4 . > Our > > objective is to access this file over secure link such as > > > > http://192.168.1.192/files/test.mp4?md5=XXXXXX&expire=2314444 . > > > > Here is the config of http-secure-link : > > > > http://pastebin.com/N41WQASj > > > > We've constructed the md5 & expire using following commands : > > > > #expiry date is 31st december > > $date -d "2016-12-31 23:59" +%s > > 1470240179 > > > > #md5 > > $echo -n '1470240179/files/test.mp4 secret' | openssl md5 -binary | > openssl > > base64 | tr +/ -_ | tr -d = > > fY8Iyuqah9coPxTDk-UvVg > > > > > > Once everything is constructed, we loaded the following URL into the > > browser & encountered the error 403: > > > > > http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 > > You are using "expire=" in the request, but "$arg_expires" in the > configuration. Note the trailing "s". > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Aug 3 17:07:53 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 3 Aug 2016 19:07:53 +0200 Subject: Configuring nginx for both static pages and fcgi simultaneously In-Reply-To: <20160731235044.GF57459@mdounin.ru> References: <20160731231529.GD57459@mdounin.ru> <20160731235044.GF57459@mdounin.ru> Message-ID: I disagree: it is a good feature to check for script file existence before calling PHP on it with something like: try_files [...] =404; It helps mitigating attacks by avoiding to pave the way to undue files being interpreted. That only works if the filesystem containing PHP scripts is accessible from nginx aswell, ofc. --- *B. 
R.* On Mon, Aug 1, 2016 at 1:50 AM, Maxim Dounin wrote: > Hello! > > On Mon, Aug 01, 2016 at 01:38:29AM +0200, Richard Stanway wrote: > > > Are you sure you don't want to use try_files for this? > > If a required handling is known in advance there is no need to use > try_files and waste resources on it. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Aug 3 17:17:40 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 3 Aug 2016 22:17:40 +0500 Subject: NGINX http-secure-link 403 !! In-Reply-To: References: <20160803162143.GT57459@mdounin.ru> Message-ID: H, Can you please help to fix it ? expire= is already fixed but issue still persists. Regards. Shahzaib On Wed, Aug 3, 2016 at 9:30 PM, shahzaib mushtaq wrote: > Hi, > > Thanks for response though i already had fixed this mistake, during > copy/paste the commands on this forum made a typo. Here you can see i've > created date + md5 but still 403 error : > > http://prntscr.com/c1690d > > On Wed, Aug 3, 2016 at 9:21 PM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Aug 03, 2016 at 08:38:36PM +0500, shahzaib mushtaq wrote: >> >> > Hi, >> > >> > We've configured nginx --with-http_secure_link_module to secure the mp4 >> > links. Currently we're testing it with very basic settings. Following is >> > brief explanation of our lab : >> > >> > A test.mp4 file is located under directory /tunefiles/files/test.mp4 . >> Our >> > objective is to access this file over secure link such as >> > >> > http://192.168.1.192/files/test.mp4?md5=XXXXXX&expire=2314444 . 
>> > >> > Here is the config of http-secure-link : >> > >> > http://pastebin.com/N41WQASj >> > >> > We've constructed the md5 & expire using following commands : >> > >> > #expiry date is 31st december >> > $date -d "2016-12-31 23:59" +%s >> > 1470240179 >> > >> > #md5 >> > $echo -n '1470240179/files/test.mp4 secret' | openssl md5 -binary | >> openssl >> > base64 | tr +/ -_ | tr -d = >> > fY8Iyuqah9coPxTDk-UvVg >> > >> > >> > Once everything is constructed, we loaded the following URL into the >> > browser & encountered the error 403: >> > >> > >> http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 >> >> You are using "expire=" in the request, but "$arg_expires" in the >> configuration. Note the trailing "s". >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Aug 3 19:20:45 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Thu, 4 Aug 2016 00:20:45 +0500 Subject: NGINX http-secure-link 403 !! In-Reply-To: References: <20160803162143.GT57459@mdounin.ru> Message-ID: Looks like, its working now. Added root directive under server {} section & link got loaded into the browser though 403 still occurs in terminal when calling with curl (not much to worry about i guess) . On Wed, Aug 3, 2016 at 10:17 PM, shahzaib mushtaq wrote: > H, > > Can you please help to fix it ? expire= is already fixed but issue still > persists. > > Regards. > Shahzaib > > On Wed, Aug 3, 2016 at 9:30 PM, shahzaib mushtaq > wrote: > >> Hi, >> >> Thanks for response though i already had fixed this mistake, during >> copy/paste the commands on this forum made a typo. 
Here you can see i've >> created date + md5 but still 403 error : >> >> http://prntscr.com/c1690d >> >> On Wed, Aug 3, 2016 at 9:21 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Aug 03, 2016 at 08:38:36PM +0500, shahzaib mushtaq wrote: >>> >>> > Hi, >>> > >>> > We've configured nginx --with-http_secure_link_module to secure the mp4 >>> > links. Currently we're testing it with very basic settings. Following >>> is >>> > brief explanation of our lab : >>> > >>> > A test.mp4 file is located under directory /tunefiles/files/test.mp4 . >>> Our >>> > objective is to access this file over secure link such as >>> > >>> > http://192.168.1.192/files/test.mp4?md5=XXXXXX&expire=2314444 . >>> > >>> > Here is the config of http-secure-link : >>> > >>> > http://pastebin.com/N41WQASj >>> > >>> > We've constructed the md5 & expire using following commands : >>> > >>> > #expiry date is 31st december >>> > $date -d "2016-12-31 23:59" +%s >>> > 1470240179 >>> > >>> > #md5 >>> > $echo -n '1470240179/files/test.mp4 secret' | openssl md5 -binary | >>> openssl >>> > base64 | tr +/ -_ | tr -d = >>> > fY8Iyuqah9coPxTDk-UvVg >>> > >>> > >>> > Once everything is constructed, we loaded the following URL into the >>> > browser & encountered the error 403: >>> > >>> > >>> http://192.168.1.192/files/test.mp4?md5=fY8Iyuqah9coPxTDk-UvVg&expire=1470240179 >>> >>> You are using "expire=" in the request, but "$arg_expires" in the >>> configuration. Note the trailing "s". >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/ >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianevans at digitalhit.com Wed Aug 3 23:01:35 2016 From: ianevans at digitalhit.com (Ian Evans) Date: Wed, 3 Aug 2016 19:01:35 -0400 Subject: access log debugging Message-ID: Not sure if it's the heat and I'm tired... 
Had access logs off for a long time. Decided to start it up again to try and track down a bot issue. Added access_log /var/log/nginx/access.log; to my server config. Restarted. It creates a log file. Notice it has root:root permissions, while the error logs are nginx:adm. Wait. Access log stays at zero bytes. Go to a few pages on site. Zero. I'm missing something simple, right? From mdounin at mdounin.ru Thu Aug 4 00:35:24 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Aug 2016 03:35:24 +0300 Subject: Configuring nginx for both static pages and fcgi simultaneously In-Reply-To: References: <20160731231529.GD57459@mdounin.ru> <20160731235044.GF57459@mdounin.ru> Message-ID: <20160804003524.GV57459@mdounin.ru> Hello! On Wed, Aug 03, 2016 at 07:07:53PM +0200, B.R. wrote: > I disagree: it is a good feature to check for script file existence before > calling PHP on it with something like: > try_files [...] =404; > It helps mitigating attacks by avoiding to pave the way to undue files > being interpreted. > > That only works if the filesystem containing PHP scripts is accessible from > nginx aswell, ofc. While `try_files ... =404` may be usable to mitigate various PHP bugs and misconfigurations (assuming you don't care about efficiency), it's not something that can be used to differentiate static and dynamic content - and that's what the original question was about. Additionally, the original question suggests that it's not about PHP with multiple scripts, but instead a real FastCGI application. Which makes `try_files ... =404` completely wrong. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Aug 4 04:58:39 2016 From: nginx-forum at forum.nginx.org (jwxie) Date: Thu, 04 Aug 2016 00:58:39 -0400 Subject: Whitelist certain query string results in infinite redirect loop Message-ID: Hi. Our login page accepts a query parameter called client_id. 
Suppose we have three applications and their respective client_ids are:

client_id=external-app
client_id=internal-app1
client_id=internal-app2

You may guess... behind the scenes we do an OAuth login; that's why
"client_id" is in the URL. We do have separate nginx servers for handling
external users, so they won't actually be able to see the internal apps at
all. What they can do is try to brute-force a login to an internal app. A
whitelist approach is usually better, especially given we have more
internal applications than external ones.

To test my theory, I tried:

server {
    listen 0.0.0.0:80;
    server_name account.example.com;

    proxy_connect_timeout 60s;
    proxy_read_timeout 60s;
    proxy_send_timeout 60s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    large_client_header_buffers 4 16k;

    set $backend "some-aws-elb.us-east-1.elb.amazonaws.com";
    resolver 10.0.0.2 valid=60s;

    location / {
        proxy_pass http://$backend;
    }

    location /login {
        if ($args ~* "client_id=bad-client-id") {
            rewrite ^(.*)$ $1? redirect;
        }
        proxy_pass http://$backend;
    }
}

Great. It works. If I replace bad-client-id with "bad-app1", when the user
opens "http://login.example.org/login?client_id=bad-app1", the user is
redirected back to "http://login.example.org/login".

So my next step is to do the negation (which effectively means "if $args
does not match this whitelisted client id, redirect"), so the attacker
can't quite guess which ids are valid.

But I got a redirect loop:

if ($args !~* "client_id=good-client-id") {
    rewrite ^(.*)$ $1? redirect;
}

Can someone suggest why I am getting a redirect loop when I negate (!~*)?
This is running on port 80 for the sake of testing. Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268758,268758#msg-268758 From francis at daoine.org Thu Aug 4 07:24:06 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Aug 2016 08:24:06 +0100 Subject: Whitelist certain query string results in infinite redirect loop In-Reply-To: References: Message-ID: <20160804072406.GG12280@daoine.org> On Thu, Aug 04, 2016 at 12:58:39AM -0400, jwxie wrote: Hi there, > location /login { > if ($args ~* "client_id=bad-client-id") { > rewrite ^(.*)$ $1? redirect; > } That says: if I ask for /login/something?key=value&client_id=bad-client-id, I get a http redirect to /login/something. Then if I ask for /login/something, I do not match the "if" so I go to proxy_pass. > Great. It works. If I replace bad-client-id with "bad-app1", when the user > opens "http://login.example.org/login?client_id=bad-app1", the user is > redirected back to "http://login.example.org/login" > > So my next step is to do the negation (which effectively means "if $args > does not match this whitelisted client id, redirect), this way the attacker > can't quite guess which id is valid or not. > > But I got a redirect loop. > > if ($args !~* "client_id=good-client-id") { > rewrite ^(.*)$ $1? redirect; > } > > > Can someone suggest why I am getting a redirect loop? when I negate (!~*)? > This is running on port 80 for the sake of testing. That says: if I ask for /login/something?key=value&client_id=bad-client-id, I get a http redirect to /login/something. Then if I ask for /login/something, I match the "if" again so I get a http redirect to /login/something. That's the loop. I would suggest using map (http://nginx.org/r/map) to set a variable based on $arg_client_id; and then test for that variable in the "if". The exact logic will depend on what exactly you want to do, what input you expect, etc. 
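[The map-based approach Francis suggests can be sketched roughly as below. The names are hypothetical and the whitelist and backend must come from your own config; the key to avoiding the loop is that the redirect target /login carries no client_id at all, so the empty value has to be treated as acceptable.]

```nginx
# Outside the server block: classify $arg_client_id once.
# $arg_client_id is the "client_id" query argument, empty when absent.
map $arg_client_id $client_id_ok {
    default          0;  # anything unlisted is rejected
    ""               1;  # no client_id at all -- the redirect target itself
    "good-client-id" 1;  # whitelisted ids, one line each
}

server {
    # ... listen, server_name, proxy settings as before ...
    location /login {
        if ($client_id_ok = 0) {
            rewrite ^(.*)$ $1? redirect;
        }
        proxy_pass http://$backend;
    }
}
```

With a plain `!~*` test, a request for /login with no arguments also fails the whitelist and is redirected to itself; the empty-string entry in the map is what breaks that cycle.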
Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Thu Aug 4 12:27:53 2016
From: nginx-forum at forum.nginx.org (pixeye)
Date: Thu, 04 Aug 2016 08:27:53 -0400
Subject: Create custom variable based on another one
Message-ID: <40784b66ae544c62e9870ddcdcc3e63d.NginxMailingListEnglish@forum.nginx.org>

Hello there,

I'm trying to set a new variable based on another one.

I've found this response, which is kinda close to what I want to do:
https://www.ruby-forum.com/topic/6876231#1176774

map $request_uri $last_path {
    ~/(?<pathname>[^/]+)/?$ $pathname;
}

Except I want to get the first path level only (and not the file name)!

It seems to work fine even if the syntax seems not really correct; here is
what I came up with (and it's working):

map $request_uri $last_path {
    ~/(?P<pathname>[^\/]+)/?$ $pathname;
}

But as soon as I try to make changes in order to get the first path, my
$last_path value is empty...
I guess I didn't succeed in making all the changes required for it to
work...

Here are some of my attempts (please don't laugh), not working:

~/^\/(?P<pathname>[^\/]+)/? $pathname;
~/^\/(?P<pathname>)[^\/]+/? $pathname;
~/^[\/+](?P<pathname>[^\/]+)/? $pathname;
~/^[\/+](?P<pathname>)[^\/]+/? $pathname;

To be clear on what I expect:

/a for /a/b/c.png
/a for /a/c.png
/ for /c.php

Thanks in advance!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268761,268761#msg-268761

From me at myconan.net Thu Aug 4 12:58:04 2016
From: me at myconan.net (Edho Arief)
Date: Thu, 04 Aug 2016 21:58:04 +0900
Subject: Create custom variable based on another one
In-Reply-To: <40784b66ae544c62e9870ddcdcc3e63d.NginxMailingListEnglish@forum.nginx.org>
References: <40784b66ae544c62e9870ddcdcc3e63d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1470315484.3992101.685888329.72C6FB8E@webmail.messagingengine.com>

Hi,

On Thu, Aug 4, 2016, at 21:27, pixeye wrote:
> Hello there,
>
> I'm trying to set a new variable based on another one.
>
> I've found this response, which is kinda close to what I want to do:
> https://www.ruby-forum.com/topic/6876231#1176774
>
> map $request_uri $last_path {
>     ~/(?<pathname>[^/]+)/?$ $pathname;
> }
>
> Except I want to get the first path level only (and not the file name)!
>
> It seems to work fine even if the syntax seems not really correct; here is
> what I came up with (and it's working):
>
> map $request_uri $last_path {
>     ~/(?P<pathname>[^\/]+)/?$ $pathname;
> }
>
> But as soon as I try to make changes in order to get the first path, my
> $last_path value is empty...
> I guess I didn't succeed in making all the changes required for it to
> work...
>
> Here are some of my attempts (please don't laugh), not working:
> ~/^\/(?P<pathname>[^\/]+)/? $pathname;
> ~/^\/(?P<pathname>)[^\/]+/? $pathname;
> ~/^[\/+](?P<pathname>[^\/]+)/? $pathname;
> ~/^[\/+](?P<pathname>)[^\/]+/? $pathname;
>

`~` already starts the regexp, and the forward slash doesn't need to be
escaped. Also, it depends on your definition of "file name":

map ... {
    ~^(/[^/]+)/ $1;
    default /;
}

map ... {
    ~^(/[^/.]+)(/|$) $1;
    default /;
}

From nginx-forum at forum.nginx.org Thu Aug 4 13:38:31 2016
From: nginx-forum at forum.nginx.org (pixeye)
Date: Thu, 04 Aug 2016 09:38:31 -0400
Subject: Create custom variable based on another one
In-Reply-To: <1470315484.3992101.685888329.72C6FB8E@webmail.messagingengine.com>
References: <1470315484.3992101.685888329.72C6FB8E@webmail.messagingengine.com>
Message-ID:

Hai,

Thanks for your response! $1 did not work for me:

nginx: [emerg] unknown "1" variable
nginx: configuration file /etc/nginx/nginx.conf test failed

For future reference, here is what I used:

map $request_uri $path {
    ~^(?P<pathname>/[^/]+)/ $pathname;
    default /;
}

It seems to work in all my cases. But, if you want to explain, I'm not
sure what this does: (/|$)

Thanks!
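[On the `(/|$)` question: the alternation means the captured first segment must be followed by either another slash or the end of the string, so a bare `/a` still matches, while the first variant — which demands a trailing slash — does not. A small sketch using Python's `re` (these particular patterns behave the same under PCRE) to illustrate the difference:]

```python
import re

# The two patterns from the thread above
first_level = re.compile(r'^(/[^/]+)/')              # requires a '/' after the segment
first_level_or_end = re.compile(r'^(/[^/.]+)(/|$)')  # '(/|$)': slash OR end of string

def first_path(pattern, uri, default='/'):
    """Mimic the map: return the captured group on match, the default otherwise."""
    m = pattern.match(uri)
    return m.group(1) if m else default

assert first_path(first_level, '/a/b/c.png') == '/a'
assert first_path(first_level, '/a') == '/'             # no trailing slash -> default
assert first_path(first_level_or_end, '/a') == '/a'     # end of string satisfies (/|$)
assert first_path(first_level_or_end, '/c.php') == '/'  # '.' excluded by [^/.] -> default
```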
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268761,268768#msg-268768

From jiobxn at gmail.com Thu Aug 4 15:00:10 2016
From: jiobxn at gmail.com (jiobxn)
Date: Thu, 4 Aug 2016 23:00:10 +0800
Subject: Hey guys! nginx reverse proxy facebook \ twitter \ youtube, can not be login
Message-ID:

Because of Internet censorship, I reverse proxy facebook/twitter/youtube.
The sites can be accessed, but I can not log in, and videos can not play.

Like: www.you-tube.com.example.com -> www.youtube.com

location / {
    resolver 8.8.8.8;
    set $domain example.com;
    if ($host ~* "^(.*)-(.*).example.com$" ) {set $domains $1$2;}
    proxy_pass https://$domains;
    proxy_http_version 1.1;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header Host $proxy_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-By $server_addr:$server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Referer $host;
    proxy_set_header Accept-Encoding "";
    sub_filter_once off;
    sub_filter_types *;
    sub_filter youtube.com you-tube.com.$domain;
    sub_filter googlevideo.com goog-levideo.com.$domain;
    sub_filter $proxy_host $host;
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Aug 4 15:03:34 2016
From: nginx-forum at forum.nginx.org (lukemroz)
Date: Thu, 04 Aug 2016 11:03:34 -0400
Subject: Basic Question: Unable to access https:// with www prefix
Message-ID: <7afd813d6ac264f08f51648200beb927.NginxMailingListEnglish@forum.nginx.org>

Hello,

I followed the instructions at Digital Ocean for setting up a WordPress
installation, including enabling HTTPS on the nginx server.

(The instructions are here:
https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04)

When accessing https://www.comfortglobalhealth.com, I am always redirected
to https://comfortglobalhealth.com.
Can someone suggest what change I need to make to my nginx configuration file so that this redirect doesn't happen? Thanks, Luke Here is my config: server { listen 80 default_server; listen [::]:80 default_server; server_name comfortglobalhealth.com www.comfortglobalhealth.com; return 301 https://$server_name$request_uri; } server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; include snippets/ssl-comfortglobalhealth.com.conf; include snippets/ssl-params.conf; root /var/www/html; index index.php index.html index.htm index.nginx-debian.html; server_name comfortglobalhealth.com www.comfortglobalhealth.com; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { log_not_found off; access_log off; allow all; } location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ { expires max; log_not_found off; } location / { try_files $uri $uri/ /index.php$is_args$args; } location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass 127.0.0.1:9000; } location ~ /\.ht { deny all; } location ~ /.well-known { allow all; } } Here are my snippets: ssl-comfortglobalhealth.com.conf: ssl_certificate /etc/letsencrypt/live/www.comfortglobalhealth.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/www.comfortglobalhealth.com/privkey.pem; ssl-params.conf: ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_ecdh_curve secp384r1; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; ssl_dhparam /etc/ssl/certs/dhparam.pem; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268775,268775#msg-268775 From zclaudio at bsd.com.br Thu Aug 4 15:40:30 2016 From: zclaudio at bsd.com.br (Ze 
Claudio Pastore) Date: Thu, 4 Aug 2016 12:40:30 -0300 Subject: Help with: no resolver defined to resolve upstream-name Message-ID: Hello, (Short version: what could cause nginx to try to resolve a name used as the upstream backend configured name? Long version follows in the body of the message). I have a weird problem, which might be something simple I am missing, I searched the archives and read the documentation twice and still I am missing something. In one site / system I have the following simple conf: http { ... upstream app1-backend { sticky name=app1cookie hash=md5 secure httponly; server 192.168.0.4; server 192.168.1.4; } ... } server { ... location / { ... proxy_pass http://app1-backend; ... } ... } And it works perfectly. In a second server I have almost the same conf: http { ... upstream app1-backend { sticky name=app1cookie hash=md5 secure httponly; server 192.168.1.4; server 192.168.0.4; } ... } server { ... location / { ... proxy_pass http://app1-backend; ... } ... } They are two individual nginx reverse proxies in two different data centers. The first nginx/site works great, but on the second I get the following error: 2016/08/03 18:09:56 [error] 94482#100729: *34 no resolver defined to resolve app1-backend, client: 192.168.1.20, server:app1.mydomain.com, request: "GET / HTTP/1.1" So it looks like, for some weird reason, nginx on second site is trying to resolve app1-backend as a host. Proof is, if I add it to /etc/hosts it works. But obviously this is not what I want, the proxy_pass is pointed to an upstream backend not a host. So, first site works fine, second wants to resolve app1-backend like if it was not an upstream block. Both sites should have the very same config, I have compared with diff -u but I still can't figure out what's going wrong here. My guess this is a basic mistake but I can't see. Any obvious diagnostics here? Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r1ch+nginx at teamliquid.net Thu Aug 4 16:43:53 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 4 Aug 2016 18:43:53 +0200 Subject: Basic Question: Unable to access https:// with www prefix In-Reply-To: <7afd813d6ac264f08f51648200beb927.NginxMailingListEnglish@forum.nginx.org> References: <7afd813d6ac264f08f51648200beb927.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is not nginx redirecting, as there is no response body. Most likely it is your wordpress configuration that needs attention. On Thu, Aug 4, 2016 at 5:03 PM, lukemroz wrote: > Hello, > > I followed the instructions at Digital Ocean for setting up a WordPress > installation, including enabling HTTPS on the nginx server. > > (The instructions are here: > https://www.digitalocean.com/community/tutorials/how-to- > secure-nginx-with-let-s-encrypt-on-ubuntu-16-04) > > When accessing https://www.comfortglobalhealth.com, I am always redirected > to https://comfortglobalhealth.com. Can someone suggest what change I > need > to make to my nginx configuration file so that this redirect doesn't > happen? 
> > Thanks, > Luke > > Here is my config: > > server { > listen 80 default_server; > listen [::]:80 default_server; > server_name comfortglobalhealth.com www.comfortglobalhealth.com; > return 301 https://$server_name$request_uri; > } > > server { > listen 443 ssl http2 default_server; > listen [::]:443 ssl http2 default_server; > include snippets/ssl-comfortglobalhealth.com.conf; > include snippets/ssl-params.conf; > root /var/www/html; > index index.php index.html index.htm index.nginx-debian.html; > server_name comfortglobalhealth.com www.comfortglobalhealth.com; > location = /favicon.ico { log_not_found off; access_log off; } > location = /robots.txt { log_not_found off; access_log off; allow all; > } > location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ { > expires max; > log_not_found off; > } > > location / { > try_files $uri $uri/ /index.php$is_args$args; > } > location ~ \.php$ { > include snippets/fastcgi-php.conf; > > fastcgi_pass 127.0.0.1:9000; > } > location ~ /\.ht { > deny all; > } > location ~ /.well-known { > allow all; > } > } > > Here are my snippets: > > ssl-comfortglobalhealth.com.conf: > ssl_certificate > /etc/letsencrypt/live/www.comfortglobalhealth.com/fullchain.pem; > ssl_certificate_key > /etc/letsencrypt/live/www.comfortglobalhealth.com/privkey.pem; > > ssl-params.conf: > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_prefer_server_ciphers on; > ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; > ssl_ecdh_curve secp384r1; > ssl_session_cache shared:SSL:10m; > ssl_session_tickets off; > ssl_stapling on; > ssl_stapling_verify on; > resolver 8.8.8.8 8.8.4.4 valid=300s; > resolver_timeout 5s; > add_header Strict-Transport-Security "max-age=63072000; includeSubdomains"; > add_header X-Frame-Options DENY; > add_header X-Content-Type-Options nosniff; > ssl_dhparam /etc/ssl/certs/dhparam.pem; > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,268775,268775#msg-268775 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Aug 4 16:45:48 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 4 Aug 2016 17:45:48 +0100 Subject: Basic Question: Unable to access https:// with www prefix In-Reply-To: <7afd813d6ac264f08f51648200beb927.NginxMailingListEnglish@forum.nginx.org> References: <7afd813d6ac264f08f51648200beb927.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160804164548.GH12280@daoine.org> On Thu, Aug 04, 2016 at 11:03:34AM -0400, lukemroz wrote: Hi there, > When accessing https://www.comfortglobalhealth.com, I am always redirected > to https://comfortglobalhealth.com. Can someone suggest what change I need > to make to my nginx configuration file so that this redirect doesn't > happen? No part of your shown nginx config should do that redirect. Does the same redirect happen if you request a url which is served from the filesystem, and not by WordPress? (Or if you temporarily disable WordPress?) My guess is that you have configured WordPress to do the redirect. In which case - configure it differently. Cheers, f -- Francis Daly francis at daoine.org From ambadiaravind at gmail.com Fri Aug 5 10:34:00 2016 From: ambadiaravind at gmail.com (aRaviNd) Date: Fri, 5 Aug 2016 16:04:00 +0530 Subject: Nginx reverse proxy image not loading Message-ID: Hi All, I am trying to configure Nginx as a reverse proxy for one of the web servers. Web application is working fine but dynamic images are not loading, graphs are configured as img_src with an http url. On the web server I am seeing error "Critical: Malicious cross-site request forgery detected". 
Nginx is configured with the below set headers:

proxy_set_header HOST $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Could you please help me figure out why the images are not loading?

- Aravind
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From idefix at fechner.net Fri Aug 5 15:10:46 2016
From: idefix at fechner.net (Matthias Fechner)
Date: Fri, 5 Aug 2016 17:10:46 +0200
Subject: Auth_digest not working
In-Reply-To: <20160803123105.GL57459@mdounin.ru>
References: <01c22c88-23f9-d38d-5ce2-0bdcb6c89f3d@fechner.net> <20160731225343.GC57459@mdounin.ru> <3e466571-f8a4-343d-da8b-ffdd5c15bcb6@fechner.net> <20160803123105.GL57459@mdounin.ru>
Message-ID:

On 03.08.2016 at 14:31, Maxim Dounin wrote:
> To protect passwords in transit consider using SSL/TLS instead.

Hehe, thanks, I expected that answer and I'm using it already. But
auth_basic still transfers the password itself, which is not nice, no
matter whether the connection is encrypted or not. It would be nice if
nginx supported an authentication method that does not transfer the
password at all, rather than one that merely relies on the transport
being encrypted.

>> > Are there plans to get auth-digest into the nginx core?
> No.

Thanks for this clear answer.

Gruß
Matthias

--
"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the universe trying to
produce bigger and better idiots. So far, the universe is winning." --
Rich Cook
I am trying to figure out if the error is on nginx or perl client https://github.com/chansen/p5-http-tiny/issues/92 Best, -- --------------------------------------------------------------------------------------------------------------------- () ascii ribbon campaign - against html e-mail /\ From rpaprocki at fearnothingproductions.net Sat Aug 6 19:21:41 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 6 Aug 2016 12:21:41 -0700 Subject: PUT files and HTTP::Tiny ( chunked transfert ) In-Reply-To: References: Message-ID: What version of nginx are you running? This sounds similar to a bug (actually a CVE because it resulted in a segfault) that was patched in 1.10.1 stable branch, and modern 1.11 versions as well. > On Aug 6, 2016, at 10:36, sven falempin wrote: > > Dear Nginx List Readers, > > I am trying to send files to nginx dav mod with perl. > When using simple transfer with one big chunk in the BODY > it s ok. > > But if i want to send the file chunk by chunk, i have issue. > > I am trying to figure out if the error is on nginx or perl client > > https://github.com/chansen/p5-http-tiny/issues/92 > > Best, > > -- > --------------------------------------------------------------------------------------------------------------------- > () ascii ribbon campaign - against html e-mail > /\ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From sven.falempin at gmail.com Sat Aug 6 19:44:43 2016 From: sven.falempin at gmail.com (sven falempin) Date: Sat, 6 Aug 2016 15:44:43 -0400 Subject: PUT files and HTTP::Tiny ( chunked transfert ) In-Reply-To: References: Message-ID: nginx version: nginx/1.9.10 On Sat, Aug 6, 2016 at 3:21 PM, Robert Paprocki wrote: > What version of nginx are you running? This sounds similar to a bug (actually a CVE because it resulted in a segfault) that was patched in 1.10.1 stable branch, and modern 1.11 versions as well. 
> > >> On Aug 6, 2016, at 10:36, sven falempin wrote: >> >> Dear Nginx List Readers, >> >> I am trying to send files to nginx dav mod with perl. >> When using simple transfer with one big chunk in the BODY >> it s ok. >> >> But if i want to send the file chunk by chunk, i have issue. >> >> I am trying to figure out if the error is on nginx or perl client >> >> https://github.com/chansen/p5-http-tiny/issues/92 >> >> Best, >> >> -- >> --------------------------------------------------------------------------------------------------------------------- >> () ascii ribbon campaign - against html e-mail >> /\ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- --------------------------------------------------------------------------------------------------------------------- () ascii ribbon campaign - against html e-mail /\ From sven.falempin at gmail.com Sat Aug 6 22:05:48 2016 From: sven.falempin at gmail.com (sven falempin) Date: Sat, 6 Aug 2016 18:05:48 -0400 Subject: PUT files and HTTP::Tiny ( chunked transfert ) In-Reply-To: References: Message-ID: upgraded to more recent version, nginx/1.10.1, with CVE same result :-( On Sat, Aug 6, 2016 at 3:44 PM, sven falempin wrote: > nginx version: nginx/1.9.10 > > On Sat, Aug 6, 2016 at 3:21 PM, Robert Paprocki > wrote: >> What version of nginx are you running? This sounds similar to a bug (actually a CVE because it resulted in a segfault) that was patched in 1.10.1 stable branch, and modern 1.11 versions as well. >> >> >>> On Aug 6, 2016, at 10:36, sven falempin wrote: >>> >>> Dear Nginx List Readers, >>> >>> I am trying to send files to nginx dav mod with perl. >>> When using simple transfer with one big chunk in the BODY >>> it s ok. 
>>> >>> But if i want to send the file chunk by chunk, i have issue. >>> >>> I am trying to figure out if the error is on nginx or perl client >>> >>> https://github.com/chansen/p5-http-tiny/issues/92 >>> >>> Best, >>> >>> -- >>> --------------------------------------------------------------------------------------------------------------------- >>> () ascii ribbon campaign - against html e-mail >>> /\ >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > --------------------------------------------------------------------------------------------------------------------- > () ascii ribbon campaign - against html e-mail > /\ -- --------------------------------------------------------------------------------------------------------------------- () ascii ribbon campaign - against html e-mail /\ From nginx-forum at forum.nginx.org Sun Aug 7 12:43:04 2016 From: nginx-forum at forum.nginx.org (anish10dec) Date: Sun, 07 Aug 2016 08:43:04 -0400 Subject: Nginx Caching Error Response Code like 400 , 500 , 503 ,etc Message-ID: <8af7450ace4c6d72b3d1a0d93cfc1364.NginxMailingListEnglish@forum.nginx.org> Hi Everyone, We are using Nginx as Caching Server . As per Nginx Documentation by default nginx caches 200, 301 & 302 response code but we are observing that if Upstream server gives error 400 or 500 or 503, etc , response is getting cached and all other requests for same file becomes HIT. Though if we set proxy_cache_valid specifying response code ( like proxy_cache_valid 200 15m; ) then also its caching the error response code but its not caching 301 & 302 in that case. Why the same is not getting applied for error response code. Is this the behaviour of Nginx or bug in Nginx ? 
We are using 1.4.0 version of Nginx Please help so that error response codes should not get cached as this is giving the same error response to users who are requesting for the file though upstream server is healthy and ok to serve the request. Regards, Anish Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268813,268813#msg-268813 From wandenberg at gmail.com Sun Aug 7 13:19:30 2016 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Sun, 7 Aug 2016 10:19:30 -0300 Subject: Nginx Caching Error Response Code like 400 , 500 , 503 ,etc In-Reply-To: <8af7450ace4c6d72b3d1a0d93cfc1364.NginxMailingListEnglish@forum.nginx.org> References: <8af7450ace4c6d72b3d1a0d93cfc1364.NginxMailingListEnglish@forum.nginx.org> Message-ID: Check if your backend server is setting cache headers on errors like Cache-Control / Expires. Nginx by default uses these headers to know if the response should be cached or not. When these headers are not present it uses the configuration done with proxy_cache_valid. On Sun, Aug 7, 2016 at 9:43 AM, anish10dec wrote: > Hi Everyone, > > We are using Nginx as Caching Server . > > As per Nginx Documentation by default nginx caches 200, 301 & 302 response > code but we are observing that if Upstream server gives error 400 or 500 or > 503, etc , response is getting cached and all other requests for same file > becomes HIT. > > Though if we set proxy_cache_valid specifying response code ( like > proxy_cache_valid 200 15m; ) then also its caching the error response code > but its not caching 301 & 302 in that case. Why the same is not getting > applied for error response code. > > Is this the behaviour of Nginx or bug in Nginx ? We are using 1.4.0 version > of Nginx > > Please help so that error response codes should not get cached as this is > giving the same error response to users who are requesting for the file > though upstream server is healthy and ok to serve the request. 
> Regards,
> Anish
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,268813,268813#msg-268813
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Sun Aug 7 13:27:33 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 7 Aug 2016 16:27:33 +0300
Subject: PUT files and HTTP::Tiny ( chunked transfert )
In-Reply-To: References:
Message-ID: <20160807132733.GI57459@mdounin.ru>

Hello!

On Sat, Aug 06, 2016 at 01:36:41PM -0400, sven falempin wrote:

> I am trying to send files to nginx dav mod with perl.
> When using simple transfer with one big chunk in the BODY
> it s ok.
>
> But if i want to send the file chunk by chunk, i have issue.
>
> I am trying to figure out if the error is on nginx or perl client
>
> https://github.com/chansen/p5-http-tiny/issues/92

The packet trace as shown in the ticket doesn't contain the second CRLF
after the last chunk:

1470503305.887605 fe:e1:ba:d6:38:45 fe:e1:ba:d0:36:d6 0800 95: 10.32.0.18.2505 > 10.32.0.254.80: P 157:186(29) ack 1 win 2048 (DF)
0000: fee1 bad0 36d6 fee1 bad6 3845 0800 4500  ....6.....8E..E.
0010: 0051 a9a8 4000 4006 7baf 0a20 0012 0a20  .Q..@.@.{.. ...
0020: 00fe 09c9 0050 2155 27a1 d049 23c4 8018  .....P!U'..I#...
0030: 0800 1a50 0000 0101 080a 8667 d148 73ed  ...P.......g.Hs.
0040: bb09 3134 0d0a 6441 5441 4441 5441 5444  ..14..dATADATATD
0050: 4154 4154 4144 4154 4154 0d0a 300d 0a    ATATADATAT..0..

Note that the last chunk ("0" CRLF) is not followed by anything.
On the other hand, it must be followed by a trailer (possibly empty)
and an additional CRLF, http://tools.ietf.org/html/rfc2616#section-3.6.1:

    Chunked-Body   = *chunk
                     last-chunk
                     trailer
                     CRLF

    last-chunk     = 1*("0") [ chunk-extension ] CRLF

    trailer        = *(entity-header CRLF)

That is, this is clearly a bug in HTTP::Tiny. Should be trivial
to fix though.
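[For reference, a well-formed chunked body per the grammar above ends with the last-chunk, an (empty) trailer, and a final CRLF — the part missing from the trace. A small illustrative sketch in Python (not HTTP::Tiny) of a correct encoder:]

```python
def chunked_body(chunks):
    """Encode an iterable of byte strings as an HTTP/1.1 chunked message body."""
    out = b''
    for chunk in chunks:
        if not chunk:
            continue  # a zero-length data chunk would read as the last-chunk marker
        # chunk-size in hex, CRLF, chunk-data, CRLF
        out += b'%x\r\n' % len(chunk) + chunk + b'\r\n'
    # last-chunk ("0" CRLF), empty trailer, terminating CRLF --
    # the final CRLF is exactly what the captured trace omits
    out += b'0\r\n\r\n'
    return out

# 20 bytes of data -> chunk-size "14" (hex), then the proper terminator
assert chunked_body([b'dATADATATDATATADATAT']) == b'14\r\ndATADATATDATATADATAT\r\n0\r\n\r\n'
```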
-- Maxim Dounin http://nginx.org/ From sven.falempin at gmail.com Sun Aug 7 13:41:01 2016 From: sven.falempin at gmail.com (sven falempin) Date: Sun, 7 Aug 2016 09:41:01 -0400 Subject: PUT files and HTTP::Tiny ( chunked transfert ) In-Reply-To: <20160807132733.GI57459@mdounin.ru> References: <20160807132733.GI57459@mdounin.ru> Message-ID: On Sun, Aug 7, 2016 at 9:27 AM, Maxim Dounin wrote: > Hello! > > On Sat, Aug 06, 2016 at 01:36:41PM -0400, sven falempin wrote: > >> I am trying to send files to nginx dav mod with perl. >> When using simple transfer with one big chunk in the BODY >> it s ok. >> >> But if i want to send the file chunk by chunk, i have issue. >> >> I am trying to figure out if the error is on nginx or perl client >> >> https://github.com/chansen/p5-http-tiny/issues/92 > > Packet trace as shown in the ticket doesn't contain second CRLF > after the last chunk: > > 1470503305.887605 fe:e1:ba:d6:38:45 fe:e1:ba:d0:36:d6 0800 95: 10.32.0.18.2505 > 10.32.0.254.80: P 157:186(29) ack 1 win 2048 (DF) > 0000: fee1 bad0 36d6 fee1 bad6 3845 0800 4500 ....6.....8E..E. > 0010: 0051 a9a8 4000 4006 7baf 0a20 0012 0a20 .Q.. at .@.{.. ... > 0020: 00fe 09c9 0050 2155 27a1 d049 23c4 8018 .....P!U'..I#... > 0030: 0800 1a50 0000 0101 080a 8667 d148 73ed ...P.......g.Hs. > 0040: bb09 3134 0d0a 6441 5441 4441 5441 5444 ..14..dATADATATD > 0050: 4154 4154 4144 4154 4154 0d0a 300d 0a ATATADATAT..0.. > > Note that last chunk ("0" CRLF) is not followed by anything. > On the other hand, it must be followed by trailer (possibly empty) > and additional CRLF, > httpe://tools.ietf.org/html/rfc2616#section-3.6.1: > > Chunked-Body = *chunk > last-chunk > trailer > CRLF > > last-chunk = 1*("0") [ chunk-extension ] CRLF > > trailer = *(entity-header CRLF) > > That is, this is clearly a bug in HTTP::Tiny. Should be trivial > to fix though. 
> You're the best, thank you :-)

-- 
---------------------------------------------------------------------------------------------------------------------
() ascii ribbon campaign - against html e-mail
/\

From vbart at nginx.com Sun Aug 7 20:34:25 2016
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sun, 07 Aug 2016 23:34:25 +0300
Subject: Nginx Caching Error Response Code like 400 , 500 , 503 ,etc
In-Reply-To: <8af7450ace4c6d72b3d1a0d93cfc1364.NginxMailingListEnglish@forum.nginx.org>
References: <8af7450ace4c6d72b3d1a0d93cfc1364.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1834185.ix5gSIlU6G@vbart-laptop>

On Sunday 07 August 2016 08:43:04 anish10dec wrote:
> Hi Everyone,
>
> We are using Nginx as Caching Server .
>
> As per Nginx Documentation by default nginx caches 200, 301 & 302 response
> code but we are observing that if Upstream server gives error 400 or 500 or
> 503, etc , response is getting cached and all other requests for same file
> becomes HIT.
[..]

No, the nginx documentation doesn't say this. A quote from the docs:

| Parameters of caching can also be set directly in the response header.
| This has higher priority than setting of caching time using the directive.

The "200, 301 & 302" codes are mentioned in the "proxy_cache_valid" directive when only the "time" parameter is specified.

http://nginx.org/r/proxy_cache_valid

wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Mon Aug 8 09:50:54 2016
From: nginx-forum at forum.nginx.org (msonntag)
Date: Mon, 08 Aug 2016 05:50:54 -0400
Subject: Return proper status codes (404, 302) from client-side Single Page Application
Message-ID: <643646ffbf6e8fbbe8722d779f98d25b.NginxMailingListEnglish@forum.nginx.org>

Hello,

I have the following scenario:

- Client: AngularJS-based SPA running on www.example.com
- Backend: API running on api.example.com

Both live in one nginx instance in two separate "server" environments.
- Browsing to www.example.com/items/1 launches the Angular app
- App sends request to api.example.com/items/1
- If item 1 does not exist, API returns 404 status code
- Client app can now show soft 404 error page, all fine
- But for crawlers/search engines, I want to return a proper HTTP status code. Same goes for redirecting to the item's canonical URL if that is necessary.

So my idea was to do something like this:

- If request URL matches www.example.com/items/, check existence of item by sending a HEAD request to api.example.com/items/
- If request returns 404, return proper status code and error page
- If request returns 200, do nothing and just serve the Angular app

Is there any way to do this with (plain) nginx and if so, how could it be done specifically?

Thanks for any hints :)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268827#msg-268827

From reallfqq-nginx at yahoo.fr Mon Aug 8 10:03:24 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 8 Aug 2016 12:03:24 +0200
Subject: Return proper status codes (404, 302) from client-side Single Page Application
In-Reply-To: <643646ffbf6e8fbbe8722d779f98d25b.NginxMailingListEnglish@forum.nginx.org>
References: <643646ffbf6e8fbbe8722d779f98d25b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I find it strange you oppose HTTP 404 with 'a proper status code': 404 is a 'proper' status code.
I find it even stranger you want to lie to search engine crawlers about the existence of your resource.

That said, you can craft/modify upstream requests in the proxy module with directives such as:
- proxy_method
- proxy_set_header
- proxy_set_body
(basically RTFM, proxy module)

When dealing with upstream response content, you can use variables such as:
- $upstream_status
(basically RTFM, upstream module)

And for setting answer content conditionally, there is the map module, which you will find most helpful.
---
*B. R.*

On Mon, Aug 8, 2016 at 11:50 AM, msonntag wrote:

> Hello,
>
> I have the following scenario:
>
> - Client: AngularJS-based SPA running on www.example.com
> - Backend: API running on api.example.com
>
> Both live in one nginx instance in two separate "server" environments.
>
> - Browsing to www.example.com/items/1 launches the Angular app
> - App sends request to api.example.com/items/1
> - If item 1 does not exist, API returns 404 status code
> - Client app can now show soft 404 error page, all fine
> - But for crawlers/search engines, I want to return a proper HTTP status
> code. Same goes for redirect to item's canonical URL if that is necessary.
>
> So my idea was to do sth like this:
>
> - If request URL matches www.example.com/items/, check existence of item
> by sending a HEAD request to api.example.com/items/
> - If request returns 404, return proper status code and error page
> - If request returns 200, do nothing and just serve the Angular app
>
> Is there any way to do this with (plain) nginx and if so, how could it be
> done specifically?
>
> Thanks for any hints :)
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268827#msg-268827
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Mon Aug 8 10:15:29 2016
From: nginx-forum at forum.nginx.org (msonntag)
Date: Mon, 08 Aug 2016 06:15:29 -0400
Subject: Return proper status codes (404, 302) from client-side Single Page Application
In-Reply-To: 
References: 
Message-ID: <6f3556f479c859b05e307f4c0072f90b.NginxMailingListEnglish@forum.nginx.org>

Hey B. R.,

Thanks for your reply! I will have a look at the provided resources.

As per your comment:

> I find it strange you oppose HTTP 404 with 'a proper status code': 404 is a 'proper' status code.
> I find it even stranger you want to lie to search engine crawlers about the existence of your resource.

Maybe my wording was just misleading. Of course 404 is a "proper" status code, and my posting was all about how to be able to return the appropriate code (404 in case the item does not exist) to the crawler instead of the 200 that it will see if I just deliver the client application and show a "soft" 404 error. That is my current setup, which I want to improve by checking the item's existence prior to delivering the response.

Thanks again.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268829#msg-268829

From reallfqq-nginx at yahoo.fr Mon Aug 8 14:42:34 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 8 Aug 2016 16:42:34 +0200
Subject: Return proper status codes (404, 302) from client-side Single Page Application
In-Reply-To: <6f3556f479c859b05e307f4c0072f90b.NginxMailingListEnglish@forum.nginx.org>
References: <6f3556f479c859b05e307f4c0072f90b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Sending 404 allows providing body content, and displaying beautiful pages is not restricted to 200. Thus, I do not get the 200 status sent to clients.

My suggestion would be sending the same HTTP status code to everyone, choosing the most semantically correct one in doing so.

We are drifting away from the topic, though. You got some technical answers to your original question; the rest remains out of scope.
---
*B. R.*

On Mon, Aug 8, 2016 at 12:15 PM, msonntag wrote:

> Hey B. R.,
>
> Thanks for your reply! I will have a look at the provided resources.
>
> As per your comment:
>
> > I find it strange you oppose HTTP 404 with 'a proper status code': 404 is
> a 'proper' status code.
> > I find it even stranger you want to lie to search engine crawlers about
> the existence of your resource.
>
> Maybe my wording was just misleading.
Of course 404 is a "proper" status
> code, and my posting was all about how to be able to return the appropriate
> code (404 in case the item does not exist) to the crawler instead of the 200
> that it will see if I just deliver the client application and show a "soft"
> 404 error, which is my current setup, which I want to improve by checking
> the item's existence prior to delivering the response.
>
> Thanks again.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268829#msg-268829
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Mon Aug 8 15:07:30 2016
From: nginx-forum at forum.nginx.org (msonntag)
Date: Mon, 08 Aug 2016 11:07:30 -0400
Subject: Return proper status codes (404, 302) from client-side Single Page Application
In-Reply-To: 
References: 
Message-ID: <373b9e8c126b90e3a75d2b66e68b8e50.NginxMailingListEnglish@forum.nginx.org>

Hey B. R.,

Thanks for getting back to me. I am pretty sure that I was not able to make my point very clear.

The main point is that a client accessing www.example.com/items/1 is simply delivered an HTML file bootstrapping an AngularJS app. The JS client will then make a separate request to api.example.com/items/1 to fetch the data. At this point, if the API returns a 404, there is no way to send a 404 for the initial request, as the response was already sent.

So my idea was to make a "precheck" request via nginx to check the item's existence prior to delivering the HTML file containing the JS client.

Another way would be to have the server-side application deliver the client bootstrapping file, but that's not the way the architecture is currently set up.

Don't know if that makes sense. Looking at the documentation links you provided, I do not really know what you are suggesting as a solution.
Maybe these SO questions are able to explain it better?!

http://stackoverflow.com/questions/35989950/how-to-notify-googlebot-about-404-pages-in-angular-spa
http://stackoverflow.com/questions/37334220/how-do-i-return-a-http-404-status-code-from-a-spa
http://stackoverflow.com/questions/14779190/in-a-single-page-app-what-is-the-right-way-to-deal-with-wrong-urls-404-errors

Thanks again for your input.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268845#msg-268845

From me at myconan.net Mon Aug 8 15:17:42 2016
From: me at myconan.net (Edho Arief)
Date: Tue, 09 Aug 2016 00:17:42 +0900
Subject: Return proper status codes (404, 302) from client-side Single Page Application
In-Reply-To: <373b9e8c126b90e3a75d2b66e68b8e50.NginxMailingListEnglish@forum.nginx.org>
References: <373b9e8c126b90e3a75d2b66e68b8e50.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1470669462.2036047.689242793.2A265A27@webmail.messagingengine.com>

Hi,

On Tue, Aug 9, 2016, at 00:07, msonntag wrote:
> Hey B. R.,
>
> Thanks for getting back to me. I am pretty sure that I was not able to
> make my point very clear.
>
> The main point is that a client accessing www.example.com/items/1 is
> simply delivered a HTML file bootstrapping an AngularJS app. The JS client will
> then make a separate request to api.example.com/items/1 to fetch the
> data.

If you are going so far as to hit the API server anyway, why not proxy all requests (except assets) and serve the index page if it returns 200 and the request type is html? nginx can then intercept errors and return the correct html error page.
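The error-interception half of this suggestion can be sketched in plain nginx configuration. This is an illustrative partial sketch, not a tested setup: the hostnames and paths are invented, and serving the SPA shell for successful HTML requests would still need extra logic on top of it:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Assets are served directly, bypassing the API.
    location /static/ {
        root /var/www/spa;  # hypothetical asset root
    }

    # Page requests are checked against the API first.
    location /items/ {
        proxy_pass http://api.example.com;

        # Hand upstream error statuses to error_page, so crawlers
        # see a real 404 status instead of a "soft" one.
        proxy_intercept_errors on;
        error_page 404 /404.html;
    }
}
```

With this shape, a request for a missing item yields an actual 404 status plus a friendly error body, which is what the thread is after.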
From nginx-forum at forum.nginx.org Mon Aug 8 16:59:05 2016 From: nginx-forum at forum.nginx.org (msonntag) Date: Mon, 08 Aug 2016 12:59:05 -0400 Subject: Return proper status codes (404, 302) from client-side Single Page Application In-Reply-To: <1470669462.2036047.689242793.2A265A27@webmail.messagingengine.com> References: <1470669462.2036047.689242793.2A265A27@webmail.messagingengine.com> Message-ID: Hi Edho, > If going so far to hit api server anyway, why not proxy all requests > (except assets) and serve the index page if 200 and request type is > html? nginx then can intercept errors and return correct html error >page. Yes, that would be possible. But ? if I understand you correctly ? my backend would then need to serve the index page itself, and therefore be highly coupled to the frontend. Currently, my backend does not have any knowledge about the frontend whatsoever. Thanks for your input. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268827,268847#msg-268847 From me at myconan.net Mon Aug 8 17:02:39 2016 From: me at myconan.net (Edho Arief) Date: Tue, 09 Aug 2016 02:02:39 +0900 Subject: Return proper status codes (404, 302) from client-side Single Page Application In-Reply-To: References: <1470669462.2036047.689242793.2A265A27@webmail.messagingengine.com> Message-ID: <1470675759.2061717.689360545.079740CB@webmail.messagingengine.com> Hi, On Tue, Aug 9, 2016, at 01:59, msonntag wrote: > Hi Edho, > > > If going so far to hit api server anyway, why not proxy all requests > > (except assets) and serve the index page if 200 and request type is > > html? nginx then can intercept errors and return correct html error > >page. > > Yes, that would be possible. But ? if I understand you correctly ? my > backend would then need to serve the index page itself, and therefore be > highly coupled to the frontend. > > Currently, my backend does not have any knowledge about the frontend > whatsoever. 
> Well, the backend can for example download and cache the frontend's index page...

From paul_xie at riversecurity.com Tue Aug 9 05:20:46 2016
From: paul_xie at riversecurity.com (Peng Xie)
Date: Tue, 9 Aug 2016 13:20:46 +0800
Subject: can't setup nginx as transparent proxy server
Message-ID: <250171D8-F4B0-435D-899B-93EB775E1FDB@riversecurity.com>

Hi,

I am relatively new to nginx. I would like to set up nginx as a transparent reverse proxy.

Here is the topology of my network.
,----
| +------------------------+
| |                        |
| |   192.168.56.109:80    |  <-- upstream, the real http server on port 80
| |                        |
| +------------------------+
|             ^
|             |
|             |
| +------------------------+
| |                        |
| |   192.168.56.108:800   |  <-- proxy_server, which runs nginx as a reverse proxy server on port 800
| |                        |
| +------------------------+
|             ^
|             |
|             |
| +------------------------+
| |                        |
| |      192.168.56.1      |  <-- client
| |                        |
| +------------------------+
`----

Here is my nginx.conf.
,----
| server {
|     listen 800;
|     server_name localhost;
|
|     location / {
|         proxy_pass http://192.168.56.109:80;
|         proxy_bind $remote_addr transparent;
|     }
| }
`----

If I do not use proxy_bind, the client can access the upstream through 192.168.56.108:800. Of course, the proxy is not transparent in this situation.

To make the proxy_server transparent, I read these documents:

doc1) [http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_bind]

doc2) [https://www.kernel.org/doc/Documentation/networking/tproxy.txt]

Add proxy_bind into nginx.conf according to doc1.
Reload nginx:
,----
| nginx -s reload
`----

According to doc2, I wrote a shell script as follows:
,----
| #!/bin/bash
| set -x
| sudo iptables -F
| sudo iptables -X
|
| sudo iptables -t mangle -N DIVERT;
| sudo iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT;
| sudo iptables -t mangle -A DIVERT -j MARK --set-mark 1;
| sudo iptables -t mangle -A DIVERT -j ACCEPT;
| sudo ip rule add fwmark 1 lookup 100;
| sudo ip route add local 0.0.0.0/0 dev lo table 100;
| sudo iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 800;
`----

Now, I access the proxy on the client:
,----
| $ curl -v http://192.168.56.108:800
| * Rebuilt URL to: http://192.168.56.108:800/
| *   Trying 192.168.56.108...
| * Connected to 192.168.56.108 (192.168.56.108) port 800 (#0)
| > GET / HTTP/1.1
| > Host: 192.168.56.108:800
| > User-Agent: curl/7.43.0
| > Accept: */*
| >
`----

And then I try port 80:
,----
| $ curl -v http://192.168.56.108:80
| * Rebuilt URL to: http://192.168.56.108:80/
| *   Trying 192.168.56.108...
`----

The client can't access the upstream now!

Using proxy_bind to set up a transparent proxy server may be a new feature in nginx. I've searched for a long time. Does anybody have a suggestion?

Thanks
Peng Xie

From arut at nginx.com Tue Aug 9 06:10:04 2016
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 9 Aug 2016 09:10:04 +0300
Subject: can't setup nginx as transparent proxy server
In-Reply-To: <250171D8-F4B0-435D-899B-93EB775E1FDB@riversecurity.com>
References: <250171D8-F4B0-435D-899B-93EB775E1FDB@riversecurity.com>
Message-ID: <20160809061004.GH2194@Romans-MacBook-Air.local>

Hi,

On Tue, Aug 09, 2016 at 01:20:46PM +0800, Peng Xie wrote:
> Hi,
>
> I am relatively new to nginx. I would like to setup nginx as a
> transparent reverse proxy.
>
> Here is the topology of my network.
> ,---- > | +------------------------+ > | | | > | | 192.168.56.109:80 | <-- upstream which is the real http server on port 80 > | | | > | +------------------------+ > | ^ > | | > | | > | +------------------------+ > | | | > | | 192.168.56.108:800 | <-- proxy_server which run nginx as a reverse proxy server on port 800 > | | | > | +------------------------+ > | ^ > | | > | | > | +------------------------+ > | | | > | | 192.168.56.1 | <-- client > | | | > | +------------------------+ > `---- > > Here is my nginx.conf. > ,---- > | server { > | listen 800; > | server_name localhost; > | > | location / { > | proxy_pass http://192.168.56.109:80; > | proxy_bind $remote_addr transparent; > | } > `---- > > If not use proxy_bind, Cient can access upstream through > 192.168.56.108:800. Of course, the proxy is not transparent in this > situation. > > To make the proxy_server transparent, I read these documents: doc1) > [http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_bind] > > doc2) [https://www.kernel.org/doc/Documentation/networking/tproxy.tx] > > Add proxy_bind into nginx.conf according to doc1. Reload nginx: > ,---- > | nginx -s reload > `---- > > According to doc2, I write a shell-script as follow: > ,---- > | #!/bin/bash > | set -x > | sudo iptables -F > | sudo iptables -X > | > | sudo iptables -t mangle -N DIVERT; > | sudo iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT; > | sudo iptables -t mangle -A DIVERT -j MARK --set-mark 1; > | sudo iptables -t mangle -A DIVERT -j ACCEPT; > | sudo ip rule add fwmark 1 lookup 100; > | sudo ip route add local 0.0.0.0/0 dev lo table 100; > | sudo iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 800; > `---- > > Now, I access proxy on client: > ,---- > | ? ~ curl -v http://192.168.56.108:800 > | * Rebuilt URL to: http://192.168.56.108:800/ > | * Trying 192.168.56.108... 
> | * Connected to 192.168.56.108 (192.168.56.108) port 800 (#0)
> | > GET / HTTP/1.1
> | > Host: 192.168.56.108:800
> | > User-Agent: curl/7.43.0
> | > Accept: */*
> | >
> `----
>
> And then I try port 80:
> ,----
> | $ curl -v http://192.168.56.108:80
> | * Rebuilt URL to: http://192.168.56.108:80/
> | *   Trying 192.168.56.108...
> `----
>
> Client can't access the upstream now!
>
> Use proxy_bind to set a transparent proxy server may be a new feature on
> nginx. I've searched for a long time. Does anybody have a suggestion?
>
> Thanks Peng Xie

Did you try to tcpdump the packets at the proxy and the upstream?

-- 
Roman Arutyunyan

From nginx-forum at forum.nginx.org Tue Aug 9 08:14:39 2016
From: nginx-forum at forum.nginx.org (sphax3d)
Date: Tue, 09 Aug 2016 04:14:39 -0400
Subject: Gzip issue with Safari
In-Reply-To: <20160518170425.GI3477@daoine.org>
References: <20160518170425.GI3477@daoine.org>
Message-ID: 

Hi,

I discovered the problem reported by mcofko yesterday when I saw the nginx configuration generated by the W3 Total Cache extension for WordPress:

if ($http_accept_encoding ~ gzip) {
    set $w3tc_enc .gzip;
}
if (-f $request_filename$w3tc_enc) {
    rewrite (.*) $1$w3tc_enc break;
}

It doesn't use the gzip_static feature of nginx "because gz is not compatible with some versions of safari."
https://wordpress.org/support/topic/plugin-w3-total-cache-change-gzip-extension-to-gz

This bug of Safari seems to have been well known since 2009:

- https://www.webveteran.com/blog/web-coding/coldfusion/fix-for-safari-and-gzip-compressed-javascripts/
- http://stackoverflow.com/questions/32169082/safari-cannot-decode-raw-data-when-using-gzip
- https://blog.jcoglan.com/2007/05/02/compress-javascript-and-css-without-touching-your-application-code/

But somebody noticed the bug is resolved now: "The test page blog.kosny.com/testpages/safari-gz indicates that the warning "Be careful naming and test in Safari. Because safari won't handle css.gz or js.gz" is out of date.
In Safari 7 on Mavericks, and in Safari on iOS 7, both css.gz and js.gz work. I don't know when this change occurred, I'm only testing with the devices I have."
http://stackoverflow.com/questions/5442011/serving-gzipped-css-and-javascript-from-amazon-cloudfront-via-s3

So now I don't know if I must handle this bug or if I can consider it out of date.

Cheers

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266915,268856#msg-268856

From nginx-forum at forum.nginx.org Tue Aug 9 13:27:06 2016
From: nginx-forum at forum.nginx.org (khav)
Date: Tue, 09 Aug 2016 09:27:06 -0400
Subject: Third party module appears to be not loaded
Message-ID: <511d22b7e3e2dd21e2806cf1489641b5.NginxMailingListEnglish@forum.nginx.org>

Hi,

I compiled nginx with the nginx upload progress module, but I still can't use any of its configuration directives. Nginx throws errors like:

[emerg] unknown directive "track_uploads"

nginx version: nginx/1.11.3
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-openssl=/root/openssl-1.0.2h --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_random_index_module --with-http_gzip_static_module --with-http_stub_status_module --add-module=../nginx-upload-module --add-module=../nginx-upload-progress-module

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,268865,268865#msg-268865 From nginx-forum at forum.nginx.org Tue Aug 9 13:28:47 2016 From: nginx-forum at forum.nginx.org (khav) Date: Tue, 09 Aug 2016 09:28:47 -0400 Subject: Third party module appears to be not loaded In-Reply-To: <511d22b7e3e2dd21e2806cf1489641b5.NginxMailingListEnglish@forum.nginx.org> References: <511d22b7e3e2dd21e2806cf1489641b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0613e4aef899c22b85ab0e122922622c.NginxMailingListEnglish@forum.nginx.org> I even took latest version just to be sure git clone -b master https://github.com/masterzen/nginx-upload-progress-module/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268865,268866#msg-268866 From francis at daoine.org Tue Aug 9 14:13:08 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 9 Aug 2016 15:13:08 +0100 Subject: Third party module appears to be not loaded In-Reply-To: <511d22b7e3e2dd21e2806cf1489641b5.NginxMailingListEnglish@forum.nginx.org> References: <511d22b7e3e2dd21e2806cf1489641b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160809141308.GJ12280@daoine.org> On Tue, Aug 09, 2016 at 09:27:06AM -0400, khav wrote: Hi there, > I compile nginx with nginx upload progress module but i still can't use any > of the configuration variables.Nginx throws errors like > > [emerg] unknown directive "track_uploads" That message usually means either that the nginx that is being used does not include the module that is expected; or that one of the included modules is not compatible with the version of nginx that is being used. > --add-module=../nginx-upload-module > --add-module=../nginx-upload-progress-module My guess is the latter. What version of nginx are your versions of those two modules documented to work with? (If they are not documented, then possibly you are the first person to try this particular combination.) 
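A quick sanity check for this kind of problem is to grep the configure arguments that `nginx -V` reports for the module in question. A minimal sketch; the `CONFIG` variable here stands in for the real `nginx -V 2>&1` output (shortened to the relevant part of the post above) so the snippet is self-contained:

```shell
# In practice: CONFIG="$(nginx -V 2>&1)"
CONFIG='configure arguments: --add-module=../nginx-upload-module --add-module=../nginx-upload-progress-module'

for mod in nginx-upload-module nginx-upload-progress-module; do
    # The leading "--" stops grep from treating the pattern as an option.
    if printf '%s\n' "$CONFIG" | grep -q -- "--add-module=../$mod"; then
        echo "$mod: present in configure arguments"
    else
        echo "$mod: NOT built in"
    fi
done
```

If a module shows up in the configure arguments but its directives are still unknown, the running binary may not be the one that was just built, or the module may not be compatible with that nginx version.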
f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Tue Aug 9 14:29:31 2016
From: francis at daoine.org (Francis Daly)
Date: Tue, 9 Aug 2016 15:29:31 +0100
Subject: can't setup nginx as transparent proxy server
In-Reply-To: <250171D8-F4B0-435D-899B-93EB775E1FDB@riversecurity.com>
References: <250171D8-F4B0-435D-899B-93EB775E1FDB@riversecurity.com>
Message-ID: <20160809142931.GK12280@daoine.org>

On Tue, Aug 09, 2016 at 01:20:46PM +0800, Peng Xie wrote:

Hi there,

> I am relatively new to nginx. I would like to setup nginx as a
> transparent reverse proxy.

What, specifically, do you mean by "transparent", here?

I think that the nginx proxy_bind config is intended so that the upstream server is fooled into thinking that it is talking to the original client, instead of to nginx. (And to achieve that, you need the outside-of-nginx networking to be set up to get the packets to the right places.)

It is not clear to me that your idea of "transparent" is the same as that.

> doc2) [https://www.kernel.org/doc/Documentation/networking/tproxy.txt]

> According to doc2, I write a shell-script as follow:
> ,----
> | #!/bin/bash
> | set -x
> | sudo iptables -F
> | sudo iptables -X
> |
> | sudo iptables -t mangle -N DIVERT;
> | sudo iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT;
> | sudo iptables -t mangle -A DIVERT -j MARK --set-mark 1;
> | sudo iptables -t mangle -A DIVERT -j ACCEPT;
> | sudo ip rule add fwmark 1 lookup 100;
> | sudo ip route add local 0.0.0.0/0 dev lo table 100;
> | sudo iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 800;
> `----

This does not look to me like it will do what you want.

From the nginx documentation:

"""
In order for this parameter to work, it is necessary to run nginx worker processes with the superuser privileges and configure kernel routing table to intercept network traffic from the proxied server.
""" That does not appear to be intercepting the network traffic from the proxied server. (And your nginx.conf snippet did not appear to show things running with the superuser privileges.) > Use proxy_bind to set a transparent proxy server may be a new feature on > nginx. I've searched for a long time. Does anybody have a suggestion? There is "client", "nginx", and "upstream". They all have their own IP addresses (and ports). Can you describe your intended connection, from which machine to which machine using which address and port? That might make it clear whether what you want is doable. f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue Aug 9 14:46:12 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 09 Aug 2016 17:46:12 +0300 Subject: Gzip issue with Safari In-Reply-To: References: <20160518170425.GI3477@daoine.org> Message-ID: <6370274.yHOSiaA1Hh@vbart-workstation> On Tuesday 09 August 2016 04:14:39 sphax3d wrote: > Hi, > > I discover the problem reported by mcofko yesterday when I see the nginx > configuration generated by the W3 Total Cache extension for WordPress : > > if ($http_accept_encoding ~ gzip) { > set $w3tc_enc .gzip; > } > if (-f $request_filename$w3tc_enc) { > rewrite (.*) $1$w3tc_enc break; > } > > It doesn?t use the gzip_static feature of nginx ?because gz is not > compatible with some versions of safari.? > https://wordpress.org/support/topic/plugin-w3-total-cache-change-gzip-extension-to-gz [..] That doesn't make any sense. The gzip static module doesn't need ".gz" in the requests, it just checks files on the disk and serve them as the response like they were compressed on the fly. wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Tue Aug 9 14:47:58 2016 From: nginx-forum at forum.nginx.org (khav) Date: Tue, 09 Aug 2016 10:47:58 -0400 Subject: Third party module appears to be not loaded In-Reply-To: <20160809141308.GJ12280@daoine.org> References: <20160809141308.GJ12280@daoine.org> Message-ID: The nginx-upload-module works well so i guess the issue is with nginx-upload-progress-module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268865,268870#msg-268870 From larry.martell at gmail.com Tue Aug 9 21:09:59 2016 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 9 Aug 2016 17:09:59 -0400 Subject: debugging 504 Gateway Time-out Message-ID: I just set up a django site with nginx and uWSGI. Some pages I go to work fine, but other fail with a 504 Gateway Time-out. I used to serve this site with apache and wsgi and these same pages worked fine. This is what I see in the nginx error log: 2016/08/09 16:40:19 [error] 17345#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.250.147.59, server: localhost, request: "GET /report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report HTTP/1.1", upstream: "uwsgi://unix:///usr/local/motor/motor.sock", host: "xx.xx.xx.xx", referrer: "http://xx.xx.xx.xx/report/CDSEM/MeasurementData/" When this happens I see this in the uwsgi error log: Tue Aug 9 16:42:57 2016 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during GET /report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report (10.250.147.59) IOError: write error [pid: 9230|app: 0|req: 36/155] 
10.250.147.59 () {46 vars in 1333 bytes} [Tue Aug 9 16:32:16 2016] GET /report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report
=> generated 0 bytes in 640738 msecs (HTTP/1.1 200) 4 headers in 0 bytes (1 switches on core 0)

Note the weird timestamps. The first uwsgi message is more than 2 minutes after the nginx message. And the second uwsgi message has a timestamp before the previous uwsgi message. What's up with that??

Here is my nginx config:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 65;
    sendfile on;

    # set client body size to 20M
    client_max_body_size 20M;

    include /etc/nginx/sites-enabled/*;
}

and here is my local site file:

# motor_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:///usr/local/motor/motor.sock; # for a file socket
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name localhost;
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    proxy_read_timeout 600;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    send_timeout 600;

    # Django media
    location /media {
        alias /usr/local/motor/motor/media;
    }

    location /static {
        alias /usr/local/motor/motor/static;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /usr/local/motor/motor/uwsgi_params;
    }
}

How can I debug or fix this?

Thanks!
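One configuration detail worth noting about the site file above: the proxy_* timeout directives apply to proxy_pass, while this server block hands requests off with uwsgi_pass, which is governed by the uwsgi module's own timeouts. A sketch of the equivalent settings (the 600-second values mirror the proxy_* lines in the post; this is illustrative, not tested against this site):

```nginx
location / {
    include /usr/local/motor/motor/uwsgi_params;
    uwsgi_pass django;

    # uwsgi_pass ignores proxy_read_timeout and friends;
    # the uwsgi module has its own timeout directives.
    uwsgi_connect_timeout 600;
    uwsgi_send_timeout 600;
    uwsgi_read_timeout 600;
}
```

With the defaults in place (uwsgi_read_timeout is 60s), a backend that takes over 10 minutes to respond, as the log above shows, will always trip a 504 at nginx regardless of the proxy_* settings.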
From r1ch+nginx at teamliquid.net Wed Aug 10 02:35:58 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 10 Aug 2016 04:35:58 +0200 Subject: debugging 504 Gateway Time-out In-Reply-To: References: Message-ID: > generated 0 bytes in 640738 msecs I would look into what is causing your backend to take over 10 minutes to respond to that request. On Tue, Aug 9, 2016 at 11:09 PM, Larry Martell wrote: > I just set up a django site with nginx and uWSGI. Some pages I go to > work fine, but other fail with a 504 Gateway Time-out. I used to > serve this site with apache and wsgi and these same pages worked fine. > > This is what I see in the nginx error log: > > 2016/08/09 16:40:19 [error] 17345#0: *1 upstream timed out (110: > Connection timed out) while reading response header from upstream, > client: 10.250.147.59, server: localhost, request: "GET > /report/CDSEM/MeasurementData/?group=&target_name=&recipe=& > ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_ > 1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot& > field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report > HTTP/1.1", upstream: "uwsgi://unix:///usr/local/motor/motor.sock", > host: "xx.xx.xx.xx", referrer: > "http://xx.xx.xx.xx/report/CDSEM/MeasurementData/" > > When this happens I see this in the uwsgi error log: > > Tue Aug 9 16:42:57 2016 - > uwsgi_response_writev_headers_and_body_do(): Broken pipe > [core/writer.c line 296] during GET > /report/CDSEM/MeasurementData/?group=&target_name=&recipe=& > ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_ > 1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot& > field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report > (10.250.147.59) > IOError: write error > [pid: 9230|app: 0|req: 36/155] 10.250.147.59 () {46 vars in 1333 > bytes} [Tue Aug 9 16:32:16 2016] GET > /report/CDSEM/MeasurementData/?group=&target_name=&recipe=& > ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_ > 
1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot& > field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report > => generated 0 bytes in 640738 msecs (HTTP/1.1 200) 4 headers in 0 > bytes (1 switches on core 0) > > Note the weird timestamps. The first uwsgi message is more then 2 > minutes after the nginx message. And the second uwsgi message has a > timestamp before the previous uwsgi message. What's up with that?? > > Here is my nginx config: > > worker_processes 1; > > events { > worker_connections 1024; > } > > http { > include mime.types; > default_type application/octet-stream; > keepalive_timeout 65; > sendfile on; > > # set client body size to 20M > client_max_body_size 20M; > > include /etc/nginx/sites-enabled/*; > } > > > and here is my local site file: > > # motor_nginx.conf > > # the upstream component nginx needs to connect to > upstream django { > server unix:///usr/local/motor/motor.sock; # for a file socket > } > > # configuration of the server > server { > # the port your site will be served on > listen 80; > # the domain name it will serve for > server_name localhost; > charset utf-8; > > # max upload size > client_max_body_size 75M; # adjust to taste > > proxy_read_timeout 600; > proxy_connect_timeout 600; > proxy_send_timeout 600; > send_timeout 600; > > # Django media > location /media { > alias /usr/local/motor/motor/media; > } > > location /static { > alias /usr/local/motor/motor/static; > } > > # Finally, send all non-media requests to the Django server. > location / { > uwsgi_pass django; > include /usr/local/motor/motor/uwsgi_params; > } > } > > How can I debug or fix this? > > Thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at xtremenitro.org Wed Aug 10 03:07:51 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Wed, 10 Aug 2016 10:07:51 +0700 Subject: map directive using $default as origin Message-ID: <5da51978-8d89-1cbb-4513-d4912f5bf699@xtremenitro.org>

Hello!

I am using the small_light module (https://github.com/cubicdaiya/ngx_small_light). Since the module can't detect which browsers can process webp transformation, I created a simple map in nginx to detect Chrome and Opera only and fall back the rest to jpeg/jpg.

But if the origin is not jpg (e.g. png or gif), the map transforms it into jpg.

The snippet looks like:

map $http_user_agent $img_mode {
    $default jpg;
    ~*chrome webp;
    ~*opera webp;
}

... snip ...
location ~ \.(jpe?g|png|gif|webp)$ {
    root /usr/share/nginx/html;
    small_light on;
    small_light_getparam_mode on;
    rewrite (?<capt>.*) $capt?$args&of=$img_mode break;
}
... snip ...

Is it possible to have the $default value come from the origin file(s)? So that nginx doesn't process png/gif into jpg when using Firefox, and will process anything into webp when using Opera/Google Chrome.

Any help is appreciated. Thank you :)

From steve at greengecko.co.nz Wed Aug 10 03:47:39 2016 From: steve at greengecko.co.nz (steve) Date: Wed, 10 Aug 2016 15:47:39 +1200 Subject: map directive using $default as origin In-Reply-To: <5da51978-8d89-1cbb-4513-d4912f5bf699@xtremenitro.org> References: <5da51978-8d89-1cbb-4513-d4912f5bf699@xtremenitro.org> Message-ID: <57AAA3DB.5090204@greengecko.co.nz>

Hi!

On 08/10/2016 03:07 PM, Dewangga Bachrul Alam wrote:
> Hello!
>
> I am using the small_light module
> (https://github.com/cubicdaiya/ngx_small_light). Since the module can't
> detect which browsers can process webp transformation, I created a
> simple map in nginx to detect Chrome and Opera only and fall back
> the rest to jpeg/jpg.
>
> But if the origin is not jpg (e.g. png or gif), the map transforms
> it into jpg.
> > The snippet looks like : > > map $http_user_agent $img_mode { > $default jpg; > ~*chrome webp; > ~*opera webp; > } > > ... snip ... > location ~ \.(jpe?g|png|gif|webp)$ { > root /usr/share/nginx/html; > small_light on; > small_light_getparam_mode on; > rewrite (?.*) $capt?$args&of=$img_mode break; > } > ... snip ... > > Is it possible to give the $default value came from the origin file(s)? > So the nginx didn't process png/gif into jpg if using firefox. And will > process anything into webp if using Opera/Google Chrome. > > Many helps are appreciated. > Thank you :) > try using default instead of $default -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From dewanggaba at xtremenitro.org Wed Aug 10 05:01:23 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Wed, 10 Aug 2016 12:01:23 +0700 Subject: map directive using $default as origin In-Reply-To: <57AAA3DB.5090204@greengecko.co.nz> References: <5da51978-8d89-1cbb-4513-d4912f5bf699@xtremenitro.org> <57AAA3DB.5090204@greengecko.co.nz> Message-ID: <00b69482-ee9a-6108-d4d1-5c1b0f76f0f5@xtremenitro.org> Hello steve! On 08/10/2016 10:47 AM, steve wrote: > Hi! > > On 08/10/2016 03:07 PM, Dewangga Bachrul Alam wrote: >> Hello! >> >> I am using module small_light >> (https://github.com/cubicdaiya/ngx_small_light), since the module can't >> detect which browser can process webp transformation, I creating a >> simple directive on nginx to detect chrome and opera only and fallback >> the rest to jpeg/jpg. >> >> But, if the origin is not jpg (eg: png or gif), the directive transform >> it into jpg. >> >> The snippet looks like : >> >> map $http_user_agent $img_mode { >> $default jpg; >> ~*chrome webp; >> ~*opera webp; >> } >> >> ... snip ... >> location ~ \.(jpe?g|png|gif|webp)$ { >> root /usr/share/nginx/html; >> small_light on; >> small_light_getparam_mode on; >> rewrite (?.*) $capt?$args&of=$img_mode break; >> } >> ... 
snip ...
>>
>> Is it possible to have the $default value come from the origin file(s)?
>> So that nginx doesn't process png/gif into jpg when using Firefox, and
>> will process anything into webp when using Opera/Google Chrome.
>>
>> Any help is appreciated.
>> Thank you :)
>>
> try using default instead of $default
>

Changed `$default` to `default`, and it works. Thanks a bunch.

$ curl -I http://localhost/kartun.png
HTTP/1.1 200 OK
Server: Unyil
Date: Wed, 10 Aug 2016 04:37:35 GMT
Content-Type: image/png
Content-Length: 494553
Last-Modified: Tue, 09 Aug 2016 11:43:38 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: "57a9c1ea-76ffe"

From shahzaib.cb at gmail.com Wed Aug 10 07:07:58 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 10 Aug 2016 12:07:58 +0500 Subject: NGINX http-secure-link iphone issue !! Message-ID:

Hi,

We've deployed the NGINX ngx_http_secure_link_module on our website based on php programming and it's working well. The player is providing the correct hash+expiry to serve links.

However, we're facing a problem authenticating the md5 from the iPhone app, which generates the md5 in Objective-C; that hash looks somewhat different and fails authentication against the NGINX md5. Is there any way of fixing it?

Short conclusion:

Web APP == good
Mobile APP == bad

If anyone can guide us, it would be really helpful.

Regards Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Aug 10 07:24:04 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Aug 2016 08:24:04 +0100 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: References: Message-ID: <20160810072404.GL12280@daoine.org>

On Wed, Aug 10, 2016 at 12:07:58PM +0500, shahzaib mushtaq wrote:

Hi there,

> We've deployed the NGINX ngx_http_secure_link_module on our website based
> on php programming and it's working well.

That much makes sense.
> Player is providing correct hash+expiry to serve links. I'm not sure about that bit -- what is the player? (It may not matter; I presume it is "something on your back-end that is doing things right".) > Though we're facing problem authenticating md5 from iphone mobile which is > generating md5 based on C objective language & looks like this hash is > somewhat different & have authenticating issue against NGINX md5. But that part confuses me. Why does the client have anything to do with md5 and generating things? The usual model is that something on the server creates the "secure" url, and gives it to the client. The client then requests that url; the server checks that it is valid, and the server issues the content. Does your system use a different model? > Is there any way of fixing it ? Examine your design. Examine your implementation. See where it does not do what you expect. Change that piece. I don't think that there are enough details provided yet to give a more specific answer. > Short conclusion : > > Web APP == good > Mobile APP == bad What do the APPs do, other than request urls that they have been given? Good luck with it, f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Wed Aug 10 08:01:33 2016 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 10 Aug 2016 13:01:33 +0500 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: <20160810072404.GL12280@daoine.org> References: <20160810072404.GL12280@daoine.org> Message-ID: Hi, > Why does the client have anything to do with md5 and generating things? The usual model is that something on the server creates the "secure" url, and gives it to the client. The client then requests that url; the server checks that it is valid, and the server issues the content. Does your system use a different model? Well, sorry i couldn't explain it in best manners. Well our website has three platforms (Iphone application, Android Application, Web application) . 
For the website, what we're doing is: user clicks on a video -> moves to the watch-video page -> a function creates md5+expiry on this page -> the secure URL is appended into the player -> the video starts to play.

Seems like you're right, our approach is wrong for the iPhone application; we're trying to generate the hash in the mobile application too, which was not right. Now we're taking the approach where the URL is constructed on the server and distributed to all platforms.

Is that how it should be?

Thanks. Shahzaib

On Wed, Aug 10, 2016 at 12:24 PM, Francis Daly wrote: > On Wed, Aug 10, 2016 at 12:07:58PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > We've depolyed NGINX ngx_*http*_*secure*_*link*_module in our website > based > > on php programming & its working well. > > That much makes sense. > > > Player is providing correct hash+expiry to serve links. > > I'm not sure about that bit -- what is the player? (It may not matter; > I presume it is "something on your back-end that is doing things right".) > > > Though we're facing problem authenticating md5 from iphone mobile which > is > > generating md5 based on C objective language & looks like this hash is > > somewhat different & have authenticating issue against NGINX md5. > > But that part confuses me. > > Why does the client have anything to do with md5 and generating things? > > The usual model is that something on the server creates the "secure" > url, and gives it to the client. The client then requests that url; > the server checks that it is valid, and the server issues the content. > > Does your system use a different model? > > > Is there any way of fixing it ? > > Examine your design. Examine your implementation. See where it does not > do what you expect. Change that piece. > > I don't think that there are enough details provided yet to give a more > specific answer. > > > Short conclusion : > > > > Web APP == good > > Mobile APP == bad > > What do the APPs do, other than request urls that they have been given?
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Aug 10 12:32:34 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Aug 2016 13:32:34 +0100 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: References: <20160810072404.GL12280@daoine.org> Message-ID: <20160810123234.GM12280@daoine.org> On Wed, Aug 10, 2016 at 01:01:33PM +0500, shahzaib mushtaq wrote: Hi there, > > Why does the client have anything to do with md5 and generating things? > User clicks on video -> move to watch video page -> a function creates > md5+expiry on this page -> Secure URL appends into the player -> Video > starts to play. I think I'm still a bit unclear on why the "secure" link is used here at all. If the link is created by the client, then it doesn't really count as "secure", does it? Oh, I guess that if "the client" is your own custom code rather than (say) a piece of javascript that is offered to any browser, that might be a good reason for using that design. > Seems like you're right our approach is wrong for iphone application , > we're trying to generate hash in mobile application too which was not > right. Now we're taking approach where URL will construct on server & > distribute to all platforms. > > Is that how it should be ? Oh, it *can* be anything that you want. The design depends on what the requirements are -- do you use the "secure link" just for a time-expiry (instead of just removing the video from the server); or for some other control like "must come from a particular IP address" or "must also include a particular cookie". It could well be that your current design is correct for your requirements, and the problem is in whatever the iphone application is doing. 
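[Editorial note] One practical way to find where the iPhone hash diverges is to compare both implementations against a known-good reference. The secure_link module compares against the MD5 digest encoded in base64url form with the trailing padding removed. A minimal Python sketch, assuming a common configuration of the shape `secure_link_md5 "$secret$uri$arg_expires"` (the secret name and argument layout here are illustrative, not taken from this thread):

```python
import base64
import hashlib


def secure_link_token(secret: str, uri: str, expires: int) -> str:
    """Reference token for: secure_link_md5 "$secret$uri$arg_expires"."""
    data = f"{secret}{uri}{expires}".encode("utf-8")
    digest = hashlib.md5(data).digest()        # 16 raw bytes
    token = base64.urlsafe_b64encode(digest)   # base64 with '-' and '_'
    return token.decode("ascii").rstrip("=")   # nginx expects no '=' padding


# The client would then request e.g.:
#   /video.mp4?md5=<token>&expires=<expires>
```

Running both the PHP side and the Objective-C side against the same fixed input and comparing with this output quickly shows whether the difference is in string order, character encoding, or the base64 variant.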
The only nginx-related piece is to ensure that it correctly reads-and-interprets the secure part of the url, and for that you need to make sure that whatever creates the url uses the expected method to create it.

Cheers, f -- Francis Daly francis at daoine.org

From r at roze.lv Wed Aug 10 12:33:55 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 10 Aug 2016 15:33:55 +0300 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: References: <20160810072404.GL12280@daoine.org> Message-ID: <07A20CE0DB894C2287C686975F3941C1@MasterPC>

> Seems like you're right our approach is wrong for iphone application ,
> we're trying to generate hash in mobile application too which was not
> right. Now we're taking approach where URL will construct on server &
> distribute to all platforms.
>
> Is that how it should be ?

By generating the hash on the client application you deliver the app with the md5 salt, which might be discovered by decompiling your app (depending on the case it may or may not be a problem).

But for your initial question - without actually showing the hash implementation in the iPhone app (the Objective-C code which generates the hash, whether you use the Common Crypto lib or something else), people on the mailing list can only guess what is wrong. Check that you construct the string in the right order, e.g. that the pre-hashed url is the same as on the php side.

rr

From nginx-forum at forum.nginx.org Wed Aug 10 13:59:41 2016 From: nginx-forum at forum.nginx.org (Sheik) Date: Wed, 10 Aug 2016 09:59:41 -0400 Subject: Nginx and CRL Message-ID: <7a1e5cd06979e3340bc1542aadeb0a98.NginxMailingListEnglish@forum.nginx.org>

Hi, I am new to nginx and I am trying to get CRL checking to work. I was wondering if there is a way to automatically update the CRL list (as specified by ssl_crl). Can Nginx automatically update this file periodically at configurable intervals? If not, how is this file typically updated? Via a bash script? Does the CRL list file have a file size limit? Thanks!
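[Editorial note] On the CRL question above: nginx reads the file named by ssl_crl once, when the configuration is loaded, so it does not refresh it on its own. The usual approach is an external job (e.g. cron) that downloads the CA's CRL, converts DER to PEM with `openssl crl -inform DER -outform PEM`, and then runs `nginx -s reload`. A hedged config sketch showing where ssl_crl fits; all paths are hypothetical:

```
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/ssl/server.crt;
    ssl_certificate_key     /etc/nginx/ssl/server.key;

    # require and verify client certificates
    ssl_client_certificate  /etc/nginx/ssl/ca.pem;
    ssl_verify_client       on;

    # PEM-format CRL; re-read only when nginx (re)loads its config,
    # so the external refresh job must also reload nginx
    ssl_crl                 /etc/nginx/ssl/ca.crl.pem;
}
```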
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268898,268898#msg-268898 From zxcvbn4038 at gmail.com Wed Aug 10 16:30:34 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Wed, 10 Aug 2016 12:30:34 -0400 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: References: Message-ID: md5 shouldn't give different results regardless of implementation - my guess is that your different platforms are using different character encodings (iso8859 vs utf8 for instance) and that is the source of your differences. To verify your md5 implementation there are test vectors here https://www.cs.bris.ac.uk/Research/CryptographySecurity/MPC/md5-test.txt, here http://www.febooti.com/products/filetweak/members/hash-and-crc/test-vectors/, and at the end if the article here https://en.wikipedia.org/wiki/MD5 that you can use to verify your implementations produce same output for same input - just make sure the input is really the same! I hope this helps a bit. On Wed, Aug 10, 2016 at 3:07 AM, shahzaib mushtaq wrote: > Hi, > > We've depolyed NGINX ngx_*http*_*secure*_*link*_module in our website > based on php programming & its working well. Player is providing correct > hash+expiry to serve links. > > Though we're facing problem authenticating md5 from iphone mobile which is > generating md5 based on C objective language & looks like this hash is > somewhat different & have authenticating issue against NGINX md5. Is there > any way of fixing it ? > > Short conclusion : > > Web APP == good > Mobile APP == bad > > Please if anyone guide us, would be really helpful. > > Regards > Shahzaib > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists-nginx at swsystem.co.uk Thu Aug 11 01:04:13 2016 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 11 Aug 2016 02:04:13 +0100 Subject: NGINX http-secure-link iphone issue !! In-Reply-To: References: Message-ID: <0656a16d-8444-a8f5-e75d-36b8f1265467@swsystem.co.uk> My initial thoughts here are that you're potentially putting private information in the public hands. iirc to use http_secure_link you need some "private" information to generate the md5sum. This data should not be part of a mobile application. Personally I'd look at a way to get the full url from something only you have access to, even if it's the basic of asp/php pages to prevent you putting the secure part of the md5 generation into public hands where anything can happen. What would happen if you decided to change this private date and people/customers didn't want to update their applications or didn't understand the impact of not doing the update right now? On 10/08/2016 08:07, shahzaib mushtaq wrote: > Hi, > > We've depolyed NGINX ngx_*http*_*secure*_*link*_module in our website > based on php programming & its working well. Player is providing correct > hash+expiry to serve links. > > Though we're facing problem authenticating md5 from iphone mobile which > is generating md5 based on C objective language & looks like this hash > is somewhat different & have authenticating issue against NGINX md5. Is > there any way of fixing it ? > > Short conclusion : > > Web APP == good > Mobile APP == bad > > Please if anyone guide us, would be really helpful. 
> Regards > Shahzaib > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx >

From shilei at qiyi.com Thu Aug 11 07:04:49 2016 From: shilei at qiyi.com (石磊) Date: Thu, 11 Aug 2016 07:04:49 +0000 Subject: "ngx.var.remote_addr" has different values when retrieved twice in lua Message-ID: <4e49ee27cc9d4e9cbbdbf82a2bab132e@EXCH07.iqiyi.pps>

Hi,

I am using nginx 1.4, and I fetch "ngx.var.remote_addr" more than once in a lua script, but the strange thing is that sometimes I get a different "$remote_addr" on different lines of the same lua script file.

Some more information:

1. I hook the lua script in at the rewrite_by_lua_file stage.
2. This happens when the client has more than one ip address with different operators.
3. The "remote_addr" values I got are the ip addresses the client has from the different operators.

I want to know how this happens; is it by design that "$remote_addr" could change during a http request?

Thanks!

石磊 (Shi Lei)
Postcode: 100080
Tel: 86 138 1180 3496, 86 10 6267 7000
shilei at qiyi.com
www.iQIYI.com www.ppstream.com

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 25521 bytes Desc: image001.jpg URL:

From detailyang at gmail.com Thu Aug 11 07:13:53 2016 From: detailyang at gmail.com (杨秉武) Date: Thu, 11 Aug 2016 15:13:53 +0800 Subject: "ngx.var.remote_addr" has different values when retrieved twice in lua In-Reply-To: <4e49ee27cc9d4e9cbbdbf82a2bab132e@EXCH07.iqiyi.pps> References: <4e49ee27cc9d4e9cbbdbf82a2bab132e@EXCH07.iqiyi.pps> Message-ID:

I guess the question is about lua variable scope rather than nginx. You should give the simplest lua script that reproduces the question in the mail.
About lua variable scope, you can take a look at this written by agentzh :)

2016-08-11 15:04 GMT+08:00 石磊 :
> Hi,
>
> I am using nginx 1.4, and I fetch "ngx.var.remote_addr" more than once
> in a lua script, but the strange thing is that sometimes I get a
> different "$remote_addr" on different lines of the same lua script file.
>
> Some more information:
>
> 1. I hook the lua script in at the rewrite_by_lua_file stage.
> 2. This happens when the client has more than one ip address with
> different operators.
> 3. The "remote_addr" values I got are the ip addresses the client has
> from the different operators.
>
> I want to know how this happens; is it by design that "$remote_addr"
> could change during a http request?
>
> Thanks!
>
> 石磊 (Shi Lei)
> shilei at qiyi.com
> www.iQIYI.com www.ppstream.com
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 25521 bytes Desc: not available URL:

From lists at ssl-mail.com Thu Aug 11 13:03:58 2016 From: lists at ssl-mail.com (lists at ssl-mail.com) Date: Thu, 11 Aug 2016 06:03:58 -0700 Subject: ssl_trusted_certificate usage with parallel ECDSA / RSA certificates ? Message-ID: <1470920638.3250914.692423945.32ACD07E@webmail.messagingengine.com>

I've created 2 LetsEncrypt SSL certs -- an EC and an RSA.
Following Support for parallel ECDSA / RSA certificates https://trac.nginx.org/nginx/ticket/814 I config ssl_certificate "/etc/letsencrypt/live/example.com/fullchain.ec.pem"; ssl_certificate_key "/etc/ssl/keys/privkey_ec.pem"; ssl_certificate "/etc/letsencrypt/live/example.com/fullchain.rsa.pem"; ssl_certificate_key "/etc/ssl/keys/privkey_rsa.pem"; Although the trusted cert's not mentioned in ticket/814, the 'chain.pem' is what's used in nginx ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.ec.pem"; ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.rsa.pem"; But this config fails nginx config check nginx: [emerg] "ssl_trusted_certificate" directive is duplicate in /etc/nginx/sites-enabled/example.com.conf:50 nginx: configuration file /etc/nginx/nginx.conf test failed Commenting out one of the 2 ssl_trusted_cert stanzas ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.ec.pem"; # ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.rsa.pem"; and rerunning the check, it passes. In 'parallel' SSL mode, what's the correct usage for 'ssl_trusted_certificate'? Do I use one (ec), the other (rsa), or do you have to concatenate BOTH into one crt? From pluknet at nginx.com Thu Aug 11 13:24:41 2016 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 11 Aug 2016 16:24:41 +0300 Subject: ssl_trusted_certificate usage with parallel ECDSA / RSA certificates ? In-Reply-To: <1470920638.3250914.692423945.32ACD07E@webmail.messagingengine.com> References: <1470920638.3250914.692423945.32ACD07E@webmail.messagingengine.com> Message-ID: <353CAB4E-A91E-43ED-A555-2D4ABC5AD4E8@nginx.com> > On 11 Aug 2016, at 16:03, lists at ssl-mail.com wrote: > > I've created 2 LetsEncrypt SSL certs -- an EC & and RSA. > > Following > > Support for parallel ECDSA / RSA certificates > https://trac.nginx.org/nginx/ticket/814 > ssl_trusted_certificate is orthogonal to multiple certificates support. [..] 
> nginx: [emerg] "ssl_trusted_certificate" directive is duplicate in /etc/nginx/sites-enabled/example.com.conf:50
> nginx: configuration file /etc/nginx/nginx.conf test failed
>
> Commenting out one of the 2 ssl_trusted_cert stanzas
>
> ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.ec.pem";
> # ssl_trusted_certificate "/etc/letsencrypt/live/example.com/chain.rsa.pem";
>
> and rerunning the check, it passes.
>
> In 'parallel' SSL mode, what's the correct usage for 'ssl_trusted_certificate'?

The directive specifies a file with trusted CA certificates. See for details: http://nginx.org/r/ssl_trusted_certificate.

-- Sergey Kandaurov

From nginx-forum at forum.nginx.org Thu Aug 11 21:20:08 2016 From: nginx-forum at forum.nginx.org (siddharth78) Date: Thu, 11 Aug 2016 17:20:08 -0400 Subject: ngx_http_limit_req_module questions In-Reply-To: References: Message-ID: <9669c50a97a0121ef4826b99e1751a09.NginxMailingListEnglish@forum.nginx.org>

Hi - did you ever get a response for this? Did you end up using ngx_http_limit_req_module or something else?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,245334,268921#msg-268921

From nginx-forum at forum.nginx.org Thu Aug 11 21:25:09 2016 From: nginx-forum at forum.nginx.org (parthi) Date: Thu, 11 Aug 2016 17:25:09 -0400 Subject: 413 Request Entity Too Large Message-ID: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org>

I have tried adding client_max_body_size 200M; inside all the contexts (http, server and location), but users are still unable to upload a file of size 30 MB and receive the above 413 request entity error. Is there something I'm missing? I reloaded and restarted, but still no use. Is there a bug or something like that in version 1.4.6? The nginx serves as the front-end reverse proxy and behind it is the tomcat server that hosts Confluence.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268922,268922#msg-268922

From nginx-forum at forum.nginx.org Fri Aug 12 05:48:13 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 12 Aug 2016 01:48:13 -0400 Subject: 413 Request Entity Too Large In-Reply-To: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org> References: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <44894a5e0cc275063afc799ef4b68319.NginxMailingListEnglish@forum.nginx.org>

The 413 is coming from your backend.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268922,268923#msg-268923

From jeff.dyke at gmail.com Fri Aug 12 18:08:55 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 12 Aug 2016 14:08:55 -0400 Subject: proxy_protocol - access server directly Message-ID:

I have configured haproxy 1.6 and nginx 1.10.1 and all is well, but I'd like to be able to access the servers directly on occasion and not through haproxy. Mainly this is done for troubleshooting or viewing a release before it goes out to the public (it's off the LB at the time).

Unfortunately accessing the server directly gives me a 400 and the logs show Broken Header error messages. Is there a way around this without removing proxy_protocol from the vhost configuration? Thanks

minimal config:

server {
    listen 443 ssl http2 default_server proxy_protocol;
    // other stuff
    set_real_ip_from XXX.XXX.XX.XX;
    set_real_ip_from NNN.NNN.NNN.NNN;
    real_ip_header proxy_protocol;
    // more stuff
}

Example error.log entry:

VX?www.example.com#" while reading PROXY protocol, client: YY.YY.YY.YY, server: 0.0.0.0:8000
2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: "illegible characters"

Thanks, Jeff -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arut at nginx.com Fri Aug 12 18:29:16 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 12 Aug 2016 21:29:16 +0300 Subject: proxy_protocol - access server directly In-Reply-To: References: Message-ID: <20160812182916.GA77280@Romans-MacBook-Air.local> Hello, On Fri, Aug 12, 2016 at 02:08:55PM -0400, Jeff Dyke wrote: > i have configured haproxy 1.6 and nginx 1.10.1 and all is well, but i'd > like to be able to access the servers directly on occasion and not through > haproxy. Mainly this is done for troubleshooting or viewing a release > before it goes out to the public (its off the LB at the time). > > Unfortunately accessing the server directly gives me a 400 and the logs > show Broken Header error messages. Is there a way around this without > removing proxy_protocol from the vhost configuration? > > Thanks > > minimal config: > server { > listen 443 ssl http2 default_server proxy_protocol; > // other stuff > set_real_ip_from XXX.XXX.XX.XX; > set_real_ip_from NNN.NNN.NNN.NNN; > real_ip_header proxy_protocol; > // more stuff > } > > Example error.log entry > VX?www.example.com#" while reading PROXY protocol, client: YY.YY.YY.YY, > server: 0.0.0.0:8000 > 2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: "illegible > characters" You can add another "listen" directive without the proxy_protocol option. Nginx will always expect the PROXY protocol header if it's specified in the "listen" directive. -- Roman Arutyunyan From jeff.dyke at gmail.com Fri Aug 12 20:07:26 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 12 Aug 2016 16:07:26 -0400 Subject: proxy_protocol - access server directly In-Reply-To: <20160812182916.GA77280@Romans-MacBook-Air.local> References: <20160812182916.GA77280@Romans-MacBook-Air.local> Message-ID: Thank you Roman, i knew it would be painfully obvious once the solution was presented to me.... Very much appreciate it! 
Jeff On Fri, Aug 12, 2016 at 2:29 PM, Roman Arutyunyan wrote: > Hello, > > On Fri, Aug 12, 2016 at 02:08:55PM -0400, Jeff Dyke wrote: > > i have configured haproxy 1.6 and nginx 1.10.1 and all is well, but i'd > > like to be able to access the servers directly on occasion and not > through > > haproxy. Mainly this is done for troubleshooting or viewing a release > > before it goes out to the public (its off the LB at the time). > > > > Unfortunately accessing the server directly gives me a 400 and the logs > > show Broken Header error messages. Is there a way around this without > > removing proxy_protocol from the vhost configuration? > > > > Thanks > > > > minimal config: > > server { > > listen 443 ssl http2 default_server proxy_protocol; > > // other stuff > > set_real_ip_from XXX.XXX.XX.XX; > > set_real_ip_from NNN.NNN.NNN.NNN; > > real_ip_header proxy_protocol; > > // more stuff > > } > > > > Example error.log entry > > VX?www.example.com#" while reading PROXY protocol, client: YY.YY.YY.YY, > > server: 0.0.0.0:8000 > > 2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: "illegible > > characters" > > You can add another "listen" directive without the proxy_protocol option. > Nginx will always expect the PROXY protocol header if it's specified in the > "listen" directive. > > -- > Roman Arutyunyan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arut at nginx.com Fri Aug 12 20:49:47 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 12 Aug 2016 23:49:47 +0300 Subject: proxy_protocol - access server directly In-Reply-To: References: <20160812182916.GA77280@Romans-MacBook-Air.local> Message-ID: <20160812204947.GB77280@Romans-MacBook-Air.local> On Fri, Aug 12, 2016 at 04:07:26PM -0400, Jeff Dyke wrote: > Thank you Roman, i knew it would be painfully obvious once the solution was > presented to me.... > > Very much appreciate it! Just to clarify - you obviously have to specify another port in the new "listen" directive. > > Jeff > > On Fri, Aug 12, 2016 at 2:29 PM, Roman Arutyunyan wrote: > > > Hello, > > > > On Fri, Aug 12, 2016 at 02:08:55PM -0400, Jeff Dyke wrote: > > > i have configured haproxy 1.6 and nginx 1.10.1 and all is well, but i'd > > > like to be able to access the servers directly on occasion and not > > through > > > haproxy. Mainly this is done for troubleshooting or viewing a release > > > before it goes out to the public (its off the LB at the time). > > > > > > Unfortunately accessing the server directly gives me a 400 and the logs > > > show Broken Header error messages. Is there a way around this without > > > removing proxy_protocol from the vhost configuration? > > > > > > Thanks > > > > > > minimal config: > > > server { > > > listen 443 ssl http2 default_server proxy_protocol; > > > // other stuff > > > set_real_ip_from XXX.XXX.XX.XX; > > > set_real_ip_from NNN.NNN.NNN.NNN; > > > real_ip_header proxy_protocol; > > > // more stuff > > > } > > > > > > Example error.log entry > > > VX?www.example.com#" while reading PROXY protocol, client: YY.YY.YY.YY, > > > server: 0.0.0.0:8000 > > > 2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: "illegible > > > characters" > > > > You can add another "listen" directive without the proxy_protocol option. > > Nginx will always expect the PROXY protocol header if it's specified in the > > "listen" directive. 
> > > > -- > > Roman Arutyunyan > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From jeff.dyke at gmail.com Fri Aug 12 21:08:11 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 12 Aug 2016 17:08:11 -0400 Subject: proxy_protocol - access server directly In-Reply-To: <20160812204947.GB77280@Romans-MacBook-Air.local> References: <20160812182916.GA77280@Romans-MacBook-Air.local> <20160812204947.GB77280@Romans-MacBook-Air.local> Message-ID: On Fri, Aug 12, 2016 at 4:49 PM, Roman Arutyunyan wrote: > On Fri, Aug 12, 2016 at 04:07:26PM -0400, Jeff Dyke wrote: > > Thank you Roman, i knew it would be painfully obvious once the solution > was > > presented to me.... > > > > Very much appreciate it! > > Just to clarify - you obviously have to specify another port in the new > "listen" > directive. > > well not really, i just used the direct IP rather than the 0.0.0.0:443 or 443 listen directive and it all seemed to be fine. Should that cause issues going forward. nginx config test and restart was happy and tests on both sides of the site, api and www are good. Jeff > > > > Jeff > > > > On Fri, Aug 12, 2016 at 2:29 PM, Roman Arutyunyan > wrote: > > > > > Hello, > > > > > > On Fri, Aug 12, 2016 at 02:08:55PM -0400, Jeff Dyke wrote: > > > > i have configured haproxy 1.6 and nginx 1.10.1 and all is well, but > i'd > > > > like to be able to access the servers directly on occasion and not > > > through > > > > haproxy. Mainly this is done for troubleshooting or viewing a > release > > > > before it goes out to the public (its off the LB at the time). > > > > > > > > Unfortunately accessing the server directly gives me a 400 and the > logs > > > > show Broken Header error messages. 
Is there a way around this without > > > > removing proxy_protocol from the vhost configuration? > > > > > > > > Thanks > > > > > > > > minimal config: > > > > server { > > > > listen 443 ssl http2 default_server proxy_protocol; > > > > // other stuff > > > > set_real_ip_from XXX.XXX.XX.XX; > > > > set_real_ip_from NNN.NNN.NNN.NNN; > > > > real_ip_header proxy_protocol; > > > > // more stuff > > > > } > > > > > > > > Example error.log entry > > > > VX?www.example.com#" while reading PROXY protocol, client: > YY.YY.YY.YY, > > > > server: 0.0.0.0:8000 > > > > 2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: > "illegible > > > > characters" > > > > > > You can add another "listen" directive without the proxy_protocol > option. > > > Nginx will always expect the PROXY protocol header if it's specified > in the > > > "listen" directive. > > > > > > -- > > > Roman Arutyunyan > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Roman Arutyunyan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arut at nginx.com Fri Aug 12 21:24:45 2016 From: arut at nginx.com (Roman Arutyunyan) Date: Sat, 13 Aug 2016 00:24:45 +0300 Subject: proxy_protocol - access server directly In-Reply-To: References: <20160812182916.GA77280@Romans-MacBook-Air.local> <20160812204947.GB77280@Romans-MacBook-Air.local> Message-ID: <20160812212445.GC77280@Romans-MacBook-Air.local> On Fri, Aug 12, 2016 at 05:08:11PM -0400, Jeff Dyke wrote: > On Fri, Aug 12, 2016 at 4:49 PM, Roman Arutyunyan wrote: > > > On Fri, Aug 12, 2016 at 04:07:26PM -0400, Jeff Dyke wrote: > > > Thank you Roman, i knew it would be painfully obvious once the solution > > was > > > presented to me.... > > > > > > Very much appreciate it! > > > > Just to clarify - you obviously have to specify another port in the new > > "listen" > > directive. > > > > well not really, i just used the direct IP rather than the 0.0.0.0:443 or > 443 listen directive and it all seemed to be fine. Should that cause > issues going forward. nginx config test and restart was happy and tests on > both sides of the site, api and www are good. That's fine too. > > Jeff > > > > > > > Jeff > > > > > > On Fri, Aug 12, 2016 at 2:29 PM, Roman Arutyunyan > > wrote: > > > > > > > Hello, > > > > > > > > On Fri, Aug 12, 2016 at 02:08:55PM -0400, Jeff Dyke wrote: > > > > > i have configured haproxy 1.6 and nginx 1.10.1 and all is well, but > > i'd > > > > > like to be able to access the servers directly on occasion and not > > > > through > > > > > haproxy. Mainly this is done for troubleshooting or viewing a > > release > > > > > before it goes out to the public (its off the LB at the time). > > > > > > > > > > Unfortunately accessing the server directly gives me a 400 and the > > logs > > > > > show Broken Header error messages. Is there a way around this without > > > > > removing proxy_protocol from the vhost configuration? 
> > > > > > > > > > Thanks > > > > > > > > > > minimal config: > > > > > server { > > > > > listen 443 ssl http2 default_server proxy_protocol; > > > > > // other stuff > > > > > set_real_ip_from XXX.XXX.XX.XX; > > > > > set_real_ip_from NNN.NNN.NNN.NNN; > > > > > real_ip_header proxy_protocol; > > > > > // more stuff > > > > > } > > > > > > > > > > Example error.log entry > > > > > VX?www.example.com#" while reading PROXY protocol, client: > > YY.YY.YY.YY, > > > > > server: 0.0.0.0:8000 > > > > > 2016/08/11 11:25:28 [error] 23818#23818: *1445 broken header: > > "illegible > > > > > characters" > > > > > > > > You can add another "listen" directive without the proxy_protocol > > option. > > > > Nginx will always expect the PROXY protocol header if it's specified > > in the > > > > "listen" directive. > > > > > > > > -- > > > > Roman Arutyunyan > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Roman Arutyunyan > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From nginx-forum at forum.nginx.org Sat Aug 13 14:12:53 2016 From: nginx-forum at forum.nginx.org (parthi) Date: Sat, 13 Aug 2016 10:12:53 -0400 Subject: 413 Request Entity Too Large In-Reply-To: <44894a5e0cc275063afc799ef4b68319.NginxMailingListEnglish@forum.nginx.org> References: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org> <44894a5e0cc275063afc799ef4b68319.NginxMailingListEnglish@forum.nginx.org> Message-ID: How do 
you say that? I checked by reducing client_max_body_size to 500K, and it works exactly as configured: I'm unable to upload a file larger than 500K. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268922,268935#msg-268935 From r at roze.lv Sat Aug 13 17:04:11 2016 From: r at roze.lv (Reinis Rozitis) Date: Sat, 13 Aug 2016 20:04:11 +0300 Subject: 413 Request Entity Too Large In-Reply-To: References: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org> <44894a5e0cc275063afc799ef4b68319.NginxMailingListEnglish@forum.nginx.org> Message-ID: <716D31901C5C4D1DAC872DE4E55C3E80@MezhRoze>

> How do you say that?. Have checked by reducing the client_max_body_size to 500K it works perfectly fine i'm not able to upload a file size of not more than 500K.

Is your Tomcat also configured to accept large POSTs? See maxPostSize in https://tomcat.apache.org/tomcat-5.5-doc/config/http.html

rr From h.aboulfeth at genious.net Sat Aug 13 17:36:34 2016 From: h.aboulfeth at genious.net (Hamza Aboulfeth) Date: Sat, 13 Aug 2016 18:36:34 +0100 Subject: Weird problem with redirects In-Reply-To: <20160716074719.GR12280@daoine.org> References: <57895C6F.6080707@genious.Net> <20160716074719.GR12280@daoine.org> Message-ID: <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> Hello, We have formatted the server and installed everything over again; a week later the same problem occurred.
All redirects are actually sent from time to time to another host:

[root at genious106 ~]# curl -IL -H "host: hespress.com" xx.xx.xx.xx
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.1
Date: Sat, 13 Aug 2016 13:31:28 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: http://1755118211.com/
dbg-redirect: nginx

HTTP/1.1 302 Found
Server: nginx/1.2.1
Date: Sat, 13 Aug 2016 13:31:17 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
Set-Cookie: orgje=2PUrADQAAgABACUhr1f__yUhr1dAAAEAAAAlIa9XMgACAAEAJSGvV___JSGvVwA-; expires=Sun, 13-Aug-2017 13:31:17 GMT; path=/; domain=traffsell.com
Location: http://triuch.com/6lo1I

HTTP/1.1 200 OK
Server: nginx
Date: Sat, 13 Aug 2016 13:31:17 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding

[root at genious106 ~]#

Even PHP redirect requests are rerouted.

Please advise,
Hamza

> On 16 July 2016, at 08:47, Francis Daly wrote:
>
> On Fri, Jul 15, 2016 at 10:58:07PM +0100, Hamza Aboulfeth wrote:
>
> Hi there,
>
>> I have a weird problem that suddenly appeared on a client's website
>> yesterday. We have a redirection from non www to www and sometimes
>> the redirection sends somewhere else:
>>
>> [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x
>
> If that x.x.x.x is enough to make sure that this request gets to your
> nginx, then your nginx config is probably involved.
>
> If this only started yesterday, then changes since yesterday (or since
> your nginx was last restarted before yesterday) are probably most
> interesting.
>
> And as a very long shot: if you can "tcpdump" to see that nginx is sending
> one thing, but the client is receiving something else, then you'll want
> to look outside nginx at something else interfering with the traffic.
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lagged at gmail.com Sat Aug 13 18:05:50 2016 From: lagged at gmail.com (Andrei) Date: Sat, 13 Aug 2016 21:05:50 +0300 Subject: Weird problem with redirects In-Reply-To: <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> References: <57895C6F.6080707@genious.Net> <20160716074719.GR12280@daoine.org> <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> Message-ID: Have a read over http://spam.tamagothi.de/tag/yandex/ for triuch.com related references then double check your content On Sat, Aug 13, 2016 at 8:36 PM, Hamza Aboulfeth wrote: > Hello, > > We have formatted the server and installed everything over again, a week > later the same problem occurred. All redirects are actually sent from time > to time to another host: > > [root at genious106 ~]# curl -IL -H "host: hespress.com" xx.xx.xx.xx > HTTP/1.1 301 Moved Permanently > Server: nginx/1.10.1 > Date: Sat, 13 Aug 2016 13:31:28 GMT > Content-Type: text/html > Content-Length: 185 > Connection: keep-alive > Location: http://1755118211 > .com/ > dbg-redirect: nginx > > HTTP/1.1 302 Found > Server: nginx/1.2.1 > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=iso-8859-1 > Connection: keep-alive > Set-Cookie: orgje=2PUrADQAAgABACUhr1f__yUhr1dAAAEAAAAlIa9XMgACAAEAJSGvV___JSGvVwA-; > expires=Sun, 13-Aug-2017 13:31:17 GMT; path=/; domain=traffsell.com > Location: http://triuch.com/6lo1I > > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=utf-8 > Connection: keep-alive > Vary: Accept-Encoding > Vary: Accept-Encoding > > [root at genious106 ~]# > > Even php redirect requests are rerouted. > > Please advice, > Hamza > > > On 16 juil. 
2016, at 08:47, Francis Daly wrote: > > > > On Fri, Jul 15, 2016 at 10:58:07PM +0100, Hamza Aboulfeth wrote: > > > > Hi there, > > > >> I have a weird problem that suddenly appeared on a client's website > >> yesterday. We have a redirection from non www to www and sometimes > >> the redirection sends somewhere else: > >> > >> [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x > > > > If that x.x.x.x is enough to make sure that this request gets to your > > nginx, then your nginx config is probably involved. > > > > If this only started yesterday, then changes since yesterday (or since > > your nginx was last restarted before yesterday) are probably most > > interesting. > > > > And as a very long shot: if you can "tcpdump" to see that nginx is > sending > > one thing, but the client is receiving something else, then you'll want > > to look outside nginx at something else interfering with the traffic. > > > > Good luck with it, > > > > f > > -- > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Aug 14 06:24:07 2016 From: nginx-forum at forum.nginx.org (fffffffffffff) Date: Sun, 14 Aug 2016 02:24:07 -0400 Subject: proxy_cache:Why Receive >Transmit? Message-ID: structure: nginx<->backend server the nginx and backendserver enable gzip. 
nginx.conf:

worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 1024;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 120;
    gzip on;
    gzip_types application/json text/plain application/x-javascript application/javascript text/javascript text/css application/xml text/xml;
    gzip_min_length 1k;
    server_tokens off;

    upstream backend {
        server xxx.xxx.xxx.xxx;
        keepalive 120;
    }

    proxy_temp_path /tmp/cache_tmp;
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache1:100m inactive=7d max_size=10g;

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Range $http_range;
            proxy_set_header If-Range $http_if_range;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Accept-Encoding "gzip";
            proxy_cache cache1;
            proxy_cache_key $uri$is_args$args;
            proxy_cache_revalidate on;
        }
    }
}

Now, watching with iftop, RX > TX. Why? When I disable proxy_cache, RX = TX. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268938#msg-268938 From nginx-forum at forum.nginx.org Sun Aug 14 08:11:43 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sun, 14 Aug 2016 04:11:43 -0400 Subject: proxy_cache:Why Receive >Transmit? In-Reply-To: References: Message-ID: When nginx compresses a stream that the backend has already compressed, that stream usually gets bigger. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268941#msg-268941 From vbart at nginx.com Sun Aug 14 10:18:59 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Aug 2016 13:18:59 +0300 Subject: proxy_cache:Why Receive >Transmit?
In-Reply-To: References: Message-ID: <2534750.D3finmqIDp@vbart-laptop> On Sunday 14 August 2016 04:11:43 itpp2012 wrote: > When compressing (nginx) an already compressed stream (backend) that stream > usually gets bigger. > nginx doesn't use compression for already compressed responses. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Sun Aug 14 10:28:52 2016 From: nginx-forum at forum.nginx.org (fffffffffffff) Date: Sun, 14 Aug 2016 06:28:52 -0400 Subject: proxy_cache:Why Receive >Transmit? In-Reply-To: <2534750.D3finmqIDp@vbart-laptop> References: <2534750.D3finmqIDp@vbart-laptop> Message-ID: <8446b51a668e26fca72473efc92db17a.NginxMailingListEnglish@forum.nginx.org> yes, this is not a problem of compression, this is problem of proxy_cache, when i disable proxy_cache,RX=TX Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268943#msg-268943 From vbart at nginx.com Sun Aug 14 10:42:24 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Aug 2016 13:42:24 +0300 Subject: 413 Request Entity Too Large In-Reply-To: References: <1705497a9260e2d0226cdf021380a4ae.NginxMailingListEnglish@forum.nginx.org> <44894a5e0cc275063afc799ef4b68319.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2218235.Kmm6tm2RXG@vbart-laptop> On Saturday 13 August 2016 10:12:53 parthi wrote: > How do you say that?. Have checked by reducing the client_max_body_size to > 500K it works perfectly fine i'm not able to upload a file size of not more > than 500K. > In this case you have set the request body limit in nginx lower than on your backend server. wbr, Valentin V. Bartenev From vbart at nginx.com Sun Aug 14 10:52:46 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Aug 2016 13:52:46 +0300 Subject: proxy_cache:Why Receive >Transmit? 
In-Reply-To: <8446b51a668e26fca72473efc92db17a.NginxMailingListEnglish@forum.nginx.org> References: <2534750.D3finmqIDp@vbart-laptop> <8446b51a668e26fca72473efc92db17a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2625025.NQJtryPNKy@vbart-laptop> On Sunday 14 August 2016 06:28:52 fffffffffffff wrote:

> yes, this is not a problem of compression, this is problem of proxy_cache,
> when i disable proxy_cache,RX=TX
>

It's not clear what you are measuring with iftop, or even on which machine you ran it. In the case of proxying, data is being received and transmitted at the same time. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Sun Aug 14 11:17:19 2016 From: nginx-forum at forum.nginx.org (fffffffffffff) Date: Sun, 14 Aug 2016 07:17:19 -0400 Subject: proxy_cache:Why Receive >Transmit? In-Reply-To: <2625025.NQJtryPNKy@vbart-laptop> References: <2625025.NQJtryPNKy@vbart-laptop> Message-ID: <4fdcb33b5422842a1ed5276c6621f057.NginxMailingListEnglish@forum.nginx.org> I run iftop on nginx; you can look at the capture image: http://i.stack.imgur.com/bZmVI.png Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268946#msg-268946 From nginx-forum at forum.nginx.org Sun Aug 14 13:36:56 2016 From: nginx-forum at forum.nginx.org (fffffffffffff) Date: Sun, 14 Aug 2016 09:36:56 -0400 Subject: proxy_cache:Why Receive >Transmit? In-Reply-To: <4fdcb33b5422842a1ed5276c6621f057.NginxMailingListEnglish@forum.nginx.org> References: <2625025.NQJtryPNKy@vbart-laptop> <4fdcb33b5422842a1ed5276c6621f057.NginxMailingListEnglish@forum.nginx.org> Message-ID: OK, I think I know why. Another question: why do the cache files in the directory keep disappearing?
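For reference, the two proxy_cache_path parameters involved here do different things. A sketch of the line from the config quoted earlier in the thread, annotated with the documented behavior of each parameter:

```nginx
# inactive=7d: entries not *accessed* for 7 days are removed by the
# cache manager, regardless of whether they are still fresh.
# max_size=10g: once the total cache size exceeds this, the cache
# manager deletes the least recently used entries until it fits again.
proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache1:100m
                 inactive=7d max_size=10g;
```

Note also that this cache lives under /tmp; many systems purge /tmp periodically (e.g. tmpwatch or systemd-tmpfiles), which would delete cache files behind nginx's back, independently of these settings.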
I have already set inactive=7d and proxy_cache_revalidate on. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268947#msg-268947 From nginx-forum at forum.nginx.org Sun Aug 14 15:20:52 2016 From: nginx-forum at forum.nginx.org (fffffffffffff) Date: Sun, 14 Aug 2016 11:20:52 -0400 Subject: proxy_cache:Why Receive >Transmit? In-Reply-To: References: <2625025.NQJtryPNKy@vbart-laptop> <4fdcb33b5422842a1ed5276c6621f057.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8f7aa69426ebabda6dc30a9b55ea4376.NginxMailingListEnglish@forum.nginx.org> On Windows, the cache files keep disappearing: the total size of all cache files is 2G even though I set max_size=20g. On Linux it's OK. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268938,268948#msg-268948 From nginx-forum at forum.nginx.org Sun Aug 14 15:48:01 2016 From: nginx-forum at forum.nginx.org (Joe Curtis) Date: Sun, 14 Aug 2016 11:48:01 -0400 Subject: embeded php code not activating Message-ID: I have a weather station website running successfully under Apache2 on a Fedora-based server. I am in the process of transferring it to run on a Raspberry Pi 3 under nginx. This has transferred without a problem, with the exception of a small section of PHP code which loads a graphic of the moon phase selected by date. The HTML code calling the graphic (excerpt from /var/www/html/weather/index.htm):

<img src="moonphase.php" alt="Moon" align="left" border="0" height="64" hspace="10" />

moonphase.php calculates which image is required depending on parameters passed in moonphasetag.php.
/var/www/html/weather/moonphase.php = 1 AND $mp < 49) { $file1 = 'moon/waxing_crescent.jpg'; copy ($file1, $file2); } elseif ($mp == 49 OR $mp == 50) { $file1 = 'moon/first_quarter.jpg'; copy ($file1, $file2); } elseif ($mp > 50 AND $mp < 99) { $file1 = 'moon/waxing_gibbous.jpg'; copy ($file1, $file2); } elseif ($mp == 99 OR $mp == 100) { $file1 = 'moon/full.jpg'; copy ($file1, $file2); } elseif ($mp >= -99 AND $mp <= -51) { $file1 = 'moon/waning_gibbous.jpg'; copy ($file1, $file2); } elseif($mp == -50 OR $mp == -49) { $file1 = 'moon/last_quarter.jpg'; copy ($file1, $file2); } elseif ($mp >= -49 AND $mp <= -1) { $file1 = 'moon/waning_crescent.jpg'; copy ($file1, $file2); } } else { if ($mp == 0) { $file1 = 'moon/new.jpg'; copy ($file1, $file2); } elseif ($mp >= 1 AND $mp < 49) { $file1 = 'moon/swaxing_crescent.jpg'; copy ($file1, $file2); } elseif ($mp == 49 OR $mp == 50) { $file1 = 'moon/sfirst_quarter.jpg'; copy ($file1, $file2); } elseif ($mp > 50 AND $mp < 99) { $file1 = 'moon/swaxing_gibbous.jpg'; copy ($file1, $file2); } elseif ($mp == 99 OR $mp == 100) { $file1 = 'moon/full.jpg'; copy ($file1, $file2); } elseif ($mp >= -99 AND $mp <= -51) { $file1 = 'moon/swaning_gibbous.jpg'; copy ($file1, $file2); } elseif ($mp == -50 OR $mp == -49) { $file1 = 'moon/slast_quarter.jpg'; copy ($file1, $file2); } elseif ($mp >= -49 AND $mp <= -1) { $file1 = 'moon/swaning_crescent.jpg'; copy ($file1, $file2); } } $img = imagecreatefromjpeg($file2); imagecopyresampled($imgFinal, $img, 0, 0, 0, 0, 64, 64, 64, 64); header('Content-Type: image/jpeg'); imagejpeg($imgFinal, null, 100); imagedestroy($imgFinal); imagedestroy($img); ?> As far as I can tell I have set all the nginx parameters correctly:- /etc/nginx/sites-enabled/weather server { listen 80; listen [::]:80; server_name www.craythorneweather.info; root /var/www/html/weather; index index.html index.htm index.php; access_log /var/log/nginx/weather.access_log; error_log /var/log/nginx/weather.error_log info; location / { 
try_files $uri $uri/ =404; } location ~* \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; include /etc/nginx/fastcgi_params; } } The code works perfectly under apache but I am keen to have it operating under nginx on the RPI. Any pointers as to where I am going wrong would be appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268949,268949#msg-268949 From r1ch+nginx at teamliquid.net Sun Aug 14 16:42:06 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Sun, 14 Aug 2016 18:42:06 +0200 Subject: embeded php code not activating In-Reply-To: References: Message-ID: Visiting http://www.craythorneweather.info/moonphase.php show a HTTP/500, so you should examine your backend (PHP) error logs for more information. On Sun, Aug 14, 2016 at 5:48 PM, Joe Curtis wrote: > I have a weather station website running successfully under apache2 on a > fedora based server. I am in the process of transferring it to run on a > raspberry pi 3 under nginx. This has transferred without a problem with the > exception of a small section of PHP code which loads a graphic of the moon > phase selected by date. > > The html code calling the graphic is:- > > /var/www/html/weather/index.htm excerpt > > Moon align="left" > border="0" height="64" hspace="10" /> > > moonphase.php calculates which image is required depending on parameters > passed in moonphasetag.php. 
> > /var/www/html/weather/moonphase.php > > > error_reporting(E_ALL); > > require "moonphasetag.php"; > function int($s){return(int)preg_replace('/[^\-\d]*(\-?\d*).*/','$1',$s);} > $mp = int($MoonPercent); > $lat = substr($latitude, 0, 1); > $file1 = 'moon/back.jpg'; > $file2 = 'moon/moon.jpg'; > $img = @imagecreatetruecolor(64, 64); > $imgFinal = @imagecreatetruecolor(64, 64); > if ($lat == "N") { > if ($mp == 0) { > $file1 = 'moon/new.jpg'; > copy ($file1, $file2); > } elseif ($mp >= 1 AND $mp < 49) { > $file1 = 'moon/waxing_crescent.jpg'; > copy ($file1, $file2); > } elseif ($mp == 49 OR $mp == 50) { > $file1 = 'moon/first_quarter.jpg'; > copy ($file1, $file2); > } elseif ($mp > 50 AND $mp < 99) { > $file1 = 'moon/waxing_gibbous.jpg'; > copy ($file1, $file2); > } elseif ($mp == 99 OR $mp == 100) { > $file1 = 'moon/full.jpg'; > copy ($file1, $file2); > } elseif ($mp >= -99 AND $mp <= -51) { > $file1 = 'moon/waning_gibbous.jpg'; > copy ($file1, $file2); > } elseif($mp == -50 OR $mp == -49) { > $file1 = 'moon/last_quarter.jpg'; > copy ($file1, $file2); > } elseif ($mp >= -49 AND $mp <= -1) { > $file1 = 'moon/waning_crescent.jpg'; > copy ($file1, $file2); > } > } else { > if ($mp == 0) { > $file1 = 'moon/new.jpg'; > copy ($file1, $file2); > } elseif ($mp >= 1 AND $mp < 49) { > $file1 = 'moon/swaxing_crescent.jpg'; > copy ($file1, $file2); > } elseif ($mp == 49 OR $mp == 50) { > $file1 = 'moon/sfirst_quarter.jpg'; > copy ($file1, $file2); > } elseif ($mp > 50 AND $mp < 99) { > $file1 = 'moon/swaxing_gibbous.jpg'; > copy ($file1, $file2); > } elseif ($mp == 99 OR $mp == 100) { > $file1 = 'moon/full.jpg'; > copy ($file1, $file2); > } elseif ($mp >= -99 AND $mp <= -51) { > $file1 = 'moon/swaning_gibbous.jpg'; > copy ($file1, $file2); > } elseif ($mp == -50 OR $mp == -49) { > $file1 = 'moon/slast_quarter.jpg'; > copy ($file1, $file2); > } elseif ($mp >= -49 AND $mp <= -1) { > $file1 = 'moon/swaning_crescent.jpg'; > copy ($file1, $file2); > } > } > $img = 
imagecreatefromjpeg($file2); > imagecopyresampled($imgFinal, $img, 0, 0, 0, 0, 64, 64, 64, 64); > header('Content-Type: image/jpeg'); > imagejpeg($imgFinal, null, 100); > imagedestroy($imgFinal); > imagedestroy($img); > ?> > > As far as I can tell I have set all the nginx parameters correctly:- > > /etc/nginx/sites-enabled/weather > > server { > listen 80; > listen [::]:80; > > server_name www.craythorneweather.info; > > root /var/www/html/weather; > index index.html index.htm index.php; > > access_log /var/log/nginx/weather.access_log; > error_log /var/log/nginx/weather.error_log info; > > location / { > try_files $uri $uri/ =404; > } > location ~* \.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include /etc/nginx/fastcgi.conf; > include /etc/nginx/fastcgi_params; > } > } > > The code works perfectly under apache but I am keen to have it operating > under nginx on the RPI. > > Any pointers as to where I am going wrong would be appreciated. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,268949,268949#msg-268949 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry.martell at gmail.com Mon Aug 15 00:03:21 2016 From: larry.martell at gmail.com (Larry Martell) Date: Sun, 14 Aug 2016 20:03:21 -0400 Subject: debugging 504 Gateway Time-out In-Reply-To: References: Message-ID: On Tue, Aug 9, 2016 at 10:35 PM, Richard Stanway wrote: >> generated 0 bytes in 640738 msecs > > I would look into what is causing your backend to take over 10 minutes to > respond to that request. 
I have some requests that can take a long time to return - the users can request huge amount of data to be pulled from very large database tables with complex filters. But what I don't understand it how the nginx timeout works. My config file has this: proxy_read_timeout 600; proxy_connect_timeout 600; proxy_send_timeout 600; send_timeout 600; That's 10 minutes, right? But I get the 504 response before 10 minutes have passed since the request is sent. Why is that? > > On Tue, Aug 9, 2016 at 11:09 PM, Larry Martell > wrote: >> >> I just set up a django site with nginx and uWSGI. Some pages I go to >> work fine, but other fail with a 504 Gateway Time-out. I used to >> serve this site with apache and wsgi and these same pages worked fine. >> >> This is what I see in the nginx error log: >> >> 2016/08/09 16:40:19 [error] 17345#0: *1 upstream timed out (110: >> Connection timed out) while reading response header from upstream, >> client: 10.250.147.59, server: localhost, request: "GET >> >> /report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report >> HTTP/1.1", upstream: "uwsgi://unix:///usr/local/motor/motor.sock", >> host: "xx.xx.xx.xx", referrer: >> "http://xx.xx.xx.xx/report/CDSEM/MeasurementData/" >> >> When this happens I see this in the uwsgi error log: >> >> Tue Aug 9 16:42:57 2016 - >> uwsgi_response_writev_headers_and_body_do(): Broken pipe >> [core/writer.c line 296] during GET >> >> /report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report >> (10.250.147.59) >> IOError: write error >> [pid: 9230|app: 0|req: 36/155] 10.250.147.59 () {46 vars in 1333 >> bytes} [Tue Aug 9 16:32:16 2016] GET >> >> 
/report/CDSEM/MeasurementData/?group=&target_name=&recipe=&ep=&ppl=&roi_name=&lot=&date_time=8%2F1&tool_ids=23&field_1=Tool&field_2=Target&field_3=Recipe&field_4=Ep&field_5=Lot&field_6=Date+Time&field_7=Bottom&submit_preview=Generate+Report >> => generated 0 bytes in 640738 msecs (HTTP/1.1 200) 4 headers in 0 >> bytes (1 switches on core 0) >> >> Note the weird timestamps. The first uwsgi message is more then 2 >> minutes after the nginx message. And the second uwsgi message has a >> timestamp before the previous uwsgi message. What's up with that?? >> >> Here is my nginx config: >> >> worker_processes 1; >> >> events { >> worker_connections 1024; >> } >> >> http { >> include mime.types; >> default_type application/octet-stream; >> keepalive_timeout 65; >> sendfile on; >> >> # set client body size to 20M >> client_max_body_size 20M; >> >> include /etc/nginx/sites-enabled/*; >> } >> >> >> and here is my local site file: >> >> # motor_nginx.conf >> >> # the upstream component nginx needs to connect to >> upstream django { >> server unix:///usr/local/motor/motor.sock; # for a file socket >> } >> >> # configuration of the server >> server { >> # the port your site will be served on >> listen 80; >> # the domain name it will serve for >> server_name localhost; >> charset utf-8; >> >> # max upload size >> client_max_body_size 75M; # adjust to taste >> >> proxy_read_timeout 600; >> proxy_connect_timeout 600; >> proxy_send_timeout 600; >> send_timeout 600; >> >> # Django media >> location /media { >> alias /usr/local/motor/motor/media; >> } >> >> location /static { >> alias /usr/local/motor/motor/static; >> } >> >> # Finally, send all non-media requests to the Django server. >> location / { >> uwsgi_pass django; >> include /usr/local/motor/motor/uwsgi_params; >> } >> } >> >> How can I debug or fix this? >> >> Thanks! 
From francis at daoine.org Mon Aug 15 08:05:45 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Aug 2016 09:05:45 +0100 Subject: debugging 504 Gateway Time-out In-Reply-To: References: Message-ID: <20160815080545.GN12280@daoine.org> On Sun, Aug 14, 2016 at 08:03:21PM -0400, Larry Martell wrote: > On Tue, Aug 9, 2016 at 10:35 PM, Richard Stanway > wrote: Hi there, > I have some requests that can take a long time to return - the users > can request huge amount of data to be pulled from very large database > tables with complex filters. But what I don't understand it how the > nginx timeout works. My config file has this: > > proxy_read_timeout 600; > proxy_connect_timeout 600; > proxy_send_timeout 600; > send_timeout 600; > > That's 10 minutes, right? But I get the 504 response before 10 minutes > have passed since the request is sent. Why is that? The documentation for each of those directives can be found at urls of the form http://nginx.org/r/proxy_read_timeout Most likely, the proxy_* ones are not used because you do not have a matching proxy_pass. You use uwsgi_pass. So investigate directives like uwsgi_read_timeout, at http://nginx.org/r/uwsgi_read_timeout Cheers, f -- Francis Daly francis at daoine.org From redeemerofsouls666 at web.de Mon Aug 15 12:32:46 2016 From: redeemerofsouls666 at web.de (Max Meyer) Date: Mon, 15 Aug 2016 14:32:46 +0200 Subject: HTTP/2 without forward secrecy (Diffie-Hellman) Message-ID: Hi, for a test environment I successfully set up an nginx webserver (1.11.2) with HTTP/2. But for further tests I need to decrypt traffic with wireshark using the servers private key. 
For that I need to disable forward secrecy (since it is only a test
environment, security is not an issue).

So I changed the "ssl_ciphers" in my /sites-enabled/default file from:

ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";

into

ssl_ciphers "AES128-SHA";

So my configuration looks like this:

-----
server {
        listen 443 http2;

        root /var/www/html;
        index index.php index.html index.htm;

        ssl on;
        ssl_certificate /etc/ssl/server.crt;
        ssl_certificate_key /etc/ssl/private.key;

        ssl_protocols TLSv1.2;
        # ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
        ssl_ciphers "AES128-SHA";
        ssl_prefer_server_ciphers on;
}
-----

But now the server won't do HTTP/2 anymore; it falls back to HTTP/1.1.
I tried the same with an Apache webserver and it worked fine, so I guess
it is not a general problem with the chosen cipher.

Any ideas on what could be the problem?

thanks!

From luky-37 at hotmail.com Mon Aug 15 13:04:21 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 15 Aug 2016 13:04:21 +0000 Subject: AW: HTTP/2 without forward secrecy (Diffie-Hellman) In-Reply-To: References: Message-ID:

Hi,

> for a test environment I successfully set up an nginx webserver (1.11.2)
> with HTTP/2.
>
> But for further tests I need to decrypt traffic with wireshark using the
> servers private key.

The way to do this is to use a key log file from your browser, so Wireshark
is aware of the symmetric key used for the session. See [1] and [2].

> For that I need to disable forward secrecy (since it is only a test
> environment security is not an issue)
>
> So I changed the "ssl_ciphers" in my /sites-enabled/default file from:
>
> ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
> into
> ssl_ciphers "AES128-SHA";

This cannot work: HTTP/2 only allows certain ciphers [3]. The fact that
it works in Apache means Apache violates the RFC.

Also see the nginx manual [4].
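For illustration, a minimal sketch of that key-log approach (the file path is a placeholder; Firefox and Chrome honor the SSLKEYLOGFILE environment variable when started from the same shell):

```shell
# Have the browser append TLS session secrets to a key log file
# (path is an example) -- then start the browser from this same shell.
export SSLKEYLOGFILE="$HOME/tls-keys.log"
echo "TLS secrets will be logged to: $SSLKEYLOGFILE"

# In Wireshark: Preferences > Protocols > TLS (older builds: SSL) >
# "(Pre)-Master-Secret log file name" -> point it at the same file.
# Captured HTTP/2 traffic can then be decrypted without the server's
# private key and without weakening the server's cipher configuration.
```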
Regards,
Lukas

[1] https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
[2] https://wiki.wireshark.org/SSL
[3] http://http2.github.io/http2-spec/#TLSUsage
[4] http://nginx.org/en/docs/http/ngx_http_v2_module.html#example

From vbart at nginx.com Mon Aug 15 13:58:50 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 15 Aug 2016 16:58:50 +0300 Subject: HTTP/2 without forward secrecy (Diffie-Hellman) In-Reply-To: References: Message-ID: <3094337.F11FzrL6tn@vbart-workstation>

On Monday 15 August 2016 14:32:46 Max Meyer wrote:
[..]
> But now the server won't do HTTP/2 anymore, it falls back to HTTP/1.1.
> I tried the same with an Apache webserver and it worked fine, so I guess
> it is not a general problem with the chosen cipher.
>
> Any ideas on what could be the problem?
>

Nginx doesn't do anything special related to ciphers and HTTP/2. The
difference is probably caused by different OpenSSL versions, or different
clients used in these cases.

wbr, Valentin V. Bartenev

From nginx-forum at forum.nginx.org Tue Aug 16 08:22:51 2016 From: nginx-forum at forum.nginx.org (robbanp) Date: Tue, 16 Aug 2016 04:22:51 -0400 Subject: Logging application errors in the wrong place Message-ID:

Hi,

I use Nginx with Passenger, and the problem I have is that I get "some"
application log entries in /var/log/nginx/error.log. The problem is that
this log is turned off and is also pointed to another location
(/mnt/nginx/logs/).

My configs:

nginx.conf (http context):

error_log /mnt/nginx/logs/error.log notice;
access_log /mnt/nginx/logs/access.log main;

application.conf (server context):

passenger_enabled on;
passenger_set_header HTTP_X_QUEUE_START "t=${msec}000";
passenger_set_header X-Unique-Request-Id "t=${msec}000";
passenger_set_header X-Real-IP $remote_addr;
passenger_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10M;
access_log off;
error_log off;

Any ideas?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268986,268986#msg-268986

From zeal at freecharge.com Tue Aug 16 09:10:21 2016 From: zeal at freecharge.com (Zeal Vora) Date: Tue, 16 Aug 2016 14:40:21 +0530 Subject: Whitelisting IPs from Certificate Based Authentication Message-ID:

Hi

We have certificate based authentication for one of our websites. We want
users who visit from office IPs to skip the certificate based
authentication; for everyone else, authentication is required.

What would be the ideal way of doing this? I believe $remote_addr can play
a role in this.

*My current sample Nginx config:*

server {
listen 80;
ssl_client_certificate /backup/ca.crt;
ssl_verify_client on;
location / {
root /var/www/html;
index index.html;
}

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Tue Aug 16 11:19:35 2016 From: nginx-forum at forum.nginx.org (khav) Date: Tue, 16 Aug 2016 07:19:35 -0400 Subject: Disabling HTTP/2 for a specific location Message-ID:

I use nginx 1.11.3 with the nginx upload module. The problem is that the
nginx upload module doesn't support HTTP/2, so when you upload you get a
500 Internal Error.

For now I am trying to use a separate server block to disable http2 just
for the upload and enable it for the rest:

server {
listen 443;
server_name mywebsite.com/upload;
....
}

server {
listen 443 http2 default_server reuseport deferred;
server_name mywebsite.com;
...
}

However it's not working as I expected: nginx is still using HTTP/2 for
the /upload location.

Regards

P.S. In case you have some time, maybe you could suggest a patch to make
it work (
https://github.com/Austinb/nginx-upload-module/blob/2.2/ngx_http_upload_module.c)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268988,268988#msg-268988

From luky-37 at hotmail.com Tue Aug 16 11:37:32 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 16 Aug 2016 11:37:32 +0000 Subject: AW: Disabling HTTP/2 for a specific location In-Reply-To: References: Message-ID:

> I use nginx 1.11.3 with the nginx upload module. The problem is that the
> nginx upload module doesn't support HTTP/2, so when you upload you get a
> 500 Internal Error.

Use a dedicated subdomain, like upload.mywebsite.com.

> For now I am trying to use a separate server block to disable http2 just
> for the upload and enable it for the rest

This cannot work. The protocol is set in stone before a request is emitted,
therefore you cannot select the protocol based on the location.

> server_name mywebsite.com/upload;

That's not a valid server_name. A server_name is a hostname.

Lukas

From jim at ohlste.in Tue Aug 16 12:02:57 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 16 Aug 2016 08:02:57 -0400 Subject: AW: Disabling HTTP/2 for a specific location In-Reply-To: References: Message-ID: <489600a0-edcb-ba5d-29e1-ba9d4fb27de9@ohlste.in>

Hello,

On 08/16/16 07:37, Lukas Tribus wrote:
>> I use nginx 1.11.3 with the nginx upload module. The problem is that the
>> nginx upload module doesn't support HTTP/2, so when you upload you get a
>> 500 Internal Error.
>
> Use a dedicated subdomain, like upload.mywebsite.com.

This will not work unless the subdomain is also on a different IP and
http2 is not enabled on that IP. If http2 is enabled for an IP then
http2 is enabled on all servers listening on that IP, whether explicitly
enabled or not.
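To illustrate that behavior, a sketch with hypothetical names (not a recommended setup):

```nginx
# Both servers share the same address and port 443. Because the second
# one enables http2, a client connecting to this IP can negotiate
# HTTP/2 via ALPN and then request upload.mywebsite.com over it,
# even though that server's listen directive lacks "http2".
server {
    listen 443 ssl;
    server_name upload.mywebsite.com;
}

server {
    listen 443 ssl http2;
    server_name mywebsite.com;
}
```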
> > > >> For now i am trying to use a separate server block to disable http2 just >> for the upload and enable it for the rest > > This cannot work. The protocol is set in stone before a request is emitted, therefor you cannot select the protocol based on the location. > > > >> server_name mywebsite.com/upload; > > That's not a valid server_name. A server_name is a hostname. -- Jim From luky-37 at hotmail.com Tue Aug 16 12:25:20 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 16 Aug 2016 12:25:20 +0000 Subject: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: <489600a0-edcb-ba5d-29e1-ba9d4fb27de9@ohlste.in> References: , <489600a0-edcb-ba5d-29e1-ba9d4fb27de9@ohlste.in> Message-ID: Hello, On 08/16/16 07:37, Lukas Tribus wrote: >> I use nginx 1.11.3 with nginx upload module.The problem is that Nginx upload >> module don't support HTTP/2 and thus when you upload you get 500 Internal >> Error. > >> Use a dedicated subdomain, like upload.mywebsite.com. > > This will not work unless the subdomain is also on a different IP and > http2 is not enabled on that IP. If http2 is enabled for an IP then > http2 is enabled on all servers listening on that IP, whether explicitly > enabled or not. Right, I missed this one. A different workaround would be to recirculate the request to a http only configuration (proxying location /upload to a http only port on localhost). 
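A rough sketch of that recirculation workaround (the port, hostnames, and paths are placeholders):

```nginx
# Public HTTPS/HTTP2 server: hand /upload off to a plain-HTTP listener
# on localhost, so the upload module only ever sees HTTP/1.1 requests.
server {
    listen 443 ssl http2;
    server_name mywebsite.com;

    location /upload {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}

# Internal HTTP-only server that actually runs the upload module.
server {
    listen 127.0.0.1:8081;

    location /upload {
        # upload module directives would go here
    }
}
```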
Lukas

From nginx-forum at forum.nginx.org Tue Aug 16 13:50:28 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 16 Aug 2016 09:50:28 -0400 Subject: Nginx | fastcgi_cache_key $http_cookie for Joomla Message-ID: <0542073563cc00349de6bd31c79bd6ec.NginxMailingListEnglish@forum.nginx.org>

So I found the following for Drupal:
https://forum.nginx.org/read.php?2,220510,220563#msg-220563

http {
map $http_cookie $session_id {
default '';
~SESS(?<session_guid>[[:alnum:]]+) $session_guid;
}
}

server {
location ~ \.php$ {
fastcgi_cache_key $session_cookie$request_method$scheme$host$request_uri;
}
}

I want to implement it for Joomla, but Joomla's session cookie names do
not contain "SESS". I wanted some help modifying this for a Joomla
environment, so that pages can be cached and served while registered users
are logged in, and each user only receives their own pages because the
cookie is matched in the cache key.

Example Joomla based site where the session cookie format can be seen:
http://www.networkflare.com/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268992,268992#msg-268992

From reallfqq-nginx at yahoo.fr Tue Aug 16 13:55:13 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 16 Aug 2016 15:55:13 +0200 Subject: HTTP/2 without forward secrecy (Diffie-Hellman) In-Reply-To: References: Message-ID:

On Mon, Aug 15, 2016 at 3:04 PM, Lukas Tribus wrote:

> > For that I need to disable forward secrecy (since it is only a test
> > environment security is not an issue)
> >
> > So I changed the "ssl_ciphers" in my /sites-enabled/default file from:
> >
> > ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
> > into
> > ssl_ciphers "AES128-SHA";
>
> This cannot work: HTTP/2 only allows certain ciphers [3]. The fact that
> it works in Apache means Apache violates the RFC.
>
> Also see the nginx manual [4].
>

That is a wrong assumption and an inadequate blame on Apache. The list you
are mentioning, which is directly linked in the nginx example you
referenced (RFC 7540, Appendix A),
uses the MAY keyword, defined as 'truly optional'. nginx has made the
choice of strictly following the RFC's advice, but a technology that does
not follow it commits no violation *per se*.

> [3] http://http2.github.io/http2-spec/#TLSUsage
> [4] http://nginx.org/en/docs/http/ngx_http_v2_module.html#example

Thus, this configuration *can* work, and the problem is definitely
elsewhere (cf. Valentin's message for example).
---
*B. R.*

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From vbart at nginx.com Tue Aug 16 14:05:15 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Aug 2016 17:05:15 +0300 Subject: HTTP/2 without forward secrecy (Diffie-Hellman) In-Reply-To: References: Message-ID: <12472501.tc57Tce4xt@vbart-workstation>

On Tuesday 16 August 2016 15:55:13 B.R. wrote:
[..]
> nginx has made the choice of strictly following RFC advice
[..]

This is a false statement, nginx doesn't do any restriction
regarding HTTP/2 and TLS ciphers configuration.

wbr, Valentin V. Bartenev

From luky-37 at hotmail.com Tue Aug 16 15:11:55 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 16 Aug 2016 15:11:55 +0000 Subject: AW: HTTP/2 without forward secrecy (Diffie-Hellman) In-Reply-To: <12472501.tc57Tce4xt@vbart-workstation> References: , <12472501.tc57Tce4xt@vbart-workstation> Message-ID:

> This is a false statement, nginx doesn't do any restriction
> regarding HTTP/2 and TLS ciphers configuration.

Good thing; likely the restriction is on the browser side, and Apache was
not configured with the exact same cipher suite.

> The list you are mentioning and which is directly linked in the nginx
> example uses the MAY keyword

The MAY keyword concerns the *error handling in case the cipher is
blacklisted*, but it is section 9.2.2 of the RFC that defines the
behavior, and it uses "SHOULD NOT".

Still not a violation of the RFC, you are right. And indeed it seems this
part of the RFC is implemented on the browser side, rather than on the
server.
Be that as it may, the configuration is invalid for HTTP/2, and here is
the *MUST*:

> deployments of HTTP/2 that use TLS 1.2 *MUST* support
> TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 [TLS-ECDHE]
> with the P-256 elliptic curve [FIPS186].

So as I said initially, using key log files is the way to go: you cannot
always change your production configuration for a sniff anyway, and you
may not always have access to the server. So better get familiar with the
key log file handling and be done with it.

Lukas

From nginx-forum at forum.nginx.org Tue Aug 16 15:56:22 2016 From: nginx-forum at forum.nginx.org (khalifanizar) Date: Tue, 16 Aug 2016 11:56:22 -0400 Subject: Nginx remote access_log file Message-ID: <91a765a0bb0a54deae91246cb64b90a5.NginxMailingListEnglish@forum.nginx.org>

Hello,

Actually, I have Nginx and Logstash installed on the same machine, and I
want to separate them. I have installed Logstash on another machine.

How can I store the Nginx access_log file on the second machine?
Or how can I set up a remote Logstash input file?
Should I use another tool like logstash-forwarder?

Best regards,
Nizar.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268997,268997#msg-268997

From r at roze.lv Tue Aug 16 17:06:42 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 16 Aug 2016 20:06:42 +0300 Subject: Nginx remote access_log file In-Reply-To: <91a765a0bb0a54deae91246cb64b90a5.NginxMailingListEnglish@forum.nginx.org> References: <91a765a0bb0a54deae91246cb64b90a5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5747E898D1C14F889D8AB12C2EA0AD57@NeiRoze>

> How can I store the Nginx access_log file on the second machine?
> Or how can I set up a remote Logstash input file?
> Should I use another tool like logstash-forwarder?
You can just write the nginx access log via syslog directly to Logstash
(or, if you don't want nginx writing anything over the network, pretty
much all syslog daemons (syslog-ng, rsyslog etc) support reading local
files and piping them to a remote listener).

A generic example:

1. define the input (on whatever port you prefer) in the Logstash config:

input {
udp {
port => 5000
type => syslog
}
}

2. point the nginx log to it:

access_log syslog:server=your.logstash.ip:5000;

(more details about tagging the messages etc in
http://nginx.org/en/docs/syslog.html )

rr

From mikydevel at yahoo.fr Wed Aug 17 12:05:24 2016 From: mikydevel at yahoo.fr (Mik J) Date: Wed, 17 Aug 2016 12:05:24 +0000 (UTC) Subject: Problem with SSL handshake References: <1712616749.26162304.1471435524380.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <1712616749.26162304.1471435524380.JavaMail.yahoo@mail.yahoo.com>

nginx version: 1.6.2

Hello,

The client and Nginx server seem to have a problem establishing an SSL
connection. In the logs I have this:

[crit] 18386#0: *1 SSL_do_handshake() failed (SSL: error:14094456:SSL
routines:SSL3_READ_BYTES:tlsv1 unsupported extension:SSL alert number 110)
while SSL handshaking, client: @IP_client, server: 0.0.0.0:443

I have searched for this message on Google but couldn't find anything that
would help.

My vhost configuration:

server {
        listen 80;
        listen 443 ssl;
        server_name www.example.org;
...
        ssl on;
        ssl_certificate /etc/ssl/certs/cert.crt;
        ssl_certificate_key /etc/ssl/private/key.key;
        ssl_session_cache shared:SSL:10m;
}

Do you know what could be wrong and where I should dig to solve this
problem.

Regards

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed Aug 17 16:03:48 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Wed, 17 Aug 2016 12:03:48 -0400 Subject: Start nginx worker process with same user as master process Message-ID:

I have set up the nginx master process with a user, let's say user1.
In nginx.conf, the user directive is set to nginx (user nginx;).
I understand that worker processes will be spawned as user nginx when I
start nginx.
Is there a way to start the worker processes as user1 as well, without
changing anything in the nginx.conf file?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269006,269006#msg-269006

From francis at daoine.org Wed Aug 17 18:10:10 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Aug 2016 19:10:10 +0100 Subject: Start nginx worker process with same user as master process In-Reply-To: References: Message-ID: <20160817181010.GO12280@daoine.org>

On Wed, Aug 17, 2016 at 12:03:48PM -0400, Sushma wrote:

Hi there,

> I have set up the nginx master process with a user, let's say user1.

By that, do you mean "you run nginx as user1", or something else?

> In nginx.conf, the user directive is set to nginx (user nginx;).
> I understand that worker processes will be spawned as user nginx when I
> start nginx.

Only if user "user1" has permission to change userid to user "nginx".

> Is there a way to start the worker processes as user1 as well, without
> changing anything in the nginx.conf file?

When I try it, I see

nginx: [warn] the "user" directive makes sense only if the master process
runs with super-user privileges, ignored in
/usr/local/nginx/conf/nginx.conf:6

and my master process and my worker process are running under the same
uid as each other.

Do you see something different?

f
-- Francis Daly francis at daoine.org

From junk at slact.net Wed Aug 17 21:36:00 2016 From: junk at slact.net (nobody) Date: Wed, 17 Aug 2016 21:36:00 +0000 Subject: How to calculate used shared memory in slab?
Message-ID: <1665b2da-93bc-d59e-6d0e-a20be28b9a57@slact.net>

Hello,

I'm the developer of Nchan (previously known as the http push module) --
https://github.com/slact/nchan. I'm writing a stub_status-like stats
output handler, and I want to output the amount of shared memory actually
used. I could keep track of the raw bytes allocated with wrappers around
ngx_slab_alloc, but that does not produce accurate numbers because the
slab allocator works on pages. I'm having some trouble understanding the
internal workings of the slab allocator. Is there a straightforward way
to get the number of pages currently in use?

Thanks,
Leo

From mdounin at mdounin.ru Wed Aug 17 23:12:34 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Aug 2016 02:12:34 +0300 Subject: Problem with SSL handshake In-Reply-To: <1712616749.26162304.1471435524380.JavaMail.yahoo@mail.yahoo.com> References: <1712616749.26162304.1471435524380.JavaMail.yahoo.ref@mail.yahoo.com> <1712616749.26162304.1471435524380.JavaMail.yahoo@mail.yahoo.com> Message-ID: <20160817231234.GA24741@mdounin.ru>

Hello!

On Wed, Aug 17, 2016 at 12:05:24PM +0000, Mik J wrote:

> nginx version: 1.6.2
> Hello,
> The client and Nginx server seem to have a problem establishing an SSL
> connection. In the logs I have this:
> [crit] 18386#0: *1 SSL_do_handshake() failed (SSL: error:14094456:SSL
> routines:SSL3_READ_BYTES:tlsv1 unsupported extension:SSL alert number
> 110) while SSL handshaking, client: @IP_client, server: 0.0.0.0:443
> I have searched for this message on Google but couldn't find anything
> that would help.
> My vhost configuration:
> server {
>         listen 80;
>         listen 443 ssl;
>         server_name www.example.org;
> ...
>         ssl on;

Note: such a configuration is invalid and will try to negotiate SSL on
the port 80. You should remove "ssl on", just "listen ... ssl" on
appropriate sockets is enough. See
http://nginx.org/en/docs/http/configuring_https_servers.html for details.

>         ssl_certificate /etc/ssl/certs/cert.crt;
>         ssl_certificate_key /etc/ssl/private/key.key;
>         ssl_session_cache shared:SSL:10m;
> }
> Do you know what could be wrong and where I should dig to solve this
> problem.

The message suggests that the client aborted the connection. The reason
claimed is defined as follows,
https://tools.ietf.org/html/rfc5246#section-7.2.2:

   unsupported_extension
      sent by clients that receive an extended server hello containing
      an extension that they did not put in the corresponding client
      hello.  This message is always fatal.

You may try looking at the handshake using Wireshark to see if it's
indeed what happens. You may also try looking for additional information
on the client side.

Quick search suggests such errors previously appeared due to bugs in
OpenSSL beta versions, see, e.g., here:

http://openssl.6102.n7.nabble.com/1-0-1beta1-incompatibility-with-gnutls-td8366.html

If you are using some attic version of OpenSSL (much like the version of
nginx you are using), it may be a good idea to check if an upgrade fixes
things.

This also can be a bug in the client. In this case, probably disabling
TLS via ssl_protocols is the only option if you want to support the
client, though it's not a solution to be used nowadays.

-- Maxim Dounin http://nginx.org/

From nginx-forum at forum.nginx.org Thu Aug 18 07:48:31 2016 From: nginx-forum at forum.nginx.org (leeyiw) Date: Thu, 18 Aug 2016 03:48:31 -0400 Subject: Async operation in pre-access phase? Message-ID: <39131cb90c4ef31dc9a6c1bd6c6074aa.NginxMailingListEnglish@forum.nginx.org>

Hi, everyone:

I want to develop a module that does send/recv asynchronously in a
pre-access handler, and my pre-access handler will return NGX_AGAIN.
My question is: when my send/recv callback is called, how can I continue
the request? By default it blocks forever.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269032,269032#msg-269032

From nginx-forum at forum.nginx.org Thu Aug 18 16:53:05 2016 From: nginx-forum at forum.nginx.org (khav) Date: Thu, 18 Aug 2016 12:53:05 -0400 Subject: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: References: Message-ID: <6a3cb2dfbf6ad87a7aa3326b3f5df9cd.NginxMailingListEnglish@forum.nginx.org>

@Lukas do you mean something like this?

location = /upload {
proxy_pass http://mywebsite.com/upload;
}

server {
listen 80;
server_name mywebsite.com;
location = /upload {
}
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268988,269037#msg-269037

From nginx-forum at forum.nginx.org Thu Aug 18 18:20:37 2016 From: nginx-forum at forum.nginx.org (tungstone) Date: Thu, 18 Aug 2016 14:20:37 -0400 Subject: Full Filename Directory Listing In-Reply-To: References: <8bde2327465f7f11e202fe40328c3d00.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Sincere and real appreciation, fagtron. Could you give us a tip on how to
edit src/http/modules/ngx_http_autoindex_module.c so that it does not
show the date and size columns, only the file name? Have a good summer!

tungstone

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,124400,269038#msg-269038

From luky-37 at hotmail.com Thu Aug 18 19:59:14 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 18 Aug 2016 19:59:14 +0000 Subject: AW: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: <6a3cb2dfbf6ad87a7aa3326b3f5df9cd.NginxMailingListEnglish@forum.nginx.org> References: , <6a3cb2dfbf6ad87a7aa3326b3f5df9cd.NginxMailingListEnglish@forum.nginx.org> Message-ID:

> @Lukas do you mean something like this

Yes, that's what I mean.

Lukas

From mdounin at mdounin.ru Thu Aug 18 20:04:29 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Aug 2016 23:04:29 +0300 Subject: Async operation in pre-access phase?
In-Reply-To: <39131cb90c4ef31dc9a6c1bd6c6074aa.NginxMailingListEnglish@forum.nginx.org> References: <39131cb90c4ef31dc9a6c1bd6c6074aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160818200429.GG24741@mdounin.ru>

Hello!

On Thu, Aug 18, 2016 at 03:48:31AM -0400, leeyiw wrote:

> Hi, everyone:
> I want to develop a module that does send/recv asynchronously in a
> pre-access handler, and my pre-access handler will return NGX_AGAIN.
> My question is: when my send/recv callback is called, how can I continue
> the request? By default it blocks forever.

Try looking into the limit_req module,
src/http/modules/ngx_http_limit_req_module.c, it contains relevant code.

In general, you have to call

ngx_http_core_run_phases(r);

once you are done with your async processing.

-- Maxim Dounin http://nginx.org/

From mikydevel at yahoo.fr Thu Aug 18 20:41:38 2016 From: mikydevel at yahoo.fr (Mik J) Date: Thu, 18 Aug 2016 20:41:38 +0000 (UTC) Subject: Problem with SSL handshake In-Reply-To: <20160817231234.GA24741@mdounin.ru> References: <1712616749.26162304.1471435524380.JavaMail.yahoo.ref@mail.yahoo.com> <1712616749.26162304.1471435524380.JavaMail.yahoo@mail.yahoo.com> <20160817231234.GA24741@mdounin.ru> Message-ID: <1159917757.27526121.1471552898379.JavaMail.yahoo@mail.yahoo.com>

Thank you Maxim for your answer. You are right, I should start by
upgrading to a more recent version. This machine is a Debian machine
pointed at its release source list. Next I'll do captures. I'll also
correct my configuration.

Poka

On Thursday 18 August 2016 at 1:12 AM, Maxim Dounin wrote:

Hello!

On Wed, Aug 17, 2016 at 12:05:24PM +0000, Mik J wrote:

> nginx version: 1.6.2
> Hello,
> The client and Nginx server seem to have a problem establishing an SSL
> connection.
In the logs I have this:

[crit] 18386#0: *1 SSL_do_handshake() failed (SSL: error:14094456:SSL
routines:SSL3_READ_BYTES:tlsv1 unsupported extension:SSL alert number 110)
while SSL handshaking, client: @IP_client, server: 0.0.0.0:443

I have searched for this message on Google but couldn't find anything that
would help.

> My vhost configuration:
> server {
>         listen 80;
>         listen 443 ssl;
>         server_name www.example.org;
> ...
>         ssl on;

Note: such a configuration is invalid and will try to negotiate SSL on
the port 80. You should remove "ssl on", just "listen ... ssl" on
appropriate sockets is enough. See
http://nginx.org/en/docs/http/configuring_https_servers.html for details.

>         ssl_certificate /etc/ssl/certs/cert.crt;
>         ssl_certificate_key /etc/ssl/private/key.key;
>         ssl_session_cache shared:SSL:10m;
> }
> Do you know what could be wrong and where I should dig to solve this
> problem.

The message suggests that the client aborted the connection. The reason
claimed is defined as follows,
https://tools.ietf.org/html/rfc5246#section-7.2.2:

   unsupported_extension
      sent by clients that receive an extended server hello containing
      an extension that they did not put in the corresponding client
      hello.  This message is always fatal.

You may try looking at the handshake using Wireshark to see if it's
indeed what happens. You may also try looking for additional information
on the client side.

Quick search suggests such errors previously appeared due to bugs in
OpenSSL beta versions, see, e.g., here:

http://openssl.6102.n7.nabble.com/1-0-1beta1-incompatibility-with-gnutls-td8366.html

If you are using some attic version of OpenSSL (much like the version of
nginx you are using), it may be a good idea to check if an upgrade fixes
things.

This also can be a bug in the client.
In this case, probably disabling TLS via ssl_protocols is the only option
if you want to support the client, though it's not a solution to be used
nowadays.

-- Maxim Dounin http://nginx.org/

_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tjlp at sina.com Fri Aug 19 01:25:46 2016 From: tjlp at sina.com (tjlp at sina.com) Date: Fri, 19 Aug 2016 09:25:46 +0800 Subject: Can nginx has the capability to dynamically limit the connections when the upstream server has full load Message-ID: <20160819012546.DE38FAC009C@webmail.sinamail.sina.com.cn>

Hi,

I have a scenario: my backend servers provide a URL to query whether the
server can accept new connections (the HTTP response body is "true" or
"false"). Each server also has a configured maximum number of allowed
sessions. Now I want to use Nginx as the load balancer. Does Nginx provide
this kind of load balancing configuration: when a backend server can't
accept new connections, no new session is created for that full server?

Thanks
Liu Peng

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri Aug 19 07:10:14 2016 From: nginx-forum at forum.nginx.org (khav) Date: Fri, 19 Aug 2016 03:10:14 -0400 Subject: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: <6a3cb2dfbf6ad87a7aa3326b3f5df9cd.NginxMailingListEnglish@forum.nginx.org> References: <6a3cb2dfbf6ad87a7aa3326b3f5df9cd.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Here is a simplified version of the config. I get 405 (Method Not Allowed). The documentation for the module says that this error happens if the request method is not POST: http://www.grid.net.ru/nginx/upload.en.html

server { listen 443 http2; location = /upload { proxy_pass http://mywebsite.com/upload; } } server { listen 80; server_name mywebsite.com; location = /upload { #module settings go here upload_pass @uploadhandler; } location @uploadhandler { root /var/www/mywebsite.com/public_html/www; rewrite ^ /upload.php last; } }

Thanks for the help Lukas Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268988,269048#msg-269048

From jsharan15 at gmail.com Fri Aug 19 11:36:41 2016 From: jsharan15 at gmail.com (Sharan J) Date: Fri, 19 Aug 2016 17:06:41 +0530 Subject: Slow read attack in HTTP/2 Message-ID:

Hi, Would like to know what timeouts should be configured to mitigate a slow read attack in HTTP/2. Referred -> https://trac.nginx.org/nginx/changeset/4ba91a4c66a3010e50b84fc73f05e84619396885/nginx?_ga=1.129092111.226709851.1453970886 Could not understand what you have done when all streams are stuck on exhausted connection or stream windows. Could you please explain this to me? Thanks, Sharan -------------- next part -------------- An HTML attachment was scrubbed... URL:

From vbart at nginx.com Fri Aug 19 11:58:37 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Fri, 19 Aug 2016 14:58:37 +0300 Subject: Slow read attack in HTTP/2 In-Reply-To: References: Message-ID: <5620176.LHRUYDNH9X@vbart-workstation> On Friday 19 August 2016 17:06:41 Sharan J wrote: > Hi, > > Would like to know what timeouts should be configured to mitigate slow read > attack in HTTP/2. > A quote from the commit: | Now almost all the request timeouts work like in HTTP/1.x connections, so | the "client_header_timeout", "client_body_timeout", and "send_timeout" are | respected. These timeouts close the request. and the documentation links: http://nginx.org/r/client_header_timeout http://nginx.org/r/client_body_timeout http://nginx.org/r/send_timeout > Referred -> > https://trac.nginx.org/nginx/changeset/4ba91a4c66a3010e50b84fc73f05e84619396885/nginx?_ga=1.129092111.226709851.1453970886 > > Could not understand what you have done when all streams are stuck on > exhausted connection or stream windows. Please can you explain me the same. [..] Each stream has its own timeout configured by the directives mentioned above. If there's no progress on a stream during one of these timeouts then the stream is closed. wbr, Valentin V. 
Bartenev

From alarig at swordarmor.fr Fri Aug 19 12:12:28 2016 From: alarig at swordarmor.fr (Alarig Le Lay) Date: Fri, 19 Aug 2016 14:12:28 +0200 Subject: =?UTF-8?Q?IPv6_only_resolver_doesn=E2=80=99t_work?= Message-ID: <2905c99e-94dd-5dd7-6cca-41c86d0fe1a0@swordarmor.fr>

Hi, On my server, I don't have a v4 resolver, just an IPv6 one:

bulbizarre ~ # cat /etc/resolv.conf # Generated by dhcpcd from eth0.dhcp, eth0.ra # /etc/resolv.conf.head can replace this line domain swordarmor.fr nameserver 2001:470:1f13:138::1 # /etc/resolv.conf.tail can replace this line

I have some error messages about "no resolver defined":

==> /var/log/nginx/error_log <== 2016/08/19 14:00:03 [warn] 29733#29733: no resolver defined to resolve ocsp.startssl.com while requesting certificate status, responder: ocsp.startssl.com 2016/08/19 14:00:03 [error] 29733#29733: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com

But, my resolver is perfectly working:

bulbizarre ~ # dig -t A ocsp.startssl.com ; <<>> DiG 9.10.3-P4 <<>> -t A ocsp.startssl.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50744 ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;ocsp.startssl.com. IN A ;; ANSWER SECTION: ocsp.startssl.com. 600 IN CNAME ocsp.startssl.com.akamaized.net. ocsp.startssl.com.akamaized.net. 11236 IN CNAME a36.d.akamai.net. a36.d.akamai.net. 20 IN A 2.18.245.56 a36.d.akamai.net. 20 IN A 2.18.245.43 ;; Query time: 529 msec ;; SERVER: 2001:470:1f13:138::1#53(2001:470:1f13:138::1) ;; WHEN: Fri Aug 19 14:01:28 CEST 2016 ;; MSG SIZE rcvd: 150

-- alarig -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL:

From nginx-forum at forum.nginx.org Fri Aug 19 12:22:42 2016 From: nginx-forum at forum.nginx.org (khav) Date: Fri, 19 Aug 2016 08:22:42 -0400 Subject: AW: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: References: Message-ID: <3c2a91fea63e079643d09acdaf51cc46.NginxMailingListEnglish@forum.nginx.org>

Here is a simplified version of the config. I get 405 (Method Not Allowed). The documentation for the module says that this error happens if the request method is not POST: http://www.grid.net.ru/nginx/upload.en.html

server { listen 443 http2; location = /upload { proxy_pass http://mywebsite.com/upload; } } server { listen 80; server_name mywebsite.com; location = /upload { #module settings go here upload_pass @uploadhandler; } location @uploadhandler { root /var/www/mywebsite.com/public_html/www; rewrite ^ /upload.php last; } }

Thanks for the help Lukas Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268988,269054#msg-269054

From francis at daoine.org Fri Aug 19 12:24:31 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 19 Aug 2016 13:24:31 +0100 Subject: =?UTF-8?Q?Re=3A_IPv6_only_resolver_doesn=E2=80=99t_work?= In-Reply-To: <2905c99e-94dd-5dd7-6cca-41c86d0fe1a0@swordarmor.fr> References: <2905c99e-94dd-5dd7-6cca-41c86d0fe1a0@swordarmor.fr> Message-ID: <20160819122431.GP12280@daoine.org>

On Fri, Aug 19, 2016 at 02:12:28PM +0200, Alarig Le Lay wrote: Hi there,

> bulbizarre ~ # cat /etc/resolv.conf

nginx doesn't use /etc/resolv.conf (directly, at any rate).

> I have some error messages about "no resolver defined":

http://nginx.org/r/resolver The answer is to tell nginx what resolver it should use. The *reason* is related to blocking function calls, I think. But the reason doesn't really matter, if all you want to do is have it working.
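Concretely, pointing nginx at the same IPv6 nameserver that resolv.conf lists would look something like this (a sketch; the valid= and timeout values are arbitrary):

```nginx
http {
    # nginx does not read /etc/resolv.conf; its resolver must be named here.
    # IPv6 addresses go in square brackets; valid= overrides the DNS TTL.
    resolver [2001:470:1f13:138::1] valid=300s;
    resolver_timeout 5s;
}
```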
> But, my resolver is perfectly working:

nginx doesn't know about your resolver, only the one it was told to use. Cheers, f -- Francis Daly francis at daoine.org

From vbart at nginx.com Fri Aug 19 12:30:02 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 19 Aug 2016 15:30:02 +0300 Subject: AW: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: <3c2a91fea63e079643d09acdaf51cc46.NginxMailingListEnglish@forum.nginx.org> References: <3c2a91fea63e079643d09acdaf51cc46.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2929017.3ZJ5NG8KWU@vbart-workstation>

On Friday 19 August 2016 08:22:42 khav wrote: [..] > server { > listen 80; > server_name mywebsite.com; > location = /upload { > #module settings goes here > upload_pass @uploadhandler; > } > > location @uploadhandler { > root /var/www/mywebsite.com/public_html/www; > rewrite ^ /upload.php last; > } > > } [..]

There's no location to handle your "/upload.php". wbr, Valentin V. Bartenev

From jsharan15 at gmail.com Fri Aug 19 12:37:46 2016 From: jsharan15 at gmail.com (Sharan J) Date: Fri, 19 Aug 2016 18:07:46 +0530 Subject: Slow read attack in HTTP/2 In-Reply-To: <5620176.LHRUYDNH9X@vbart-workstation> References: <5620176.LHRUYDNH9X@vbart-workstation> Message-ID:

Hi, Thanks for the response. Would like to know what happens in the following scenario: the client sets its initial window size to be very small and requests a large amount of data. It updates the window size every time it gets exhausted, with a small increment (so send_timeout won't trigger, as writes always happen, but only in very small amounts). In this case, won't the connection remain open until the server has flushed all the data to the client with the very small window size? If the client opens many such connections with many streams, each requesting very large data, won't that cause a DoS? Thanks, Sharan On Fri, Aug 19, 2016 at 5:28 PM, Valentin V.
Bartenev wrote: > On Friday 19 August 2016 17:06:41 Sharan J wrote: > > Hi, > > > > Would like to know what timeouts should be configured to mitigate slow > read > > attack in HTTP/2. > > > > A quote from the commit: > > | Now almost all the request timeouts work like in HTTP/1.x connections, > so > | the "client_header_timeout", "client_body_timeout", and "send_timeout" > are > | respected. These timeouts close the request. > > and the documentation links: > > http://nginx.org/r/client_header_timeout > http://nginx.org/r/client_body_timeout > http://nginx.org/r/send_timeout > > > > Referred -> > > https://trac.nginx.org/nginx/changeset/4ba91a4c66a3010e50b84fc73f05e8 > 4619396885/nginx?_ga=1.129092111.226709851.1453970886 > > > > Could not understand what you have done when all streams are stuck on > > exhausted connection or stream windows. Please can you explain me the > same. > [..] > > Each stream has its own timeout configured by the directives mentioned > above. > If there's no progress on a stream during one of these timeouts then the > stream > is closed. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Aug 19 13:48:53 2016 From: nginx-forum at forum.nginx.org (khav) Date: Fri, 19 Aug 2016 09:48:53 -0400 Subject: AW: AW: AW: Disabling HTTP/2 for a specific location In-Reply-To: <2929017.3ZJ5NG8KWU@vbart-workstation> References: <2929017.3ZJ5NG8KWU@vbart-workstation> Message-ID: Ah that was stupid... 
I forgot to copy the php location from my other server block Posted at Nginx Forum: https://forum.nginx.org/read.php?2,268988,269058#msg-269058

From vbart at nginx.com Fri Aug 19 13:51:59 2016 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Fri, 19 Aug 2016 16:51:59 +0300 Subject: Slow read attack in HTTP/2 In-Reply-To: References: <5620176.LHRUYDNH9X@vbart-workstation> Message-ID: <1620511.tUvAu4URuR@vbart-workstation>

On Friday 19 August 2016 18:07:46 Sharan J wrote: > Hi, > > Thanks for the response. > > Would like to know what happens in the following scenario, > > Client sets its initial congestion window size to be very small and > requests for a large data. It updates the window size everytime when it > gets exhausted with a small increment (so send_timeout wont happen as > writes happens always but in a very small amount). In this case won't the > connection remain until the server flushes all the data to the client which > has very less window size?

The same is true with HTTP/1.x, there's no difference.

> > If the client opens many such connections with many streams, each > requesting for a very large data, then won't it cause DOS? >

You should configure other limits to prevent the client from requesting unlimited amounts of resources at the same time. wbr, Valentin V.
Bartenev) Date: Fri, 19 Aug 2016 17:05:48 +0300 Subject: Can nginx has the capability to dynamically limit the connections when the upstream server has full load In-Reply-To: <20160819012546.DE38FAC009C@webmail.sinamail.sina.com.cn> References: <20160819012546.DE38FAC009C@webmail.sinamail.sina.com.cn> Message-ID: <2108667.OCyipSIG7D@vbart-workstation>

On Friday 19 August 2016 09:25:46 tjlp at sina.com wrote: > Hi, > > I have a scenario: my backend servers provide URL to query whether this sever can accept new connections (the http response body is "true" or "false"). This server also has configuration for the max allowed session. Now I want to use Nginx as load balancer. Does Nginx provide such kind of load balancing configuration: when a backend server can't accept new connection, the new session won't be created for this full server? >

Just curious, why is it done this way? If your server doesn't want to accept new connections, then why doesn't it just reject them with some error code? The commercial version of nginx is able to query such a URL. See for details: http://nginx.org/r/health_check wbr, Valentin V. Bartenev

From r1ch+nginx at teamliquid.net Fri Aug 19 19:21:47 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 19 Aug 2016 21:21:47 +0200 Subject: No HTTPS on nginx.org by default Message-ID:

Hello, I noticed that the PGP key used for signing the Debian release packages recently expired. I went to download the new one and noticed that nginx.org wasn't using HTTPS by default. Manually entering an https URL works as expected, although some pages have hard-coded http links in them. Is there a reason that the website isn't using HTTPS and STS / HPKP? It would help mitigate potential MITM attacks, especially on precompiled binaries and PGP key downloads. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gfrankliu at gmail.com Fri Aug 19 19:54:22 2016 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 19 Aug 2016 12:54:22 -0700 Subject: upstream status Message-ID:

Hi, I am using nginx as a proxy with two upstream servers. In the access log, I log the upstream_address, upstream_status, status (downstream), a special response header from upstream, etc. A few times I see in the log: upstream_address: server1:port, server2:port with upstream_status: 504, 502 status: 502 special header: - Since the special response header is missing, I assume nginx didn't get any responses from either upstream server. Why would upstream_status show 504 and 502? Those codes must not have come from upstream; are they generated by nginx itself? I thought it would be -, -, since no upstream_status was returned from upstream. Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tjlp at sina.com Sat Aug 20 08:17:28 2016 From: tjlp at sina.com (tjlp at sina.com) Date: Sat, 20 Aug 2016 16:17:28 +0800 Subject: =?UTF-8?Q?=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_Can_nginx_has_the_capability_to?= =?UTF-8?Q?_dynamically_limit_the_connections_when_the_upstream_server_has?= =?UTF-8?Q?_full_load?= Message-ID: <20160820081728.E175B10200D0@webmail.sinamail.sina.com.cn>

Hi, Bartenev, Our backend server is an old product that has existed for more than 20 years. When a client logs in to the backend server, a session is created. The session is terminated when the client logs out. To prevent the server from running out of memory, we can configure the max allowed session number for one backend server instance. So, in the backend server instance, when the max session number is reached, no more clients can log in; however, it can still serve the clients that have already logged in. So, this is different from a health check. My understanding is that full load is not equal to unhealthy. As far as I know, F5 hardware seems to support this kind of requirement.
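A note for readers: a fixed per-server session cap like the one described can be approximated with the max_conns parameter of the upstream server directive (at the time of this thread a commercial-version feature; check your version's documentation). A sketch with made-up addresses and limit:

```nginx
upstream backend {
    # stop routing new connections to a server once it has 100 active ones
    server 10.0.0.1:8080 max_conns=100;
    server 10.0.0.2:8080 max_conns=100;
}
```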
Thanks Liu Peng

----- Original Message ----- From: "Valentin V. Bartenev" To: nginx at nginx.org Subject: Re: Can nginx has the capability to dynamically limit the connections when the upstream server has full load Date: 2016-08-19 22:06

On Friday 19 August 2016 09:25:46 tjlp at sina.com wrote: > Hi, > > I have a scenario: my backend servers provide URL to query whether this sever can accept new connections (the http response body is "true" or "false"). This server also has configuration for the max allowed session. Now I want to use Nginx as load balancer. Does Nginx provide such kind of load balancing configuration: when a backend server can't accept new connection, the new session won't be created for this full server? >

Just curious, why is it done this way? If your server doesn't want to accept new connections, then why it doesn't just reject them with some error code? The commercial version of nginx is able to query such kind of URL. See for details: http://nginx.org/r/health_check wbr, Valentin V. Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL:

From h.aboulfeth at genious.Net Sun Aug 21 10:53:01 2016 From: h.aboulfeth at genious.Net (Hamza Aboulfeth) Date: Sun, 21 Aug 2016 11:53:01 +0100 Subject: Weird problem with redirects In-Reply-To: <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> References: <57895C6F.6080707@genious.Net> <20160716074719.GR12280@daoine.org> <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> Message-ID: <57B9880D.8030705@genious.Net>

Hello everyone, I finally understand what's going on here... http://www.trendmicro.com/vinfo/us/threat-encyclopedia/vulnerability/10236/python-http-proxy-header-injection-vulnerability-cve20161000110 I have been a victim of this attack; nginx is also affected. Is there any patch for this new vulnerability?
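For context, the commonly published nginx-side mitigation for this vulnerability (httpoxy, CVE-2016-1000110) is to clear the non-standard Proxy request header before handing requests to backends; roughly (server name and upstream are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder

    location / {
        # "Proxy" is not a registered request header; blank it so CGI-style
        # backends never translate it into an HTTP_PROXY environment variable
        proxy_set_header Proxy "";
        proxy_pass http://backend;   # placeholder upstream
    }
}
```

For FastCGI backends the equivalent is `fastcgi_param HTTP_PROXY "";` in the relevant location.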
Thank you, Hamza > Hamza Aboulfeth > August 13, 2016 at 6:36 PM > Hello, > > We have formatted the server and installed everything over again, a > week later the same problem occurred. All redirects are actually sent > from time to time to another host: > > [root at genious106 ~]# curl -IL -H "host: hespress.com" xx.xx.xx.xx > HTTP/1.1 301 Moved Permanently > Server: nginx/1.10.1 > Date: Sat, 13 Aug 2016 13:31:28 GMT > Content-Type: text/html > Content-Length: 185 > Connection: keep-alive > Location: http://1755118211 > .com/ > dbg-redirect: nginx > > HTTP/1.1 302 Found > Server: nginx/1.2.1 > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=iso-8859-1 > Connection: keep-alive > Set-Cookie: > orgje=2PUrADQAAgABACUhr1f__yUhr1dAAAEAAAAlIa9XMgACAAEAJSGvV___JSGvVwA-; expires=Sun, > 13-Aug-2017 13:31:17 GMT; path=/; domain=traffsell.com > Location: http://triuch.com/6lo1I > > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=utf-8 > Connection: keep-alive > Vary: Accept-Encoding > Vary: Accept-Encoding > > [root at genious106 ~]# > > Even php redirect requests are rerouted. > > Please advice, > Hamza > > Francis Daly > July 16, 2016 at 8:47 AM > On Fri, Jul 15, 2016 at 10:58:07PM +0100, Hamza Aboulfeth wrote: > > Hi there, > > > If that x.x.x.x is enough to make sure that this request gets to your > nginx, then your nginx config is probably involved. > > If this only started yesterday, then changes since yesterday (or since > your nginx was last restarted before yesterday) are probably most > interesting. > > And as a very long shot: if you can "tcpdump" to see that nginx is sending > one thing, but the client is receiving something else, then you'll want > to look outside nginx at something else interfering with the traffic. > > Good luck with it, > > f > Hamza Aboulfeth > July 15, 2016 at 10:58 PM > Hello, > > I have a weird problem that suddenly appeared on a client's website > yesterday. 
We have a redirection from non www to www and sometimes the > redirection sends somewhere else: > > [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x > HTTP/1.1 301 Moved Permanently > Server: nginx/1.11.2 > Date: Fri, 15 Jul 2016 21:54:06 GMT > Content-Type: text/html > Content-Length: 185 > Connection: keep-alive > Location: http://1755118213 > .com/ > dbg-redirect: nginx > > HTTP/1.1 302 Found > Server: nginx/1.2.1 > Date: Fri, 15 Jul 2016 21:52:37 GMT > Content-Type: text/html; charset=iso-8859-1 > Connection: keep-alive > Set-Cookie: orgje=JbgbADQAAgABACVbiVf__yVbiVdAAAEAAAAlW4lXAA--; > expires=Sat, 15-Jul-2017 21:52:37 GMT; path=/; domain=traffsell.com > Location: http://m.xxx.com/ > > HTTP/1.1 200 OK > Date: Fri, 15 Jul 2016 21:52:37 GMT > Content-Type: text/html; charset=UTF-8 > Connection: keep-alive > Set-Cookie: __cfduid=d5624eb7a789e21f082873681ec36a41b1468619557; > expires=Sat, 15-Jul-17 21:52:37 GMT; path=/; domain=.hibapress.com; > HttpOnly > X-Powered-By: PHP/5.3.27 > X-LiteSpeed-Cache: hit > Vary: Accept-Encoding > X-Turbo-Charged-By: LiteSpeed > Server: cloudflare-nginx > CF-RAY: 2c307148667c3f77-YUL > > Sometimes it acts as it should sometimes it redirect somewhere else > > If you have any clue about what's happening, do help me :) > > Thank you, > Hamza > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lagged at gmail.com Sun Aug 21 11:32:37 2016 From: lagged at gmail.com (Andrei) Date: Sun, 21 Aug 2016 14:32:37 +0300 Subject: Weird problem with redirects In-Reply-To: <57B9880D.8030705@genious.Net> References: <57895C6F.6080707@genious.Net> <20160716074719.GR12280@daoine.org> <13F76B0F-8FD1-4BF1-8B9A-0F97292DE76F@genious.net> <57B9880D.8030705@genious.Net> Message-ID: Have you read over https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/? On Sun, Aug 21, 2016 at 1:53 PM, Hamza Aboulfeth wrote: > Hello everyone, > > I finally understand what's going on here... > > http://www.trendmicro.com/vinfo/us/threat-encyclopedia/ > vulnerability/10236/python-http-proxy-header-injection- > vulnerability-cve20161000110 > > I have been a victim of this attack, nginx is also affected, is there any > patch for this new vulnerability? > > Thank you, > Hamza > > > Hamza Aboulfeth > August 13, 2016 at 6:36 PM > Hello, > > We have formatted the server and installed everything over again, a week > later the same problem occurred. 
All redirects are actually sent from time > to time to another host: > > [root at genious106 ~]# curl -IL -H "host: hespress.com" xx.xx.xx.xx > HTTP/1.1 301 Moved Permanently > Server: nginx/1.10.1 > Date: Sat, 13 Aug 2016 13:31:28 GMT > Content-Type: text/html > Content-Length: 185 > Connection: keep-alive > Location: http://1755118211 > .com/ > dbg-redirect: nginx > > HTTP/1.1 302 Found > Server: nginx/1.2.1 > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=iso-8859-1 > Connection: keep-alive > Set-Cookie: orgje=2PUrADQAAgABACUhr1f__yUhr1dAAAEAAAAlIa9XMgACAAEAJSGvV___JSGvVwA-; > expires=Sun, 13-Aug-2017 13:31:17 GMT; path=/; domain=traffsell.com > Location: http://triuch.com/6lo1I > > HTTP/1.1 200 OK > Server: nginx > Date: Sat, 13 Aug 2016 13:31:17 GMT > Content-Type: text/html; charset=utf-8 > Connection: keep-alive > Vary: Accept-Encoding > Vary: Accept-Encoding > > [root at genious106 ~]# > > Even php redirect requests are rerouted. > > Please advice, > Hamza > > Francis Daly > July 16, 2016 at 8:47 AM > On Fri, Jul 15, 2016 at 10:58:07PM +0100, Hamza Aboulfeth wrote: > > Hi there, > > > If that x.x.x.x is enough to make sure that this request gets to your > nginx, then your nginx config is probably involved. > > If this only started yesterday, then changes since yesterday (or since > your nginx was last restarted before yesterday) are probably most > interesting. > > And as a very long shot: if you can "tcpdump" to see that nginx is sending > one thing, but the client is receiving something else, then you'll want > to look outside nginx at something else interfering with the traffic. > > Good luck with it, > > f > Hamza Aboulfeth > July 15, 2016 at 10:58 PM > Hello, > > I have a weird problem that suddenly appeared on a client's website > yesterday. 
We have a redirection from non www to www and sometimes the > redirection sends somewhere else: > > [root at genious33 nginx-1.11.2]# curl -IL -H "host: hespress.com" x.x.x.x > HTTP/1.1 301 Moved Permanently > Server: nginx/1.11.2 > Date: Fri, 15 Jul 2016 21:54:06 GMT > Content-Type: text/html > Content-Length: 185 > Connection: keep-alive > Location: http://1755118213 > .com/ > dbg-redirect: nginx > > HTTP/1.1 302 Found > Server: nginx/1.2.1 > Date: Fri, 15 Jul 2016 21:52:37 GMT > Content-Type: text/html; charset=iso-8859-1 > Connection: keep-alive > Set-Cookie: orgje=JbgbADQAAgABACVbiVf__yVbiVdAAAEAAAAlW4lXAA--; > expires=Sat, 15-Jul-2017 21:52:37 GMT; path=/; domain=traffsell.com > Location: http://m.xxx.com/ > > HTTP/1.1 200 OK > Date: Fri, 15 Jul 2016 21:52:37 GMT > Content-Type: text/html; charset=UTF-8 > Connection: keep-alive > Set-Cookie: __cfduid=d5624eb7a789e21f082873681ec36a41b1468619557; > expires=Sat, 15-Jul-17 21:52:37 GMT; path=/; domain=.hibapress.com; > HttpOnly > X-Powered-By: PHP/5.3.27 > X-LiteSpeed-Cache: hit > Vary: Accept-Encoding > X-Turbo-Charged-By: LiteSpeed > Server: cloudflare-nginx > CF-RAY: 2c307148667c3f77-YUL > > Sometimes it acts as it should sometimes it redirect somewhere else > > If you have any clue about what's happening, do help me :) > > Thank you, > Hamza > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun Aug 21 13:56:09 2016 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sun, 21 Aug 2016 15:56:09 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: References: Message-ID:

It is surprising, since I remember Ilya Grigorik gave a talk about TLS during the first ever nginx conference in 2014: https://www.youtube.com/watch?v=iHxD-G0YjiU https://istlsfastyet.com/ Thus, there is no reason for not going full-HTTPS in delivering Web pages. --- *B. R.*

On Fri, Aug 19, 2016 at 9:21 PM, Richard Stanway wrote: > Hello, > I noticed that the PGP key used for signing the Debian release packages > recently expired. I went to download the new one and noticed that > nginx.org wasn't using HTTPS by default. Manually entering a https URL > works as expected, although some pages have hard coded http links in them. > > Is there a reason that the website isn't using HTTPS and STS / HPKP? It > would help mitigate potential MITM attacks especially on precompiled > binaries and PGP key downloads. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From reallfqq-nginx at yahoo.fr Sun Aug 21 14:08:04 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 21 Aug 2016 16:08:04 +0200 Subject: upstream status In-Reply-To: References: Message-ID:

As per the docs, this variable contains all the status codes returned by each upstream server interrogated. From what I understood, server1 returned 504 and server2 returned 502. Those statuses are included in what the proxy_next_upstream directive describes as an 'unsuccessful attempt', thus it seems every server of your upstream group has been tried (the default number of attempts is 1, as per the server directive). nginx then returned the result of the last attempt, resulting in a 502 answer passed back to the client. I do not know what a 'special response' is. Custom header?
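To make that per-attempt behaviour visible in the logs, a log_format along these lines exposes the comma-separated per-server values (a sketch; the format name is made up):

```nginx
# $upstream_addr and $upstream_status list every server attempted for the
# request, comma-separated, in the order the attempts were made
log_format upstream_debug '$remote_addr "$request" status=$status '
                          'upstream=$upstream_addr '
                          'upstream_status=$upstream_status';
access_log /var/log/nginx/access.log upstream_debug;
```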
I also do not get why you are surprised that 5xx responses end up being shown to the client. They are perfectly valid HTTP status codes, indicating a server error, and are shown because no upstream server returned a 'successful' status code. --- *B. R.*

On Fri, Aug 19, 2016 at 9:54 PM, Frank Liu wrote: > Hi, > > I am using nginx as proxy with two upstream servers. In the access log, I > log the upstream_address, upstream_status, status (downstream), a special > response header from upstream, etc. > > A few times I see in the log upstream_address: server1:port, server2:port > with upstream_status: 504, 502 status: 502 special header: - > Since there is the special response header, I assume nginx didn't get any > responses from either upstream servers. Why would upstream_status shows 504 > and 502? Those code must not come from upstream, is it generated by nginx > self? I thought it would be -, - since there are no upstream_status > returned from upstream. > > Thanks! > Frank > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Sun Aug 21 16:45:13 2016 From: nginx-forum at forum.nginx.org (HuMaN-BiEnG) Date: Sun, 21 Aug 2016 12:45:13 -0400 Subject: open cart control panel keeps redirecting asking for password Message-ID: <991498a4f852126684ef6729793ffd47.NginxMailingListEnglish@forum.nginx.org>

Hello there, I have nginx newly installed as a reverse proxy in front of Apache, but I found a strange problem: when I try to log in to the OpenCart control panel, it keeps redirecting me back to the control panel without letting me log in. The authentication information I used is correct, and after I disabled nginx it works without problems. Here are the contents of the configuration files that I use:

# nginx.conf contents user nginx; worker_processes 2; worker_rlimit_nofile 4000; thread_pool default threads=32 max_queue=65536; pid /usr/local/nginx/logs/nginx.pid; events { use epoll; worker_connections 4000; multi_accept On; accept_mutex off; } http { aio threads=default; access_log /dev/null; error_log /dev/null; server_tokens Off; log_format main '$remote_addr - $remote_user [$time_local] $status ' '"$request" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; include /usr/local/nginx/conf/mime.types; default_type application/octet-stream; client_body_temp_path /tmp; proxy_cache_path /home/nginx_cache levels=1:2 keys_zone=nginx-cache:1m max_size=4g inactive=6h; open_file_cache max=3000 inactive=6h; open_file_cache_valid 6h; open_file_cache_min_uses 3; open_file_cache_errors Off; proxy_temp_path /home/nginx_cache/tmp; server_names_hash_max_size 512; server_names_hash_bucket_size 512; server_name_in_redirect On; port_in_redirect Off; tcp_nodelay On; tcp_nopush On; sendfile on; sendfile_max_chunk 512k; keepalive_timeout 5; keepalive_requests 80; reset_timedout_connection On; if_modified_since before; gzip On; gzip_static Off; gzip_buffers 16 8k; gzip_comp_level 6; gzip_disable "msie6"; gzip_min_length 1500; gzip_proxied any; gzip_types
application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-javascript application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/javascript text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy text/x-js text/xml; gzip_vary on; server { server_name xxxxx; root /usr/local/nginx/html; listen xx.xx.xx.xx:80; location / { include /usr/local/nginx/conf/proxy.conf; proxy_pass http://xx.xx.xx.xx:8080; } } include /usr/local/nginx/conf/vhost.conf; }

# proxy.conf contents proxy_buffering On; proxy_cache_valid 404 3h; proxy_cache_valid 500 502 504 406 3h; proxy_cache_valid 200 6h; proxy_buffers 32 4m; proxy_busy_buffers_size 25m; proxy_buffer_size 512k; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache nginx-cache; proxy_cache_key "$host$request_uri"; proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie"; proxy_hide_header "Set-Cookie"; proxy_cache_min_uses 3; proxy_max_temp_file_size 0; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 50m; client_body_buffer_size 4m; proxy_connect_timeout 300s; proxy_read_timeout 300s; proxy_send_timeout 300s; proxy_ignore_client_abort off; proxy_intercept_errors off; proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment; proxy_cache_bypass $http_pragma $http_authorization;

I am suspicious of the Set-Cookie directives; please, could anyone help me? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269073,269073#msg-269073

From max at mxcrypt.com Sun Aug 21 17:43:43 2016 From: max at mxcrypt.com (Maxim Khitrov) Date: Sun, 21 Aug 2016 13:43:43 -0400 Subject: limit_except
ignored Message-ID: Hi, I'm running nginx v1.9.10 on OpenBSD with the following server definition: server { listen 80; server_name example.com; location / { deny all; limit_except POST { allow all; proxy_pass http://10.1.2.3; } proxy_set_header Host $host; } } To my surprise, all GET requests are allowed and are passed to the backend server. Is this a bug or am I doing something stupid? In the final configuration I want to only allow GET requests, but I'm limiting to POST for now to simplify testing. -Max From max at mxcrypt.com Sun Aug 21 18:27:34 2016 From: max at mxcrypt.com (Maxim Khitrov) Date: Sun, 21 Aug 2016 14:27:34 -0400 Subject: limit_except ignored In-Reply-To: References: Message-ID: On Sun, Aug 21, 2016 at 1:43 PM, Maxim Khitrov wrote: > Hi, > > I'm running nginx v1.9.10 on OpenBSD with the following server definition: > > server { > listen 80; > server_name example.com; > location / { > deny all; > limit_except POST { > allow all; > proxy_pass http://10.1.2.3; > } > proxy_set_header Host $host; > } > } > > To my surprise, all GET requests are allowed and are passed to the > backend server. Is this a bug or am I doing something stupid? In the > final configuration I want to only allow GET requests, but I'm > limiting to POST for now to simplify testing. > > -Max I got it working by swapping 'allow' and 'deny' directives and moving proxy_pass out of limit_except. I was just confused by the documentation for limit_except. Sorry for the noise. From lists at lazygranch.com Mon Aug 22 02:02:04 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sun, 21 Aug 2016 19:02:04 -0700 Subject: Problems with custom log file format Message-ID: <20160821190204.18aba959@linux-h57q.site> Nginx 1.10.1,2 FreeBSD 10.2-RELEASE-p18 #0: Sat May 28 08:53:43 UTC 2016 I'm using the "map" module to detect obvious hacking by detecting keywords. (Yes, I know about Naxsi.) Finding the really dumb hacks is easy. 
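A minimal sketch of such a map-based keyword check, for readers unfamiliar with the technique being described (the variable name and keyword list here are hypothetical illustrations, not taken from the poster's configuration):

```nginx
# http-level context: flag request URIs that contain common probe keywords.
map $request_uri $is_probe {
    default          0;
    ~*wp-login\.php  1;
    ~*phpmyadmin     1;
    ~*/manager/html  1;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # 444 is nginx's special "close the connection without a response" code.
        if ($is_probe) {
            return 444;
        }
        root /var/www/html;
    }
}
```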
I give them a 444 return code with the idea being I can run a script on the log file and block these IPs. (Yes, I know about swatch.) My problem is the access.log doesn't get formatted all the time. I have many examples, but this is representative. First group has 444 at the start of the line (custom format). The next group uses the default format. ---------------------------------- 444 111.91.62.144 - - [21/Aug/2016:09:31:50 +0000] "GET /wp-login.php HTTP/1.1" 0 "-" "Mozilla/5.0 (Windows NT 6.1; WO W64; rv:40.0) Gecko/20100101 Firefox/40.1" "-" 444 175.123.98.240 - - [21/Aug/2016:04:39:44 +0000] "GET /manager/html HTTP/1.1" 0 "-" "Mozilla/5.0 (Windows NT 5.1; r v:5.0) Gecko/20100101 Firefox/5.0" "-" 444 103.253.14.43 - - [21/Aug/2016:05:43:15 +0000] "GET /admin/config.php HTTP/1.1" 0 "-" "python-requests/2.10.0" "-" 444 185.130.6.49 - - [21/Aug/2016:14:23:09 +0000] "GET //phpMyAdmin/scripts/setup.php HTTP/1.1" 0 "-" "-" "-" 176.26.5.107 - - [21/Aug/2016:09:43:20 +0000] "GET /wp-login.php HTTP/1.1" 444 0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW 64; rv:40.0) Gecko/20100101 Firefox/40.1" 195.90.204.103 - - [21/Aug/2016:17:09:11 +0000] "GET /wordpress/wp-admin/ HTTP/1.1" 444 0 "-" "-" -------------------------- I'm putting the return code first to simplify my scripting that I will use to feed blocking in ipfw. My nginx.conf follows (abbreviated). The email may mangle the formatting a bit. 
------------- http { log_format main '$status $remote_addr - $remote_user [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main --------------------------- From jsharan15 at gmail.com Mon Aug 22 07:10:46 2016 From: jsharan15 at gmail.com (Sharan J) Date: Mon, 22 Aug 2016 12:40:46 +0530 Subject: Slow read attack in HTTP/2 In-Reply-To: <1620511.tUvAu4URuR@vbart-workstation> References: <5620176.LHRUYDNH9X@vbart-workstation> <1620511.tUvAu4URuR@vbart-workstation> Message-ID: Hi, The scenario which I mentioned was only tested and reported by imperva and Nginx has said that they have solved this slow read issue. References: http://www.imperva.com/docs/Imperva_HII_HTTP2.pdf https://www.nginx.com/blog/the-imperva-http2-vulnerability-report-and-nginx/ But as you say, the problem still persists? We can prevent a single client from requesting large amount of resources but what if the attacker uses multiple machines to make an attack? Is there any check in nginx to examine the client's initial congestion window setting and close the connection if it says initial congestion window size to be less than 65,535 bytes (as mentioned in RFC as this to be the minimum initial congestion window size). P.S. Please correct me if I misunderstood. Thank you for your responses :) Thanks, Sharan On Fri, Aug 19, 2016 at 7:21 PM, Valentin V. Bartenev wrote: > On Friday 19 August 2016 18:07:46 Sharan J wrote: > > Hi, > > > > Thanks for the response. > > > > Would like to know what happens in the following scenario, > > > > Client sets its initial congestion window size to be very small and > > requests for a large data. It updates the window size everytime when it > > gets exhausted with a small increment (so send_timeout wont happen as > > writes happens always but in a very small amount).
In this case won't the > > connection remain until the server flushes all the data to the client > which > > has very less window size? > > The same is true with HTTP/1.x, there's no difference. > > > > > If the client opens many such connections with many streams, each > > requesting for a very large data, then won't it cause DOS? > > > > You should configure other limits to prevent client from requesting > unlimited amounts of resources at the same time. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Aug 22 10:12:08 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 22 Aug 2016 13:12:08 +0300 Subject: Slow read attack in HTTP/2 In-Reply-To: References: <1620511.tUvAu4URuR@vbart-workstation> Message-ID: <1494597.xivSizsKsP@vbart-workstation> On Monday 22 August 2016 12:40:46 Sharan J wrote: > Hi, > > The scenario which I mentioned was only tested and reported by imperva and > Nginx has said that they have solved this slow read issue. > References: > http://www.imperva.com/docs/Imperva_HII_HTTP2.pdf > https://www.nginx.com/blog/the-imperva-http2-vulnerability-report-and-nginx/ > [..] They spotted a connection leak, that was already known as ticket #626: https://trac.nginx.org/nginx/ticket/626 and it was solved in 1.9.12. Nothing more. > But as you say, the problem still persists? We can prevent a single client > from requesting large amount of resources but what if the attacker uses > multiple machines to make an attack? You can configure any limits using limit_req and limit_conn modules: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html and protect your server even if attacker uses multiple machines. 
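As an illustration of the kind of limits being suggested (zone names, sizes, and rates below are arbitrary examples, not recommendations; certificate directives are omitted):

```nginx
# http-level context: one shared-memory zone keyed by client address
# for request rate, one for concurrent connections.
limit_req_zone  $binary_remote_addr zone=req_perip:10m  rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_perip:10m;

server {
    listen 443 ssl http2;
    server_name example.com;

    location / {
        limit_req  zone=req_perip burst=20;  # queue short bursts, reject the excess
        limit_conn conn_perip 10;            # at most 10 concurrent connections per IP
        root /var/www/html;
    }
}
```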
> Is there any check in nginx to examine the client's initial congestion > window setting and close the connection if it says initial congestion > window size to be less than 65,535 bytes (as mentioned in RFC > as this to be the > minimum initial congestion window size). 65,535 bytes is just the default, not the minimum (and in fact, nginx may use even zero initial window). Such check is useless, since a client can exhaust any initial window and then stop sending or receiving. wbr, Valentin V. Bartenev From vbart at nginx.com Mon Aug 22 10:31:30 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 22 Aug 2016 13:31:30 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: Message-ID: <2626990.y0zAfJzpm4@vbart-workstation> On Sunday 21 August 2016 15:56:09 B.R. wrote: > It is surprising, since I remember Ilya Grigorik made a talk about TLS > during the first ever nginx conf in 2014: > https://www.youtube.com/watch?v=iHxD-G0YjiU > https://istlsfastyet.com/ It's just Ilya's opinion. You are free to agree or not. > > Thus, there is no reason for not going full-HTTPS in delivering Web pages. There are at least two reasons to not use HTTPS: 1. Provide easy access to information for people, who can't use encryption by political, legal, or technical reasons. 2. Don't waste resources on encryption, and thus save our planet. Please, don't be a TLS despot and let people to have a choice to use encryption or not. I think the situation when I can't download new version of OpenSSL using old version of OpenSSL is ridiculous, but they have configured openssl.org that way. How I supposed to use Internet then? wbr, Valentin V. Bartenev From r1ch+nginx at teamliquid.net Mon Aug 22 15:40:02 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 22 Aug 2016 17:40:02 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: <2626990.y0zAfJzpm4@vbart-workstation> References: <2626990.y0zAfJzpm4@vbart-workstation> Message-ID: 1. 
You could provide insecure.nginx.org mirror for such people, make nginx.org secure by default. 2. Modern server CPUs are already extremely energy efficient, TLS adds negligible load. See https://istlsfastyet.com/ On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev wrote: > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > It is surprising, since I remember Ilya Grigorik made a talk about TLS > > during the first ever nginx conf in 2014: > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > https://istlsfastyet.com/ > > It's just Ilya's opinion. You are free to agree or not. > > > > > > Thus, there is no reason for not going full-HTTPS in delivering Web > pages. > > There are at least two reasons to not use HTTPS: > > 1. Provide easy access to information for people, who can't use encryption > by political, legal, or technical reasons. > > 2. Don't waste resources on encryption, and thus save our planet. > > Please, don't be a TLS despot and let people to have a choice to use > encryption > or not. > > I think the situation when I can't download new version of OpenSSL using > old > version of OpenSSL is ridiculous, but they have configured openssl.org > that way. > How I supposed to use Internet then? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Aug 22 15:44:58 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 22 Aug 2016 18:44:58 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> Message-ID: On 8/22/16 6:40 PM, Richard Stanway wrote: > 1. You could provide insecure.nginx.org > mirror for such people, make nginx.org secure by > default. > No, thanks. It is secure by default and HTTPS by default doesn't add any value. > 2. 
Modern server CPUs are already extremely energy efficient, TLS > adds negligible load. See https://istlsfastyet.com/ > Sorry, failed to find any power consumption bechnmarks here. > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > wrote: > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > It is surprising, since I remember Ilya Grigorik made a talk about TLS > > during the first ever nginx conf in 2014: > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > https://istlsfastyet.com/ > > It's just Ilya's opinion. You are free to agree or not. > > > > > > Thus, there is no reason for not going full-HTTPS in delivering Web pages. > > There are at least two reasons to not use HTTPS: > > 1. Provide easy access to information for people, who can't use > encryption > by political, legal, or technical reasons. > > 2. Don't waste resources on encryption, and thus save our planet. > > Please, don't be a TLS despot and let people to have a choice to > use encryption > or not. > > I think the situation when I can't download new version of > OpenSSL using old > version of OpenSSL is ridiculous, but they have configured > openssl.org that way. > How I supposed to use Internet then? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov Join us at nginx.conf, Sept. 
7-9, Austin, TX: http://nginx.com/nginxconf From rainer at ultra-secure.de Mon Aug 22 15:58:47 2016 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Mon, 22 Aug 2016 17:58:47 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> Message-ID: <1de56fafd97ffddef89e62f622528032@ultra-secure.de> Am 2016-08-22 17:44, schrieb Maxim Konovalov: > On 8/22/16 6:40 PM, Richard Stanway wrote: >> 1. You could provide insecure.nginx.org >> mirror for such people, make nginx.org secure by >> default. >> > No, thanks. It is secure by default and HTTPS by default doesn't > add any value. > >> 2. Modern server CPUs are already extremely energy efficient, TLS >> adds negligible load. See https://istlsfastyet.com/ >> > Sorry, failed to find any power consumption bechnmarks here. Well, in theory, a nation-state or someone in a user's network-path could probably inject a trojaned binary/source-file (and also replace the content of the checksum-file etc.). But it's IMO not worth arguing about these things. Also, an asteroid could hit earth and everything could be over next week. nginx doesn't provide an auto-update mechanism that stupidly downloads and accepts all and everything somebody makes available under some spoofed address. From dewanggaba at xtremenitro.org Mon Aug 22 16:03:55 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 22 Aug 2016 23:03:55 +0700 Subject: No HTTPS on nginx.org by default In-Reply-To: <1de56fafd97ffddef89e62f622528032@ultra-secure.de> References: <2626990.y0zAfJzpm4@vbart-workstation> <1de56fafd97ffddef89e62f622528032@ultra-secure.de> Message-ID: Hello! On 08/22/2016 10:58 PM, rainer at ultra-secure.de wrote: > > nginx doesn't provide an auto-update mechanism that stupidly downloads > and accepts all and everything somebody makes available under some > spoofed address. You can use PGP key[1] to verified the binary was correct or "injected" or "spoofed". 
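As a concrete illustration of that verification flow (the version number is an example; substitute the current release):

```sh
# Import the nginx signing key (also distributed via other channels).
curl -fsSL https://nginx.org/keys/nginx_signing.key | gpg --import

# Fetch a release tarball together with its detached PGP signature.
curl -fsSLO https://nginx.org/download/nginx-1.11.3.tar.gz
curl -fsSLO https://nginx.org/download/nginx-1.11.3.tar.gz.asc

# gpg exits non-zero if the tarball does not match the signature.
gpg --verify nginx-1.11.3.tar.gz.asc nginx-1.11.3.tar.gz
```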
Anyway, nginx supports an auto-update mechanism using repositories. [2] [1] http://nginx.org/en/pgp_keys.html [2] http://nginx.org/en/linux_packages.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 859 bytes Desc: OpenPGP digital signature URL: From reallfqq-nginx at yahoo.fr Mon Aug 22 16:41:50 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 22 Aug 2016 18:41:50 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> Message-ID: The problem is, if the GPG key is served through HTTP, there is no way to authenticate it, since it could be compromised through MITM. I am very surprised to see myself being qualified as 'HTTPS despot' when I just spot the obvious. Compromised repository + GPG key is one very powerful way of impersonating another product. TLS provides both encryption and authentication, based on the initial shared circle of trust. Thus you certify the GPG key is authentic and thus, if it verifies the binaries, you ensure the delivered package are produced by the owner of the key, in the end the real author. In 2016, stating that content served over HTTP is 'secure' blows my mind and kills your credibility. Now, as Richard pointed out, if you truly believe you need to provide HTTP-only, you can. It would be better if it was in a very visible fashion, though. Where was despotism, again? --- *B. R.* On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway wrote: > 1. You could provide insecure.nginx.org mirror for such people, make > nginx.org secure by default. > > 2. Modern server CPUs are already extremely energy efficient, TLS adds > negligible load. See https://istlsfastyet.com/ > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V.
Bartenev > wrote: > >> On Sunday 21 August 2016 15:56:09 B.R. wrote: >> > It is surprising, since I remember Ilya Grigorik made a talk about TLS >> > during the first ever nginx conf in 2014: >> > https://www.youtube.com/watch?v=iHxD-G0YjiU >> > https://istlsfastyet.com/ >> >> It's just Ilya's opinion. You are free to agree or not. >> >> >> > >> > Thus, there is no reason for not going full-HTTPS in delivering Web >> pages. >> >> There are at least two reasons to not use HTTPS: >> >> 1. Provide easy access to information for people, who can't use >> encryption >> by political, legal, or technical reasons. >> >> 2. Don't waste resources on encryption, and thus save our planet. >> >> Please, don't be a TLS despot and let people to have a choice to use >> encryption >> or not. >> >> I think the situation when I can't download new version of OpenSSL using >> old >> version of OpenSSL is ridiculous, but they have configured openssl.org >> that way. >> How I supposed to use Internet then? >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Aug 22 16:49:29 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 22 Aug 2016 19:49:29 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> Message-ID: <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> On 8/22/16 7:41 PM, B.R. wrote: > The problem is, if the GPG key is served through HTTP, there is no > way to authenticate it, since it could be compromised through MITM. > I am very surprised to see myself being qualified as 'HTTPS despot' > when I just spot the obvious. 
> But it does not -- our PGP key distributed through a number of channels, including HTTPS. Problem solved, case closed? > Compromised repository + GPG key is one very powerful way of > impersonating another product. > > TLS provides both encryption and authentication, based on the > initial shared circle of trust. > Thus you certify the GPG key is authentic and thus, if it verifies > the binaries, you ensure the delivered package are produced by the > owner of the key, in the end the real author. > > In 2016, stating that content served over HTTP is 'secure' blows my > mind and kills your credibility. > Who did that? What's his name? > ?Now, as Richard pointed out, if you truly believe you need to > provide HTTP-only, you can. It would be better if it was in a very > visible fashion, though?. > Where was despotism, again? nginx.org already has HTTPS therefore it is not HTTP-only. > --- > *B. R.* > > On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway > > wrote: > > 1. You could provide insecure.nginx.org > mirror for such people, make > nginx.org secure by default. > > 2. Modern server CPUs are already extremely energy efficient, > TLS adds negligible load. See https://istlsfastyet.com/ > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > wrote: > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > It is surprising, since I remember Ilya Grigorik made a talk about TLS > > during the first ever nginx conf in 2014: > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > https://istlsfastyet.com/ > > It's just Ilya's opinion. You are free to agree or not. > > > > > > Thus, there is no reason for not going full-HTTPS in delivering Web pages. > > There are at least two reasons to not use HTTPS: > > 1. Provide easy access to information for people, who can't > use encryption > by political, legal, or technical reasons. > > 2. Don't waste resources on encryption, and thus save our > planet. 
> > Please, don't be a TLS despot and let people to have a > choice to use encryption > or not. > > I think the situation when I can't download new version of > OpenSSL using old > version of OpenSSL is ridiculous, but they have configured > openssl.org that way. > How I supposed to use Internet then? > > wbr, Valentin V. Bartenev > -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From r1ch+nginx at teamliquid.net Mon Aug 22 17:15:21 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 22 Aug 2016 19:15:21 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> Message-ID: Could you at least fix the https download page, so it doesn't directly link to a HTTP PGP key? On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov wrote: > On 8/22/16 7:41 PM, B.R. wrote: > > The problem is, if the GPG key is served through HTTP, there is no > > way to authenticate it, since it could be compromised through MITM. > > I am very surprised to see myself being qualified as 'HTTPS despot' > > when I just spot the obvious. > > > But it does not -- our PGP key distributed through a number of > channels, including HTTPS. Problem solved, case closed? > > > Compromised repository + GPG key is one very powerful way of > > impersonating another product. > > > > TLS provides both encryption and authentication, based on the > > initial shared circle of trust. > > Thus you certify the GPG key is authentic and thus, if it verifies > > the binaries, you ensure the delivered package are produced by the > > owner of the key, in the end the real author. > > > > In 2016, stating that content served over HTTP is 'secure' blows my > > mind and kills your credibility. > > > Who did that? What's his name? 
> > > ?Now, as Richard pointed out, if you truly believe you need to > > provide HTTP-only, you can. It would be better if it was in a very > > visible fashion, though?. > > Where was despotism, again? > > nginx.org already has HTTPS therefore it is not HTTP-only. > > > --- > > *B. R.* > > > > On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway > > > wrote: > > > > 1. You could provide insecure.nginx.org > > mirror for such people, make > > nginx.org secure by default. > > > > 2. Modern server CPUs are already extremely energy efficient, > > TLS adds negligible load. See https://istlsfastyet.com/ > > > > > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > > wrote: > > > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > > It is surprising, since I remember Ilya Grigorik made a talk > about TLS > > > during the first ever nginx conf in 2014: > > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > > > https://istlsfastyet.com/ > > > > It's just Ilya's opinion. You are free to agree or not. > > > > > > > > > > Thus, there is no reason for not going full-HTTPS in > delivering Web pages. > > > > There are at least two reasons to not use HTTPS: > > > > 1. Provide easy access to information for people, who can't > > use encryption > > by political, legal, or technical reasons. > > > > 2. Don't waste resources on encryption, and thus save our > > planet. > > > > Please, don't be a TLS despot and let people to have a > > choice to use encryption > > or not. > > > > I think the situation when I can't download new version of > > OpenSSL using old > > version of OpenSSL is ridiculous, but they have configured > > openssl.org that way. > > How I supposed to use Internet then? > > > > wbr, Valentin V. Bartenev > > > > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 
7-9, Austin, TX: http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Aug 22 17:19:55 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 22 Aug 2016 20:19:55 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> Message-ID: <09744042-0580-1931-d315-4c805b3e0516@nginx.com> On 8/22/16 8:15 PM, Richard Stanway wrote: > Could you at least fix the https download page, so it doesn't > directly link to a HTTP PGP key? > It works correctly: https://nginx.org/en/download.html > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov > wrote: > > On 8/22/16 7:41 PM, B.R. wrote: > > The problem is, if the GPG key is served through HTTP, there is no > > way to authenticate it, since it could be compromised through > MITM. > > I am very surprised to see myself being qualified as 'HTTPS > despot' > > when I just spot the obvious. > > > But it does not -- our PGP key distributed through a number of > channels, including HTTPS. Problem solved, case closed? > > > Compromised repository + GPG key is one very powerful way of > > impersonating another product. > > > > TLS provides both encryption and authentication, based on the > > initial shared circle of trust. > > Thus you certify the GPG key is authentic and thus, if it verifies > > the binaries, you ensure the delivered package are produced by the > > owner of the key, in the end the real author. > > > > In 2016, stating that content served over HTTP is 'secure' > blows my > > mind and kills your credibility. > > > Who did that? What's his name? > > > ?Now, as Richard pointed out, if you truly believe you need to > > provide HTTP-only, you can. 
It would be better if it was in a very > > visible fashion, though?. > > Where was despotism, again? > > nginx.org already has HTTPS therefore it is > not HTTP-only. > > > --- > > *B. R.* > > > > On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway > > > >> wrote: > > > > 1. You could provide insecure.nginx.org > > mirror for such people, make > > nginx.org secure by > default. > > > > 2. Modern server CPUs are already extremely energy efficient, > > TLS adds negligible load. See https://istlsfastyet.com/ > > > > > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > >> wrote: > > > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > > It is surprising, since I remember Ilya Grigorik made a talk about TLS > > > during the first ever nginx conf in 2014: > > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > > > > > https://istlsfastyet.com/ > > > > It's just Ilya's opinion. You are free to agree or not. > > > > > > > > > > Thus, there is no reason for not going full-HTTPS in delivering Web pages. > > > > There are at least two reasons to not use HTTPS: > > > > 1. Provide easy access to information for people, who can't > > use encryption > > by political, legal, or technical reasons. > > > > 2. Don't waste resources on encryption, and thus save our > > planet. > > > > Please, don't be a TLS despot and let people to have a > > choice to use encryption > > or not. > > > > I think the situation when I can't download new version of > > OpenSSL using old > > version of OpenSSL is ridiculous, but they have configured > > openssl.org > that way. > > How I supposed to use Internet then? > > > > wbr, Valentin V. Bartenev > > > > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 
7-9, Austin, TX: > http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From r1ch+nginx at teamliquid.net Mon Aug 22 17:23:41 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 22 Aug 2016 19:23:41 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: <09744042-0580-1931-d315-4c805b3e0516@nginx.com> References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: See https://nginx.org/en/linux_packages.html#stable PGP key links are hard coded to http URLs:

For Debian/Ubuntu, in order to authenticate the nginx repository signature and to eliminate warnings about missing PGP key during installation of the nginx package, it is necessary to add the key used to sign the nginx packages and repository to the apt program keyring. Please download this key from our web site, and add it to the apt program keyring with the following command:

On Mon, Aug 22, 2016 at 7:19 PM, Maxim Konovalov wrote: > On 8/22/16 8:15 PM, Richard Stanway wrote: > > Could you at least fix the https download page, so it doesn't > > directly link to a HTTP PGP key? > > > It works correctly: https://nginx.org/en/download.html > > > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov > > wrote: > > > > On 8/22/16 7:41 PM, B.R. wrote: > > > The problem is, if the GPG key is served through HTTP, there is no > > > way to authenticate it, since it could be compromised through > > MITM. > > > I am very surprised to see myself being qualified as 'HTTPS > > despot' > > > when I just spot the obvious. > > > > > But it does not -- our PGP key distributed through a number of > > channels, including HTTPS. Problem solved, case closed? > > > > > Compromised repository + GPG key is one very powerful way of > > > impersonating another product. > > > > > > TLS provides both encryption and authentication, based on the > > > initial shared circle of trust. > > > Thus you certify the GPG key is authentic and thus, if it verifies > > > the binaries, you ensure the delivered package are produced by the > > > owner of the key, in the end the real author. > > > > > > In 2016, stating that content served over HTTP is 'secure' > > blows my > > > mind and kills your credibility. > > > > > Who did that? What's his name? > > > > > ?Now, as Richard pointed out, if you truly believe you need to > > > provide HTTP-only, you can. It would be better if it was in a very > > > visible fashion, though?. > > > Where was despotism, again? > > > > nginx.org already has HTTPS therefore it is > > not HTTP-only. > > > > > --- > > > *B. R.* > > > > > > On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway > > > > > > >> wrote: > > > > > > 1. You could provide insecure.nginx.org < > http://insecure.nginx.org> > > > mirror for such people, make > > > nginx.org secure by > > default. > > > > > > 2. 
Modern server CPUs are already extremely energy efficient, > > > TLS adds negligible load. See https://istlsfastyet.com/ > > > > > > > > > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > > vbart at nginx.com > > >> wrote: > > > > > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > > > It is surprising, since I remember Ilya Grigorik made a > talk about TLS > > > > during the first ever nginx conf in 2014: > > > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > > > > > > > > > https://istlsfastyet.com/ > > > > > > It's just Ilya's opinion. You are free to agree or not. > > > > > > > > > > > > > > Thus, there is no reason for not going full-HTTPS in > delivering Web pages. > > > > > > There are at least two reasons to not use HTTPS: > > > > > > 1. Provide easy access to information for people, who > can't > > > use encryption > > > by political, legal, or technical reasons. > > > > > > 2. Don't waste resources on encryption, and thus save our > > > planet. > > > > > > Please, don't be a TLS despot and let people to have a > > > choice to use encryption > > > or not. > > > > > > I think the situation when I can't download new version of > > > OpenSSL using old > > > version of OpenSSL is ridiculous, but they have configured > > > openssl.org > > that way. > > > How I supposed to use Internet then? > > > > > > wbr, Valentin V. Bartenev > > > > > > > > > -- > > Maxim Konovalov > > Join us at nginx.conf, Sept. 7-9, Austin, TX: > > http://nginx.com/nginxconf > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 
7-9, Austin, TX: http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Aug 22 17:30:42 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 22 Aug 2016 20:30:42 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: On 8/22/16 8:23 PM, Richard Stanway wrote: > See https://nginx.org/en/linux_packages.html#stable > > PGP key links are hard coded to http URLs: > >

> For Debian/Ubuntu, in order to authenticate the nginx repository > signature > and to eliminate warnings about missing PGP key during installation > of the > nginx package, it is necessary to add the key used to sign the nginx > packages and repository to the apt program keyring. > Please download this > key from our web site, and add it to the apt > program keyring with the following command: >

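The command itself was stripped along with the HTML attachment in the quoted page excerpt above. For context, the conventional form from the nginx Linux-packages instructions of that era (reproduced from memory, so treat the exact URL and filename as assumptions rather than the scrubbed original) looked like:

```shell
# Fetch the nginx signing key and add it to apt's keyring.
# (apt-key was the tool these 2016-era instructions referred to;
# it has since been deprecated in favour of /etc/apt/trusted.gpg.d drop-ins.)
wget https://nginx.org/keys/nginx_signing.key
sudo apt-key add nginx_signing.key
```

Note that the key URL being plain `http://` in the page markup is precisely what the thread below is complaining about.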
> Yes, I see. It should be fixed. Thanks. > On Mon, Aug 22, 2016 at 7:19 PM, Maxim Konovalov > wrote: > > On 8/22/16 8:15 PM, Richard Stanway wrote: > > Could you at least fix the https download page, so it doesn't > > directly link to a HTTP PGP key? > > > It works correctly: https://nginx.org/en/download.html > > > > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov > > >> wrote: > > > > On 8/22/16 7:41 PM, B.R. wrote: > > > The problem is, if the GPG key is served through HTTP, > there is no > > > way to authenticate it, since it could be compromised > through > > MITM. > > > I am very surprised to see myself being qualified as 'HTTPS > > despot' > > > when I just spot the obvious. > > > > > But it does not -- our PGP key distributed through a number of > > channels, including HTTPS. Problem solved, case closed? > > > > > Compromised repository + GPG key is one very powerful way of > > > impersonating another product. > > > > > > TLS provides both encryption and authentication, based > on the > > > initial shared circle of trust. > > > Thus you certify the GPG key is authentic and thus, if > it verifies > > > the binaries, you ensure the delivered package are > produced by the > > > owner of the key, in the end the real author. > > > > > > In 2016, stating that content served over HTTP is 'secure' > > blows my > > > mind and kills your credibility. > > > > > Who did that? What's his name? > > > > > ?Now, as Richard pointed out, if you truly believe you > need to > > > provide HTTP-only, you can. It would be better if it was > in a very > > > visible fashion, though?. > > > Where was despotism, again? > > > > nginx.org already > has HTTPS therefore it is > > not HTTP-only. > > > > > --- > > > *B. R.* > > > > > > On Mon, Aug 22, 2016 at 5:40 PM, Richard Stanway > > > > > > > > > >>> wrote: > > > > > > 1. You could provide insecure.nginx.org > > > > mirror for such people, make > > > nginx.org > secure by > > default. > > > > > > 2. 
Modern server CPUs are already extremely energy efficient, > > > TLS adds negligible load. See https://istlsfastyet.com/ > > > > > > > > > > > > On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev > > > > > > > > >>> wrote: > > > > > > On Sunday 21 August 2016 15:56:09 B.R. wrote: > > > > It is surprising, since I remember Ilya Grigorik made a talk about TLS > > > > during the first ever nginx conf in 2014: > > > > https://www.youtube.com/watch?v=iHxD-G0YjiU > > > > > > > > > >> > > > > https://istlsfastyet.com/ > > > > > > It's just Ilya's opinion. You are free to agree or not. > > > > > > > > > > > > > > Thus, there is no reason for not going full-HTTPS in delivering Web pages. > > > > > > There are at least two reasons to not use HTTPS: > > > > > > 1. Provide easy access to information for people, who can't > > > use encryption > > > by political, legal, or technical reasons. > > > > > > 2. Don't waste resources on encryption, and thus save our > > > planet. > > > > > > Please, don't be a TLS despot and let people to have a > > > choice to use encryption > > > or not. > > > > > > I think the situation when I can't download new version of > > > OpenSSL using old > > > version of OpenSSL is ridiculous, but they have configured > > > openssl.org > > > that way. > > > How I supposed to use Internet then? > > > > > > wbr, Valentin V. Bartenev > > > > > > > > > -- > > Maxim Konovalov > > Join us at nginx.conf, Sept. 7-9, Austin, TX: > > http://nginx.com/nginxconf > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 
7-9, Austin, TX: > http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From designerfh at yahoo.com Mon Aug 22 17:51:32 2016 From: designerfh at yahoo.com (Jason Tuck) Date: Mon, 22 Aug 2016 17:51:32 +0000 (UTC) Subject: SSO with Auth_Request References: <1320628987.365405.1471888292745.ref@mail.yahoo.com> Message-ID: <1320628987.365405.1471888292745@mail.yahoo.com> Hi All, I'm trying to implement SSO similar to this: https://developers.shopware.com/blog/2015/03/02/sso-with-nginx-authrequest-module/ however I am using node/passport/azure-ad for my authentication service. The issue I am running into is - how do I get the originally requested route /app1 when the subrequest returns a 401? I'd like to pass that along to the passport.js middleware as a parameter so it will redirect me properly after authentication (which involves several redirects).

server {
    listen 80;
    server_name localhost;
    error_page 401 /login;
    location /login {
        set $app //this is where I get stuck
        rewrite ^/login http://localhost:3200/login?appUrl=$app;
    }
    location /app1 {
        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;
        auth_request /auth;
    }
    location /auth {
        proxy_pass http://localhost:3200/auth;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}

I've tried returning the value from node as a custom header, tried $upstream_http_, $sent_http_, $http_. Tried storing it as a session variable, but express sees the subrequest as a different session than navigating directly, etc.
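For what it's worth, one pattern that addresses the question above without any custom header (a sketch only, not a tested configuration: the `@error401` name and the `appUrl` parameter are illustrative, and the login-service URL is taken from the post) is to route the 401 into a named location, where nginx's built-in `$request_uri` still holds the originally requested route:

```nginx
server {
    listen 80;
    server_name localhost;

    # Send auth_request failures to a named location instead of /login,
    # so the original request context (and $request_uri) is preserved.
    error_page 401 = @error401;

    location @error401 {
        # $request_uri is the URI the client originally asked for (/app1),
        # so it can be handed to the login service as a redirect target.
        return 302 http://localhost:3200/login?appUrl=$scheme://$http_host$request_uri;
    }

    location /app1 {
        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;
        auth_request /auth;
    }

    location /auth {
        internal;
        proxy_pass http://localhost:3200/auth;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

With this, a request to /app1 that fails the auth subrequest is answered with a 302 to the login service carrying the original URI, so no state has to survive the subrequest at all.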
I've gone through the past couple years on the mailing list archive and didn't see anything. Any help would be appreciated! Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkdimitrijevic at gmail.com Tue Aug 23 08:23:23 2016 From: nkdimitrijevic at gmail.com (Nenad Dimitrijevic) Date: Tue, 23 Aug 2016 10:23:23 +0200 Subject: Neo4j HA cluster with Nginx Message-ID: I found this article on how to do it with HAProxy: http://fossies.org/linux/neo4j/enterprise/ha/src/docs/dev/haproxy.asciidoc Is there any reading or example about how to configure a Neo4j HA cluster with Nginx using the bolt protocol? -- m -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Aug 23 10:55:30 2016 From: nginx-forum at forum.nginx.org (Trast0) Date: Tue, 23 Aug 2016 06:55:30 -0400 Subject: nginx and ssl Message-ID: <7fd49421189a73983d65ab1a64d5d59c.NginxMailingListEnglish@forum.nginx.org> Hello, I'm new to the nginx world and I'm trying to set up a web server. I'm probably making rookie mistakes, so I apologize in advance. The problem I have is that the server translates https addresses, for example https://webdomain to http://webdomain:443. Is this logical? What is my error? Thank you very much. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269112,269112#msg-269112 From reallfqq-nginx at yahoo.fr Tue Aug 23 13:15:10 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 23 Aug 2016 15:15:10 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: > > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov wrote: > On 8/22/16 7:41 PM, B.R. wrote: > > In 2016, stating that content served over HTTP is 'secure' blows my > > mind and kills your credibility. > > > Who did that? What's his name?
> Someone named 'Maxim Konovalov'. Sounds familiar? See below: On Mon, Aug 22, 2016 at 5:44 PM, Maxim Konovalov wrote: > On 8/22/16 6:40 PM, Richard Stanway wrote: > > 1. You could provide insecure.nginx.org > > mirror for such people, make nginx.org secure by > > default. > > > No, thanks. It is secure by default and HTTPS by default doesn't > add any value. > --- On Mon, Aug 22, 2016 at 7:30 PM, Maxim Konovalov wrote: > On 8/22/16 8:23 PM, Richard Stanway wrote: > > See https://nginx.org/en/linux_packages.html#stable > > > > PGP key links are hard coded to http URLs: > [...] > > Please download this > > key > [...] > Yes, I see. It should be fixed. Thanks. > Not from my side: I still see HTTP links on the following webpage: nginx.org/en/linux_packages.html, both in the HTTP & HTTPS versions (2 'this key' links, 1 'nginx signing key'). Also true for keys delivered on http://nginx.org/en/pgp_keys.html. There might be some other places, though. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Aug 23 13:31:39 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 23 Aug 2016 16:31:39 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: <572edfca-c2eb-7b50-48d9-fff993203ee5@nginx.com> On 8/23/16 4:15 PM, B.R. wrote: > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov > > wrote: > On 8/22/16 7:41 PM, B.R. wrote: > > In 2016, stating that content served over HTTP is 'secure' blows my > > mind and kills your credibility. > > > Who did that? What's his name? > > Someone named 'Maxim Konovalov'. Sounds familiar? Let me repeat: nginx.org supports HTTPS. I don't think it adds any measurable security here, but it's a matter of religion, and you can use it for free if you think it does.
> See below: > > ??On Mon, Aug 22, 2016 at 5:44 PM, Maxim Konovalov > > wrote: > On 8/22/16 6:40 PM, Richard Stanway wrote: > > 1. You could provide insecure.nginx.org > > > mirror for such people, make nginx.org secure by > > default. > > > No, thanks. It is secure by default and HTTPS by default doesn't > add any value.? > > > --- > > ???On Mon, Aug 22, 2016 at 7:30 PM, Maxim Konovalov > > wrote: > On 8/22/16 8:23 PM, Richard Stanway wrote: > > See https://nginx.org/en/linux_packages.html#stable > > > > PGP key links are hard coded to http URLs: > [...] > > Please download this > > key > [...] > Yes, I see. It should be fixed. Thanks. > > > Not from my side: I? still see HTTP links on the following webpage: > nginx.org/en/linux_packages.html > , both in the HTTP & HTTPS > versions (2 'this key' links, 1 'nginx signing key'). > ?Also true for keys delivered on http://nginx.org/en/pgp_keys.html. > There might ?be some other places, though. > --- > *B. R.* > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From daniel at mostertman.org Tue Aug 23 14:31:43 2016 From: daniel at mostertman.org (=?UTF-8?Q?Dani=c3=abl_Mostertman?=) Date: Tue, 23 Aug 2016 16:31:43 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: <572edfca-c2eb-7b50-48d9-fff993203ee5@nginx.com> References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> <572edfca-c2eb-7b50-48d9-fff993203ee5@nginx.com> Message-ID: <238b01f8-aa44-5210-ac00-48645b6d7630@mostertman.org> On 2016-08-23 15:31, Maxim Konovalov wrote: > Let me repeat: nginx.org supports HTTPS. > I don't think it adds any measurable security here but it's matter > of religion but you can use it for free if you think it does. 
+1 Although it would be chic if nginx.org would advertise an HSTS header so that subsequent requests go over HTTPS if a browser supports it. You could also opt to add it to the HSTS preload database, which works in all major browsers. Even the initial request goes to HTTPS then. Numerous reasons to support the unencrypted version have already been given, and (high) encryption is offered. In my opinion you should offer encrypted and unencrypted over the same address, and use technologies like these to make capable browsers that prefer encryption use it by default. Do not simply force encryption on the main site; there's simply no need in this day and age. A lot of companies have thought about this before, including major browser developers. Since those are the ones we serve websites to, it shouldn't take too much effort to convince people that they might have a point with doing it this way. You can also consider enabling DNSSEC support for nginx.org, which makes your recursors able to validate nginx.org (and therefore the source of downloads and signature validation). You can then also mitigate MITM attacks, without encryption enabled. As for speed, TLS with nginx is pretty fast, especially with other technologies to quickly push through more requests. Not the same level as unencrypted connections, but it's -certainly with hardware AES support in most CPUs- not that big of a deal anymore for most sites. Just my € 0,02 From nginx-forum at forum.nginx.org Tue Aug 23 16:11:13 2016 From: nginx-forum at forum.nginx.org (gurumurthi84) Date: Tue, 23 Aug 2016 12:11:13 -0400 Subject: constant bit rate delivery for reverse proxy Message-ID: Team, we are trying to integrate nginx as a mid-tier caching solution for our Content Delivery Network, which delivers MPEG TS video data to user set-top boxes and basically works with a constant bit rate end to end.
Our deployment is something like this: a cluster of edge devices interfacing with the STBs pulls data over HTTP from a cluster of persistent-storage devices, and we are trying to add mid-tier caching for a region so that we can serve multiple streams with improved performance. The problem we are currently facing is that our edge devices adhere to constant/committed bit rate delivery from the upstream, and we were unable to achieve the same with nginx proxying the persistent-store devices, which causes our edge devices to starve for data at the time it is expected to be delivered to the STB. We have the nginx slice module in place with a 5 MB slice size configured on version 1.9.12, and tcp captures showed us nginx is not delivering data even though enough receive window is available and there is no network congestion. We filled up the upstream in proxy_pass on a best-effort basis, which resolved the issue for a single stream. But we are seeing issues when we try to scale to 100+ streams. Let us know any good suggestions on making nginx adhere to/limit at the rate requested by the client. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269133,269133#msg-269133 From lists at lazygranch.com Tue Aug 23 17:07:56 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 23 Aug 2016 10:07:56 -0700 Subject: Problems with custom log file format In-Reply-To: <20160821190204.18aba959@linux-h57q.site> References: <20160821190204.18aba959@linux-h57q.site> Message-ID: <20160823170756.5468238.21249.9148@lazygranch.com> Looks like I have no takers on this problem. Should I file a bug report? If so, where? -- Original Message --
From: lists at lazygranch.com Sent: Sunday, August 21, 2016 7:02 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Problems with custom log file format Nginx 1.10.1,2 FreeBSD 10.2-RELEASE-p18 #0: Sat May 28 08:53:43 UTC 2016 I'm using the "map" module to detect obvious hacking by detecting keywords. (Yes, I know about Naxsi.) Finding the really dumb hacks is easy. I give them a 444 return code with the idea being I can run a script on the log file and block these IPs. (Yes, I know about swatch.) My problem is the access.log doesn't get formatted all the time. I have many examples, but this is representative. First group has 444 at the start of the line (custom format). The next group uses the default format. ---------------------------------- 444 111.91.62.144 - - [21/Aug/2016:09:31:50 +0000] "GET /wp-login.php HTTP/1.1" 0 "-" "Mozilla/5.0 (Windows NT 6.1; WO W64; rv:40.0) Gecko/20100101 Firefox/40.1" "-" 444 175.123.98.240 - - [21/Aug/2016:04:39:44 +0000] "GET /manager/html HTTP/1.1" 0 "-" "Mozilla/5.0 (Windows NT 5.1; r v:5.0) Gecko/20100101 Firefox/5.0" "-" 444 103.253.14.43 - - [21/Aug/2016:05:43:15 +0000] "GET /admin/config.php HTTP/1.1" 0 "-" "python-requests/2.10.0" "-" 444 185.130.6.49 - - [21/Aug/2016:14:23:09 +0000] "GET //phpMyAdmin/scripts/setup.php HTTP/1.1" 0 "-" "-" "-" 176.26.5.107 - - [21/Aug/2016:09:43:20 +0000] "GET /wp-login.php HTTP/1.1" 444 0 "-" "Mozilla/5.0 (Windows NT 6.1; WOW 64; rv:40.0) Gecko/20100101 Firefox/40.1" 195.90.204.103 - - [21/Aug/2016:17:09:11 +0000] "GET /wordpress/wp-admin/ HTTP/1.1" 444 0 "-" "-" -------------------------- I'm putting the return code first to simplify my scripting that I will use to feed blocking in ipfw. My nginx.conf follows (abbreviated). The email may mangle the formatting a bit. 
------------- http { log_format main '$status $remote_addr - $remote_user [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main --------------------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Aug 23 17:09:47 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Aug 2016 20:09:47 +0300 Subject: Problems with custom log file format In-Reply-To: <20160823170756.5468238.21249.9148@lazygranch.com> References: <20160821190204.18aba959@linux-h57q.site> <20160823170756.5468238.21249.9148@lazygranch.com> Message-ID: <20160823170947.GW24741@mdounin.ru> Hello! On Tue, Aug 23, 2016 at 10:07:56AM -0700, lists at lazygranch.com wrote: > Looks like I have no takers on this problem. Should I filed a > bug report? If so, where? I would recommend you to start with double-checking your configuration instead. -- Maxim Dounin http://nginx.org/ From lists at lazygranch.com Tue Aug 23 17:27:01 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 23 Aug 2016 10:27:01 -0700 Subject: Problems with custom log file format In-Reply-To: <20160823170947.GW24741@mdounin.ru> References: <20160821190204.18aba959@linux-h57q.site> <20160823170756.5468238.21249.9148@lazygranch.com> <20160823170947.GW24741@mdounin.ru> Message-ID: <20160823172701.5468238.38788.9151@lazygranch.com> Configuration file included in the post. I already checked it. ? Original Message ? From: Maxim Dounin Sent: Tuesday, August 23, 2016 10:10 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Problems with custom log file format Hello! On Tue, Aug 23, 2016 at 10:07:56AM -0700, lists at lazygranch.com wrote: > Looks like I have no takers on this problem. Should I filed a > bug report? If so, where? 
I would recommend you to start with double-checking your configuration instead. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lucas at slcoding.com Tue Aug 23 17:35:12 2016 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 23 Aug 2016 19:35:12 +0200 Subject: Problems with custom log file format In-Reply-To: <20160823172701.5468238.38788.9151@lazygranch.com> References: <20160821190204.18aba959@linux-h57q.site> <20160823170756.5468238.21249.9148@lazygranch.com> <20160823170947.GW24741@mdounin.ru> <20160823172701.5468238.38788.9151@lazygranch.com> Message-ID: <57BC8950.4030206@slcoding.com> 1st one matches your log format, the 2nd one matches the `combined` format - so: - Maybe you've still some old nginx processes that are still not closed after reloading, which causes it to log in the combined format, or can it be another vhost logging to same log file, without using access_log .... main ? Can you paste your *full* config?, that allows for easier debugging. lists at lazygranch.com wrote: > Configuration file included in the post. I already checked it. > > > Original Message > From: Maxim Dounin > Sent: Tuesday, August 23, 2016 10:10 AM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: Problems with custom log file format > > Hello! > > On Tue, Aug 23, 2016 at 10:07:56AM -0700, lists at lazygranch.com wrote: > >> Looks like I have no takers on this problem. Should I filed a >> bug report? If so, where? > > I would recommend you to start with double-checking your > configuration instead. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r at roze.lv Tue Aug 23 17:51:55 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 23 Aug 2016 20:51:55 +0300 Subject: Problems with custom log file format In-Reply-To: <20160823172701.5468238.38788.9151@lazygranch.com> References: <20160821190204.18aba959@linux-h57q.site> <20160823170756.5468238.21249.9148@lazygranch.com> <20160823170947.GW24741@mdounin.ru> <20160823172701.5468238.38788.9151@lazygranch.com> Message-ID: > Configuration file included in the post. I already checked it. You have shown only few excerpts (like there might be other access_log directives in other parts included config files (easily missed when doing include path/*.conf) etc). For example if you can reproduce the issue with such config (I couldn't) there might be a bug in the software: events {} http { log_format main '$status $remote_addr - $remote_user [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; server {} } rr From lists at lazygranch.com Tue Aug 23 18:55:42 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 23 Aug 2016 11:55:42 -0700 Subject: Problems with custom log file format In-Reply-To: References: <20160821190204.18aba959@linux-h57q.site> <20160823170756.5468238.21249.9148@lazygranch.com> <20160823170947.GW24741@mdounin.ru> <20160823172701.5468238.38788.9151@lazygranch.com> Message-ID: <20160823115542.5887938e@linux-h57q.site> Link goes to conf file https://www.dropbox.com/s/1gz5139s4q3b7e0/nginx.conf?dl=0 On Tue, 23 Aug 2016 20:51:55 +0300 "Reinis Rozitis" wrote: > > Configuration file included in the post. I already checked it. > > You have shown only few excerpts (like there might be other > access_log directives in other parts included config files (easily > missed when doing include path/*.conf) etc). 
> > For example if you can reproduce the issue with such config (I > couldn't) there might be a bug in the software: > > events {} > http { > log_format main '$status $remote_addr - $remote_user > [$time_local] "$request" ' '$body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for"'; > access_log logs/access.log main; > server {} > } > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Tue Aug 23 20:30:52 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 23 Aug 2016 23:30:52 +0300 Subject: Problems with custom log file format In-Reply-To: <20160823115542.5887938e@linux-h57q.site> References: <20160821190204.18aba959@linux-h57q.site><20160823170756.5468238.21249.9148@lazygranch.com><20160823170947.GW24741@mdounin.ru><20160823172701.5468238.38788.9151@lazygranch.com> <20160823115542.5887938e@linux-h57q.site> Message-ID: <14B4336508064302A8748DE3323E5385@NeiRoze> > Link goes to conf file > https://www.dropbox.com/s/1gz5139s4q3b7e0/nginx.conf?dl=0 On line 324 you have (for the lazygranch.xyz virtualhost): access_log /var/log/nginx/access.log; Without the specified format so ses the default. rr From r at roze.lv Tue Aug 23 20:32:05 2016 From: r at roze.lv (Reinis Rozitis) Date: Tue, 23 Aug 2016 23:32:05 +0300 Subject: Problems with custom log file format In-Reply-To: <14B4336508064302A8748DE3323E5385@NeiRoze> References: <20160821190204.18aba959@linux-h57q.site><20160823170756.5468238.21249.9148@lazygranch.com><20160823170947.GW24741@mdounin.ru><20160823172701.5468238.38788.9151@lazygranch.com> <20160823115542.5887938e@linux-h57q.site> <14B4336508064302A8748DE3323E5385@NeiRoze> Message-ID: <73481213493A4340911F8659AC4C18BE@NeiRoze> > Without the specified format so ses the default. .. it uses the default. 
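In other words, the stray default-format lines were coming from a second `access_log` directive that names no format, so it falls back to the built-in `combined` format; the fix is simply to name the custom format there as well (a sketch, reusing the `main` format defined earlier in this thread):

```nginx
# Line 324 of the posted nginx.conf, inside the lazygranch.xyz virtualhost.
# Before: access_log /var/log/nginx/access.log;      # no format -> 'combined'
# After:  name the custom format explicitly
access_log /var/log/nginx/access.log main;
```

Every server block writing to the same file should name the same format, or the file ends up interleaving two layouts exactly as shown in the original post.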
rr From lists at lazygranch.com Tue Aug 23 21:01:24 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 23 Aug 2016 14:01:24 -0700 Subject: Problems with custom log file format In-Reply-To: <73481213493A4340911F8659AC4C18BE@NeiRoze> References: <20160821190204.18aba959@linux-h57q.site><20160823170756.5468238.21249.9148@lazygranch.com><20160823170947.GW24741@mdounin.ru><20160823172701.5468238.38788.9151@lazygranch.com> <20160823115542.5887938e@linux-h57q.site> <14B4336508064302A8748DE3323E5385@NeiRoze> <73481213493A4340911F8659AC4C18BE@NeiRoze> Message-ID: <20160823210124.5468238.14254.9180@lazygranch.com> Thanks. I will fix the line and report back. I'm not really using the xyz tld. The lines should be commented out. I'm just using the com. But I will check that as well. -- Original Message -- From: Reinis Rozitis Sent: Tuesday, August 23, 2016 1:32 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Problems with custom log file format > Without the specified format so ses the default. .. it uses the default. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Aug 24 00:27:09 2016 From: nginx-forum at forum.nginx.org (fengli) Date: Tue, 23 Aug 2016 20:27:09 -0400 Subject: Support per addon C++ flags Message-ID: <493b69d628f7547112b7f21a60795fb6.NginxMailingListEnglish@forum.nginx.org> I'm trying to compile my module with C++ with additional flags, more specifically -std=c++11. I want my module to use the standard nginx build scripts, like auto/configure, etc. However, it looks like nginx only allows C++ options at the global level. So, if I specify -std=c++11 in --with-cc-opt, other modules may complain. And it gets applied to non-C++ files, like .c files. Unfortunately, nginx applies -Werror to stop on any warning, so the build process will stop at the .c files with an error when -std=c++11 is given.
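For the record, nginx's build system has no per-addon compiler flags, so one common workaround (a sketch only, nothing official: the module and library names below are hypothetical) is to keep the C++ sources out of nginx's compile loop entirely: build them into a static library with a separate Makefile using -std=c++11, expose a plain C interface, and have the addon's `config` script link the result:

```shell
# config script for a hypothetical addon wrapping a separately built C++ core.
# Only the thin C shim goes through nginx's compiler invocation, so the
# global -Werror never meets -std=c++11.
ngx_addon_name=ngx_http_mymodule
HTTP_MODULES="$HTTP_MODULES ngx_http_mymodule"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_mymodule.c"
# libmycore.a is produced beforehand, e.g. by 'make -C cpp CXXFLAGS=-std=c++11';
# -lstdc++ pulls in the C++ runtime at link time.
CORE_LIBS="$CORE_LIBS $ngx_addon_dir/cpp/libmycore.a -lstdc++"
```

This sidesteps the -Werror problem because nginx's CFLAGS never touch the C++ translation units at all.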
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269147,269147#msg-269147 From nginx-forum at forum.nginx.org Wed Aug 24 10:29:32 2016 From: nginx-forum at forum.nginx.org (Teenabiswas) Date: Wed, 24 Aug 2016 06:29:32 -0400 Subject: SPDY + HTTP/2 In-Reply-To: References: Message-ID: <62ac5e3da36e2e48887700ec04640780.NginxMailingListEnglish@forum.nginx.org> I am searching code for bulk SMS http api please suggest any code ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263245,269149#msg-269149 From nginx-forum at forum.nginx.org Wed Aug 24 10:59:22 2016 From: nginx-forum at forum.nginx.org (beatnut) Date: Wed, 24 Aug 2016 06:59:22 -0400 Subject: Too many open files when reloading - Debian Jessie Message-ID: <35384d486deada0d3e6df83482fc0fad.NginxMailingListEnglish@forum.nginx.org> Hello, My nginx was build from source on Debian Jessie --prefix=/etc/nginx --modules-path=/usr/lib/nginx/modules --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/tmp/client --http-proxy-temp-path=/var/cache/nginx/tmp/proxy --http-fastcgi-temp-path=/var/cache/nginx/tmp/fastcgi --http-scgi-temp-path=/var/cache/nginx/tmp/scgi --http-uwsgi-temp-path=/var/cache/nginx/tmp/uwsgi --with-http_v2_module --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_secure_link_module --with-http_geoip_module=dynamic --user=nginx --group=nginx nginx.conf: worker_rlimit_nofile 8192; As root ulimit -n 65536 When starting /etc/init.d/nginx start - everything is ok lsof -u nginx|wc -l 3333 , but when reloading i get [emerg] 18662#0: open() "/var/log/nginx/access/foo.log" failed (24: Too many open files) lsof -u nginx|wc -l 2776 Master proces cat /proc/21101/limits Max open files 1024 4096 files One of workers cat /proc/21102/limits Max open files 
8192 8192 files I can't figure why this problem occurs when reloading and not when starting. How to avoid it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269150,269150#msg-269150 From nginx-forum at forum.nginx.org Wed Aug 24 11:36:57 2016 From: nginx-forum at forum.nginx.org (beatnut) Date: Wed, 24 Aug 2016 07:36:57 -0400 Subject: Too many open files when reloading - Debian Jessie In-Reply-To: <35384d486deada0d3e6df83482fc0fad.NginxMailingListEnglish@forum.nginx.org> References: <35384d486deada0d3e6df83482fc0fad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7fd9374751d0fabc5c56ae0701777b75.NginxMailingListEnglish@forum.nginx.org> I've just found a solution described here http://serverfault.com/questions/770037/debian-8-4-jessie-set-open-files-limit-for-redis-user It works Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269150,269151#msg-269151 From maxim at nginx.com Wed Aug 24 11:58:53 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 24 Aug 2016 14:58:53 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: On 8/22/16 8:30 PM, Maxim Konovalov wrote: > On 8/22/16 8:23 PM, Richard Stanway wrote: >> See https://nginx.org/en/linux_packages.html#stable >> >> PGP key links are hard coded to http URLs: >> >>

>> For Debian/Ubuntu, in order to authenticate the nginx repository >> signature >> and to eliminate warnings about missing PGP key during installation >> of the >> nginx package, it is necessary to add the key used to sign the nginx >> packages and repository to the apt program keyring. >> Please download this >> key from our web site, and add it to the apt >> program keyring with the following command: >>

>> > Yes, I see. It should be fixed. Thanks. > Fixed now. Thanks. -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From shirley at nginx.com Wed Aug 24 16:17:05 2016 From: shirley at nginx.com (Shirley Bailes) Date: Wed, 24 Aug 2016 09:17:05 -0700 Subject: Join us next month at nginx.conf 2016 Message-ID: Hello all! We hope that you'll join us next month for the upcoming NGINX user conference, nginx.conf 2016, September 7-9 at the Hilton Austin in Austin, Texas. There are a lot of amazing talks from people like you who are building cool shi+ with NGINX. Our guest speakers at nginx.conf 2016 will help you learn how to:

- Build a high-performance app architecture to support large numbers of concurrent users
- Achieve zero downtime, even when you are moving apps to the cloud
- Make continuous delivery faster and easier
- Utilize HTTPS, web encryption, and more to protect and secure your sites and apps
- Deploy and optimize containers in production
- Gain deep insights into what's happening in your environment
- Design, develop, and deploy scalable microservices architectures

See the full list of speakers and topics here. Don't forget about the community member discount. Please use and share this discount code to get over $400 off conference tickets: NG15ORG See you soon in Austin! *s Join us at nginx.conf, Sept. 7-9, Austin, TX Register now with NGINX for over $400 off Shirley Bailes Director, Event Marketing Mobile: 707.569.4888 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Aug 24 17:54:43 2016 From: nginx-forum at forum.nginx.org (Amanat) Date: Wed, 24 Aug 2016 13:54:43 -0400 Subject: nginx a NIGHTMARE for me Message-ID: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> I was using Apache for the last 3 years and never faced a single problem. A few days ago I thought to try Nginx. As I heard from many people.
Its very fast memory efficient webserver. Trust me guys. Nginx can never be memory efficient. Rather it consumes memory same like my mozilla. I never understand for what reason it creates a garbage of disk cache for every request. look at the pic http://i.imgur.com/pD7rEDe.png Though i like some features of Nginx like header modification and error 444. Apart from that Nginx is worst. My Server configuration: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 58 Model name: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz Stepping: 9 CPU MHz: 1601.453 CPU max MHz: 3800.0000 CPU min MHz: 1600.0000 BogoMIPS: 6784.50 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 8192K NUMA node0 CPU(s): 0-7 I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram works only for 5 min. If any of you have any suggestion to tweak nginx config. Please tell me. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269159,269159#msg-269159 From rpaprocki at fearnothingproductions.net Wed Aug 24 18:00:44 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 24 Aug 2016 11:00:44 -0700 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sounds like you just want to complain without actually solving any problems. Why don't you start by pasting the output of nginx -V your FULL config file the requests you are sending to Nginx, and the behavior you expect to see And adjust your attitude your attitude so you can actually receive help, and not just complain because you think something is terrible. On Wed, Aug 24, 2016 at 10:54 AM, Amanat wrote: > i was using Apache from last 3 years. 
never faced a single problem. Few > days > ago i think to try Nginx. As i heard from mny people. Its very fast memory > efficient webserver. Trust me guys. Nginx can never be memory efficient. > Rather it consumes memory same like my mozilla. I never understand for what > reason it creates a garbage of disk cache for every request. > look at the pic > > http://i.imgur.com/pD7rEDe.png > > Though i like some features of Nginx like header modification and error > 444. > > Apart from that Nginx is worst. > > My Server configuration: > Architecture: x86_64 > CPU op-mode(s): 32-bit, 64-bit > Byte Order: Little Endian > CPU(s): 8 > On-line CPU(s) list: 0-7 > Thread(s) per core: 2 > Core(s) per socket: 4 > Socket(s): 1 > NUMA node(s): 1 > Vendor ID: GenuineIntel > CPU family: 6 > Model: 58 > Model name: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz > Stepping: 9 > CPU MHz: 1601.453 > CPU max MHz: 3800.0000 > CPU min MHz: 1600.0000 > BogoMIPS: 6784.50 > Virtualization: VT-x > L1d cache: 32K > L1i cache: 32K > L2 cache: 256K > L3 cache: 8192K > NUMA node0 CPU(s): 0-7 > > > I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram > works > only for 5 min. > > If any of you have any suggestion to tweak nginx config. Please tell me. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269159,269159#msg-269159 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zxcvbn4038 at gmail.com Wed Aug 24 18:01:39 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Wed, 24 Aug 2016 14:01:39 -0400 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: You probably have some module leaking memory - send output of 'nginx -V' so people can see what version you have and what modules are there. On Wed, Aug 24, 2016 at 1:54 PM, Amanat wrote: > i was using Apache from last 3 years. never faced a single problem. Few > days > ago i think to try Nginx. As i heard from mny people. Its very fast memory > efficient webserver. Trust me guys. Nginx can never be memory efficient. > Rather it consumes memory same like my mozilla. I never understand for what > reason it creates a garbage of disk cache for every request. > look at the pic > > http://i.imgur.com/pD7rEDe.png > > Though i like some features of Nginx like header modification and error > 444. > > Apart from that Nginx is worst. > > My Server configuration: > Architecture: x86_64 > CPU op-mode(s): 32-bit, 64-bit > Byte Order: Little Endian > CPU(s): 8 > On-line CPU(s) list: 0-7 > Thread(s) per core: 2 > Core(s) per socket: 4 > Socket(s): 1 > NUMA node(s): 1 > Vendor ID: GenuineIntel > CPU family: 6 > Model: 58 > Model name: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz > Stepping: 9 > CPU MHz: 1601.453 > CPU max MHz: 3800.0000 > CPU min MHz: 1600.0000 > BogoMIPS: 6784.50 > Virtualization: VT-x > L1d cache: 32K > L1i cache: 32K > L2 cache: 256K > L3 cache: 8192K > NUMA node0 CPU(s): 0-7 > > > I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram > works > only for 5 min. > > If any of you have any suggestion to tweak nginx config. Please tell me. > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,269159,269159#msg-269159 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aapo.talvensaari at gmail.com Wed Aug 24 18:14:38 2016 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Wed, 24 Aug 2016 21:14:38 +0300 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wednesday, 24 August 2016, Amanat wrote: > i was using Apache from last 3 years. never faced a single problem. Few > days > ago i think to try Nginx. As i heard from mny people. Its very fast memory > efficient webserver. Trust me guys. Nginx can never be memory efficient. > Usually when people complain about these they are running something behind nginx, e.g. FastCGI (e.g. PHP-FPM) or just proxying the traffic. Then they start adding more workers, and generally more everything on nginx.conf without knowing that the problem is in a backend service. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Aug 24 19:45:47 2016 From: nginx-forum at forum.nginx.org (fengli) Date: Wed, 24 Aug 2016 15:45:47 -0400 Subject: Support per addon C++ flags In-Reply-To: <493b69d628f7547112b7f21a60795fb6.NginxMailingListEnglish@forum.nginx.org> References: <493b69d628f7547112b7f21a60795fb6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <611d7ce76fae76598bc3b56447c96345.NginxMailingListEnglish@forum.nginx.org> Update with my findings. Nginx auto/configure will invoke config.make in the addon's directory. It gets invoked after that part of the Makefile has been generated, so an addon developer can use config.make to modify the Makefile produced by nginx's configure.
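fengli's finding can be sketched concretely. Below is an illustrative config.make fragment; the object path and flags are placeholders (not from the thread). nginx's configure sources this file with $NGX_MAKEFILE pointing at the generated objs/Makefile, and the target-specific variable assignment requires GNU make:

```sh
# config.make is sourced by ./configure after objs/Makefile exists,
# so it can append a per-object rule instead of changing the global
# CFLAGS shared by every module.
cat << 'END' >> $NGX_MAKEFILE

objs/addon/src/my_module.o: CFLAGS += -Wno-error
END
```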
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269147,269165#msg-269165 From reallfqq-nginx at yahoo.fr Wed Aug 24 19:51:09 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 24 Aug 2016 21:51:09 +0200 Subject: Too many open files when reloading - Debian Jessie In-Reply-To: <7fd9374751d0fabc5c56ae0701777b75.NginxMailingListEnglish@forum.nginx.org> References: <35384d486deada0d3e6df83482fc0fad.NginxMailingListEnglish@forum.nginx.org> <7fd9374751d0fabc5c56ae0701777b75.NginxMailingListEnglish@forum.nginx.org> Message-ID: ... or get rid of systemd and its habit of doing everything in-house and being often not compatible with 3rd-party mechanisms. Not nginx-related, though. --- *B. R.* On Wed, Aug 24, 2016 at 1:36 PM, beatnut wrote: > I've just found a solution described here > http://serverfault.com/questions/770037/debian-8-4- > jessie-set-open-files-limit-for-redis-user > > It works > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269150,269151#msg-269151 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Aug 24 19:59:27 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 24 Aug 2016 21:59:27 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: HTTPS was supported, but internal links were systematically served over HTTP. Without considering any religion, this problem is now fixed. As per your political decision on serving content (un)encrypted, it is *in fine* your choice and it has been noted. Power users already knew about HTTPS anyway. Thanks Maxim. --- *B.
R.* On Wed, Aug 24, 2016 at 1:58 PM, Maxim Konovalov wrote: > On 8/22/16 8:30 PM, Maxim Konovalov wrote: > > On 8/22/16 8:23 PM, Richard Stanway wrote: > >> See https://nginx.org/en/linux_packages.html#stable > >> > >> PGP key links are hard coded to http URLs: > >> > >>

> >> For Debian/Ubuntu, in order to authenticate the nginx repository > >> signature > >> and to eliminate warnings about missing PGP key during installation > >> of the > >> nginx package, it is necessary to add the key used to sign the nginx > >> packages and repository to the apt program keyring. > >> Please download this > >> key from our web site, and add it to the apt > >> program keyring with the following command: > >>

> >> > > Yes, I see. It should be fixed. Thanks. > > > Fixed now. Thanks. > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Aug 24 20:05:00 2016 From: r at roze.lv (Reinis Rozitis) Date: Wed, 24 Aug 2016 23:05:00 +0300 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <166F392C3156418CABD4F4D9D9DB0C4F@NeiRoze> > Trust me guys. Nginx can never be memory efficient. Interesting way to start an email. > I never understand for what reason it creates a garbage of disk cache for > every request. By default nginx doesn't cache anything; the OS/Linux does it to reduce disk reads. You can adjust it by tuning vm.vfs_cache_pressure and related sysctl variables. > look at the pic http://i.imgur.com/pD7rEDe.png The pic doesn't show anything specific to nginx except that you have 32 GB of RAM, of which ~23 GB are free, ~7.7 GB are Linux filesystem cache, and something like ~2 GB is used for apps etc. > If any of you have any suggestion to tweak nginx config. Please tell me. To see what actually consumes the RAM the whole process list would be needed and, as people stated, also the output of nginx -V.
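Reinis's point, that the memory in the screenshot is the kernel's filesystem cache rather than memory held by nginx, can be checked from /proc/meminfo on any Linux box (the ps invocation in the comment is only illustrative):

```shell
# "Cached" is the kernel page cache: it grows with disk reads and is
# reclaimed automatically under memory pressure; it is not charged to
# the nginx processes themselves.
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo
# Compare with the nginx workers' actual resident memory, e.g.:
#   ps -o pid,rss,comm -C nginx
```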
rr From nginx-forum at forum.nginx.org Wed Aug 24 23:23:43 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Wed, 24 Aug 2016 19:23:43 -0400 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9eb9b9260fc15477550dc69f0e2f8690.NginxMailingListEnglish@forum.nginx.org> Amanat Wrote: ------------------------------------------------------- > I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram > works only for 5 min. Oh, I wonder how I'm using nginx with PHP on servers with 2GB of RAM and no swap... uptimes are in the hundreds days (reboots only for security updates). > If any of you have any suggestion to tweak nginx config. Please tell > me. Like others people asked: please provide your nginx compile options, parameters, what kind of backend you're using,... Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269159,269172#msg-269172 From nginx-forum at forum.nginx.org Thu Aug 25 03:11:52 2016 From: nginx-forum at forum.nginx.org (serendipity30) Date: Wed, 24 Aug 2016 23:11:52 -0400 Subject: Request Compression Message-ID: <7fd6efff0272640cda5912457a8c1938.NginxMailingListEnglish@forum.nginx.org> Hello, This has been asked 3 years back & I would like to know if newer latest version of Nginx supports request compression or not. If yes, how to do so? Note: I'm not referring to response compression which is done by gzip. 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269174,269174#msg-269174 From anoopalias01 at gmail.com Thu Aug 25 03:22:30 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 25 Aug 2016 08:52:30 +0530 Subject: nginx a NIGHTMARE for me In-Reply-To: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: A bad workman always blames his tools ! On Wed, Aug 24, 2016 at 11:24 PM, Amanat wrote: > i was using Apache from last 3 years. never faced a single problem. Few days > ago i think to try Nginx. As i heard from mny people. Its very fast memory > efficient webserver. Trust me guys. Nginx can never be memory efficient. > Rather it consumes memory same like my mozilla. I never understand for what > reason it creates a garbage of disk cache for every request. > look at the pic > > http://i.imgur.com/pD7rEDe.png > > Though i like some features of Nginx like header modification and error > 444. > > Apart from that Nginx is worst. > > My Server configuration: > Architecture: x86_64 > CPU op-mode(s): 32-bit, 64-bit > Byte Order: Little Endian > CPU(s): 8 > On-line CPU(s) list: 0-7 > Thread(s) per core: 2 > Core(s) per socket: 4 > Socket(s): 1 > NUMA node(s): 1 > Vendor ID: GenuineIntel > CPU family: 6 > Model: 58 > Model name: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz > Stepping: 9 > CPU MHz: 1601.453 > CPU max MHz: 3800.0000 > CPU min MHz: 1600.0000 > BogoMIPS: 6784.50 > Virtualization: VT-x > L1d cache: 32K > L1i cache: 32K > L2 cache: 256K > L3 cache: 8192K > NUMA node0 CPU(s): 0-7 > > > I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram works > only for 5 min. > > If any of you have any suggestion to tweak nginx config. Please tell me. 
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269159,269159#msg-269159 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Anoop P Alias From nginx-forum at forum.nginx.org Thu Aug 25 05:37:44 2016 From: nginx-forum at forum.nginx.org (henry_nginx_profile) Date: Thu, 25 Aug 2016 01:37:44 -0400 Subject: NGINX SSL configuration Message-ID: <6e4e52f31a7c37b6a216be1922dab1a1.NginxMailingListEnglish@forum.nginx.org> hello,i am come from china. i use NGINX in a short period of time. i have some confuse about NGINX's ssl_* directive. i have two vhost conf file, the above is my configuration: a.conf: server { listen 443 ssl; server_name a.example.com; ssl_protocols TLSv1.2; ... } b.conf { listen 443 ssl; server_name b.example.com; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ... } my problem: i test these two web site use curl tools, "a.example.com" is using TLSv1.2 protocol, this is ok, but when i testing "b.example.com" that only support TLS1.2 too, it seems like b.conf 's ssl_protocols directive is not effective, only a.conf's ssl_protools directive effective. my question: 1.Dose ssl_protocols directive is only be parser once by NGINX? something like NGINX read config file, that find out a.conf's ssl_protocols directive and record it, the below ssl_protocol directive will be pass? 2.if question 1 is yes, how can i write difference ssl_* directive in multi vhost? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269177,269177#msg-269177 From maxim at nginx.com Thu Aug 25 08:44:09 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 25 Aug 2016 11:44:09 +0300 Subject: No HTTPS on nginx.org by default In-Reply-To: References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> Message-ID: <4c61a298-69a5-0478-b790-5310ce89ffdd@nginx.com> On 8/24/16 10:59 PM, B.R. 
wrote: > HTTPS was supported, but internal links were systematically served > over HTTP. Right -- this happens because long time nginx.org was HTTP only. I agree, that here are still some leftovers that should be fixed. I am sorry that we are not perfect. > Without considering any religion, this problem is now fixed. > > As per your political decision on serving content (un)encrypted, it > is /in fine/ your choice and it has been noted. > Power users already knew about HTTPS anyway. > > Thanks Maxim. > --- > *B. R.* > > On Wed, Aug 24, 2016 at 1:58 PM, Maxim Konovalov > wrote: > > On 8/22/16 8:30 PM, Maxim Konovalov wrote: > > On 8/22/16 8:23 PM, Richard Stanway wrote: > >> See https://nginx.org/en/linux_packages.html#stable > > >> > >> PGP key links are hard coded to http URLs: > >> > >>

> >> For Debian/Ubuntu, in order to authenticate the nginx repository > >> signature > >> and to eliminate warnings about missing PGP key during installation > >> of the > >> nginx package, it is necessary to add the key used to sign the nginx > >> packages and repository to the apt program keyring. > >> Please download this > >> key from our web site, and add it to the apt > >> program keyring with the following command: > >>

> >> > > Yes, I see. It should be fixed. Thanks. > > > Fixed now. Thanks. > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 7-9, Austin, TX: > http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf From martijn at vmaurik.nl Thu Aug 25 09:47:15 2016 From: martijn at vmaurik.nl (martijn at vmaurik.nl) Date: Thu, 25 Aug 2016 09:47:15 +0000 Subject: nginx a NIGHTMARE for me In-Reply-To: References: <375deee60cd345db95c616f9b80582c7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi I don't know but I had never any issues with NGiNX together with PHP5-FPM. Better as asked show your output of nginx -V it helps to diagnose your issues. The modules which are loaded can also be causing your issues. In contradiction to what you claim, Apache2 together with mod-php5 cause many of my managed high trafic sites to fail. Apart from that its also usefull to check your /etc/nginx/nginx.conf You should also be able to install (or compile) the debug version of nginx which shows you more information about the process. 99.9% of my cases are my own f* up. Another note: if you dont like NGiNX don't use it, there is an reason you want to try it out so as said perfectly before don't blame the tools, rather see what is wrong and look for an fix. Cheers. August 25 2016 5:22 AM, "Anoop Alias" wrote: > A bad workman always blames his tools ! > > On Wed, Aug 24, 2016 at 11:24 PM, Amanat wrote: > >> i was using Apache from last 3 years. never faced a single problem. Few days >> ago i think to try Nginx. As i heard from mny people. Its very fast memory >> efficient webserver. Trust me guys. Nginx can never be memory efficient. 
>> Rather it consumes memory same like my mozilla. I never understand for what >> reason it creates a garbage of disk cache for every request. >> look at the pic >> >> http://i.imgur.com/pD7rEDe.png >> >> Though i like some features of Nginx like header modification and error >> 444. >> >> Apart from that Nginx is worst. >> >> My Server configuration: >> Architecture: x86_64 >> CPU op-mode(s): 32-bit, 64-bit >> Byte Order: Little Endian >> CPU(s): 8 >> On-line CPU(s) list: 0-7 >> Thread(s) per core: 2 >> Core(s) per socket: 4 >> Socket(s): 1 >> NUMA node(s): 1 >> Vendor ID: GenuineIntel >> CPU family: 6 >> Model: 58 >> Model name: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz >> Stepping: 9 >> CPU MHz: 1601.453 >> CPU max MHz: 3800.0000 >> CPU min MHz: 1600.0000 >> BogoMIPS: 6784.50 >> Virtualization: VT-x >> L1d cache: 32K >> L1i cache: 32K >> L2 cache: 256K >> L3 cache: 8192K >> NUMA node0 CPU(s): 0-7 >> >> I can use 2 Gb ram whole day with apache and with Nginx even 32 gb ram works >> only for 5 min. >> >> If any of you have any suggestion to tweak nginx config. Please tell me. >> >> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269159,269159#msg-269159 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Aug 25 11:54:05 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Aug 2016 14:54:05 +0300 Subject: NGINX SSL configuration In-Reply-To: <6e4e52f31a7c37b6a216be1922dab1a1.NginxMailingListEnglish@forum.nginx.org> References: <6e4e52f31a7c37b6a216be1922dab1a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160825115405.GC24741@mdounin.ru> Hello! On Thu, Aug 25, 2016 at 01:37:44AM -0400, henry_nginx_profile wrote: > hello,i am come from china. 
i use NGINX in a short period of time. i have > some confuse about NGINX's ssl_* directive. > i have two vhost conf file, the above is my configuration: > > a.conf: > > server { > listen 443 ssl; > server_name a.example.com; > ssl_protocols TLSv1.2; > ... > } > > b.conf { > listen 443 ssl; > server_name b.example.com; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ... > } > my problem: > i test these two web site use curl tools, "a.example.com" is using TLSv1.2 > protocol, this is ok, but when i testing "b.example.com" that only support > TLS1.2 too, it seems like b.conf 's ssl_protocols directive is not > effective, only a.conf's ssl_protools directive effective. The above configuration won't use different SSL protocols, as defined servers are pure virtual and protocol is selected before a server name is know. > my question: > 1.Dose ssl_protocols directive is only be parser once by NGINX? something > like NGINX read config file, that find out a.conf's ssl_protocols directive > and record it, the below ssl_protocol directive will be pass? No. > 2.if question 1 is yes, how can i write difference ssl_* directive in > multi vhost? If you want to use different SSL protocols you have to use different IP addresses for your SSL servers, and configure nginx to distinguish servers based on IP addresses (instead of using name-based virtual servers). Some information about the problem can be found in the "Configuring HTTPS servers" article here: http://nginx.org/en/docs/http/configuring_https_servers.html#name_based_https_servers -- Maxim Dounin http://nginx.org/ From agus.262 at gmail.com Thu Aug 25 12:00:02 2016 From: agus.262 at gmail.com (Agus) Date: Thu, 25 Aug 2016 09:00:02 -0300 Subject: nginx and ssl In-Reply-To: <7fd49421189a73983d65ab1a64d5d59c.NginxMailingListEnglish@forum.nginx.org> References: <7fd49421189a73983d65ab1a64d5d59c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I would say its not logical. 
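Maxim's suggestion of distinguishing the servers by IP address can be sketched like this, with each ssl_protocols policy bound to its own listen address (the addresses here are placeholders):

```nginx
server {
    listen 192.0.2.1:443 ssl;   # dedicated address for a.example.com
    server_name a.example.com;
    ssl_protocols TLSv1.2;
}

server {
    listen 192.0.2.2:443 ssl;   # second address, so the protocol
    server_name b.example.com;  # list can actually differ
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
}
```

Because the protocol is negotiated per listen socket before the server name is known, only separate addresses (or ports) let the two lists take effect independently.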
But you should share the conf to see whats wrong El ago 23, 2016 7:55 AM, "Trast0" escribi?: > Hello > > I'm new to the world nginx and I'm trying to set up a web server. I'm > probably making rookie mistakes, apologize in advance > > The problem I have is that the server translates https addresses, for > example https://webdomain at http://webdomain:443 > > is this logical?, what is my error? > > Thank you very much > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269112,269112#msg-269112 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglists at high5.alioth.uberspace.de Thu Aug 25 13:36:07 2016 From: mailinglists at high5.alioth.uberspace.de (No Spam) Date: Thu, 25 Aug 2016 15:36:07 +0200 Subject: IMAP /SMTP SSL Reverse PROXY without Authentification Message-ID: <20160825133607.GE6392@jens-ThinkPad-Edge-E145> An embedded message was scrubbed... From: J Subject: IMAP /SMTP SSL Reverse PROXY without Authentification Date: Thu, 25 Aug 2016 15:19:32 +0200 Size: 4339 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From emiel.mols at gmail.com Thu Aug 25 13:44:33 2016 From: emiel.mols at gmail.com (Emiel Mols) Date: Thu, 25 Aug 2016 13:44:33 +0000 Subject: keep-alive to backend + non-idempotent requests = race condition? Message-ID: Hey, I've been haunted by this for quite some time, seen it in different deployments, and think might make for some good ol' mailing list discussion. 
When - using keep-alive connections to a backend service (eg php, rails, python) - this backend needs to be updatable (it is not okay to have lingering workers for hours or days) - requests are often not idem-potent (can't repeat them) current deployments need to close the kept-alive connection from the backend-side, always opening up a race condition where nginx has just sent a request and the connection gets closed. This leaves nginx in limbo not knowing if the request has been executed and can be repeated. When using keep-alive connections the only reliable way of closing them is from the client-side (in this case: nginx). I would therefor expect either - a feature to signal nginx to close all connections to the backend after having deployed new backend code. - an upstream keepAliveIdleTimeout config value that guarantees that kept-alive connections are not left lingering indefinitely long. If nginx guarantees it closes idle connections after 5 seconds, we can be sure that 5s+max_request_time after a new backend is deployed all old workers are gone. - (variant on the previous) support for a http header from the backend to indicate such a timeout value. It's funny that this header kind-of already exists in the spec < https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#keep-alive >, but in practice is implemented by no-one. The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind implementing myself if someone versed in the architecture would give some pointers. Best regards, - Emiel BTW: a similar issue should exist between browsers and web servers. Since latency is a lot higher on these links, I can only assume it to happen a lot. -------------- next part -------------- An HTML attachment was scrubbed... 
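For reference, later nginx releases (1.15.3 and newer) added upstream-block directives close to Emiel's second option, letting nginx itself close idle kept-alive connections. A sketch, with the upstream name and values as placeholders:

```nginx
upstream backend {
    server 127.0.0.1:9000;
    keepalive 16;               # idle connections cached per worker
    keepalive_timeout 5s;       # nginx closes idle connections itself,
                                # bounding how long old workers linger
    keepalive_requests 1000;    # recycle a connection after N requests
}
```

With the idle timeout enforced on the nginx side, the race of the backend closing a connection just as a non-idempotent request is sent becomes far less likely.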
URL: From nginx-forum at forum.nginx.org Thu Aug 25 19:24:41 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 25 Aug 2016 15:24:41 -0400 Subject: Nginx overwrite existing cookies with add_header Message-ID: <9ce5fdd3df961127c881cd46750f9db8.NginxMailingListEnglish@forum.nginx.org> So i am using Nginx to set a header now my PHP app sets this header too but it sets the cookie with a domain of ".networkflare.com" Nginx keeps setting it as "www.networkflare.com" i need to overwrite the cookie not create a new one. I have tried the following : add_header Set-Cookie "logged_in=1;Path=/;Max-Age=315360000"; That created a new cookie with a domain of "www.networkflare.com" instead of overwriting the previous cookie with a domain of ".networkflare.com" This also has the same outcome as above. add_header Set-Cookie "logged_in=1;Domain=$host;Path=/;Max-Age=315360000"; Does anyone know how you can overwrite the existing cookie without having to specify the add_header like this. add_header Set-Cookie "logged_in=1;Domain=.networkflare.com;Path=/;Max-Age=315360000"; The reason i can't use this method is because on the Nginx server there are multiple hosts, it is why i try to use $host but would need to remove the www at the start for it to work. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269189,269189#msg-269189 From nginx-forum at forum.nginx.org Thu Aug 25 20:06:08 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 25 Aug 2016 16:06:08 -0400 Subject: Nginx overwrite existing cookies with add_header In-Reply-To: <9ce5fdd3df961127c881cd46750f9db8.NginxMailingListEnglish@forum.nginx.org> References: <9ce5fdd3df961127c881cd46750f9db8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1eb3fa1c732153e52b73515306eab748.NginxMailingListEnglish@forum.nginx.org> I sorted out this problem now. Here was my solution.
if ($host ~* www(.*)) { set $host_without_www $1; } add_header Set-Cookie "logged_in=1;Domain=$host_without_www;Path=/;Max-Age=315360000"; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269189,269190#msg-269190 From agentzh at gmail.com Thu Aug 25 22:04:40 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 25 Aug 2016 15:04:40 -0700 Subject: [ANN] OpenResty 1.11.2.1 released Message-ID: Hi folks, I am excited to announce the new formal release, 1.11.2.1, of the OpenResty web platform based on NGINX and LuaJIT: https://openresty.org/en/download.html Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. Also, we now provide official pre-built packages and repositories for CentOS, RHEL, and Fedora. Support for other Linux distributions will come in the near future (contributors welcome!): https://openresty.org/en/linux-packages.html https://openresty.org/en/rpm-packages.html The highlights of this release are: 1. New NGINX 1.11.2 core. 2. The official Lua 5.1 reference manual and LuaJIT 2 documentation have been added to the restydoc documentation index. 3. New ssl_session_fetch_by_lua* and ssl_session_store_by_lua* directives for doing (distributed) nonblocking caching of SSL session data by SSL session IDs for downstream SSL handshakes performed for https requests: https://github.com/openresty/lua-nginx-module#ssl_session_fetch_by_lua_block 4. New Lua API for manipulating user-defined shm-based queues or lists in lua_shared_dict: https://github.com/openresty/lua-nginx-module#ngxshareddictlpush 5. New Lua API in ngx.balancer for setting per-session timeout threshold values for upstream communications in the context of balancer_by_lua*: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#set_timeouts 6. Better HTTP/2 support in ngx_lua and other OpenResty components. 
The complete change log since the last (formal) release, 1.9.15.1: * upgraded the Nginx core to 1.11.2. * see the changes here: http://nginx.org/en/CHANGES * feature: bundled the sess_set_get_cb_yield patch for OpenSSL to support the ssl_session_fetch_by_lua* directives of ngx_lua. * win32: we now use pcre 8.39 and openssl 1.0.2h in our official build. * feature: applied the "ssl_pending_session.patch" to the nginx core to support the ssl_session_fetch_by_lua* and ssl_session_store_by_lua* in ngx_lua. * feature: added "/site/lualib/" to the default Lua module search paths used by OpenResty. This location is for 3rd-party Lua modules so that the users will not pollute the "/lualib/" directory with non-standard Lua module files. * feature: now we create the "/bin/openresty" symlink which points to "/nginx/sbin/nginx" to avoid polluting the "PATH" environment with the name "nginx". * feature: added the "upstream_timeout_fields" patch to the nginx core to allow efficient per-request connect/send/read timeout settings for individual upstream requests and retries. * feature: added the official LuaJIT documentation from LuaJIT 2.1 to our "restydoc" indexes. * feature: added the Lua 5.1 reference manual from lua 5.1.5 to our restydoc indexes. * bugfix: special characters like spaces in nginx configure option values (like "--with-pcre-opt" and "--with-openssl-opt") were not passed correctly. thanks Andreas Lubbe for the report. * change: now we use our own version of default "index.html" and "50x.html" pages. * upgraded ngx_lua to 0.10.6. * feature: added new shdict methods: lpush, lpop, rpush, rpop, llen for manipulating list-typed values. these methods can be used in the same way as the redis commands of the same names. Essentially we now have shared memory based queues now. each queue is indexed by a key. thanks Dejiang Zhu for the patch. 
* feature: implemented ssl_session_fetch_by_lua* and ssl_session_store_by_lua* configuration directives for doing (distributed) caching of SSL sessions (via SSL session IDs) for downstream connections. thanks Zi Lin for the patches.
* feature: added a pure C API for setting upstream request connect/send/read timeouts in balancer_by_lua* on a per-session basis. thanks Jianhao Dai for the original patch.
* feature: ssl: add FFI functions to parse certs and private keys to cdata. With the current FFI functions the certificate chain and the private key are parsed from DER every time they are set into the SSL state. Now we can cache the parsed certs and private keys as cdata objects directly. These new functions make it possible to avoid the DER -> OpenSSL parsing. Thanks Alessandro Ghedini for the patch.
* feature: shdict:incr(): added the optional "init" argument to allow initializing nonexistent keys with an initial value.
* feature: allow tcpsock:setkeepalive() to receive nil args. thanks Thibault Charbonnier for the patch.
* bugfix: *_by_lua_file: did not support absolute file paths on non-UNIX systems like Win32. thanks Someguynamedpie for the report and the original patch.
* bugfix: fake connections did not carry a proper connection number. thanks Piotr Sikora for the patch.
* bugfix: "lua_check_client_abort on" broke HTTP/2 requests.
* bugfix: "ngx_http_lua_ffi_ssl_create_ocsp_request": we did not clear the openssl stack errors in the right place.
* bugfix: ngx.sha1_bin() was always disabled with nginx 1.11.2+ due to incompatible changes in those nginx cores. thanks manwe for the report.
* bugfix: segfaults might happen when calling ngx.log() in ssl_certificate_by_lua* when error_log was configured with syslog. thanks Jonathan Serafini and Greg Karékinian for the report.
* bugfix: fixed a typo in the error handling of the "SSL_get_ex_new_index()" call for our ssl ctx index. thanks Jie Chen for the report.
* bugfix: when the nginx core does not properly initialize "r->headers_in.headers" (due to 400 bad requests and the like), ngx.req.set_header() and ngx.req.clear_header() might lead to crashes. thanks Marcin Teodorczyk for the report.
* bugfix: fixed crashes in ngx.req.raw_header() for HTTP/2 requests. now we always throw a Lua exception in ngx.req.raw_header() when it is called in HTTP/2 requests.
* bugfix: specifying the C macro "NGX_LUA_NO_FFI_API" broke the build. thanks jsopenrb for the report.
* doc: ngx.worker.count() is available in the init_worker_by_lua* context.
* doc: documented that ngx.req.raw_header() does not work in HTTP/2 requests.
* doc: typo fixes from Otto Kekäläinen and Nick Galbreath.
* upgraded lua-resty-core to 0.1.8.
* updated the "resty.core.shdict" Lua module to reflect the recent addition of list-typed shdict values in ngx_lua.
* feature: shdict:incr(): added the optional "init" argument to allow initializing nonexistent keys with an initial value.
* feature: added the ngx.ssl.session module for the contexts ssl_session_fetch_by_lua* and ssl_session_store_by_lua*. thanks Zi Lin for the patches.
* feature: ngx.balancer: added the new API function set_timeouts() for setting per-session connect/send/read timeouts for the current upstream request and subsequent retries. thanks Jianhao Dai for the patch.
* feature: ngx.ssl: add new API functions parse_pem_cert(), parse_pem_priv_key(), set_cert(), and set_priv_key(). thanks Alessandro Ghedini for the patch.
* upgraded lua-resty-dns to 0.17.
* feature: now we support parsing answer records in all the answer sections ("AN", "NS", and "AR"). thanks Zekai Zheng for the patch.
* optimize: commented out 3 lines of useless Lua code in "parse_response()".
* upgraded lua-resty-redis to 0.25.
* feature: now this module automatically generates Lua methods for *any* Redis commands the caller attempts to use. The lazily generated Lua methods are cached in the Lua module table for faster subsequent uses.
In theory, any Redis commands in existing Redis or even future Redis servers can work out of the box. thanks spacewander for the patch.

* upgraded ngx_lua_upstream to 0.06.
* feature: added upstream.current_upstream_name() to return the proxy target used in the current request. thanks Justin Li for the patch.
* minor Lua table initialization optimizations from Scott Francis.
* upgraded resty-cli to 0.13.
* bugfix: restydoc: pod2man from older perl versions (like 5.8) does not support the "-u" option. we should be smarter here.
* bugfix: when resty/restydoc/restydoc-index were invoked through symlinks, they might fail to locate the nginx executable of openresty.
* bugfix: POD errors might get displayed in pod2man with older versions of perl (like perl 5.20.2). thanks Dominic for the patch.
* bugfix: pod2man might abort with a "Can't open file" error with perl 5.24+.
* bugfix: restydoc-index: improved the section name normalization for the documentation indexes.
* upgraded ngx_echo to 0.60.
* bugfix: fixed compilation failures when specifying the C compiler option "-DDDEBUG=2". thanks amdei for the report.
* bugfix: fixed crashes in $echo_client_request_headers for HTTP/2 requests. thanks dilyanpalauzov for the report. Now $echo_client_request_headers always evaluates to an empty value (not found) in HTTP/2 requests.
* doc: make it clearer when to use the "--" form.
* upgraded ngx_headers_more to 0.31.
* bugfix: when the nginx core does not properly initialize "r->headers_in.headers" (due to 400 bad requests and the like), more_set_input_headers might lead to crashes. thanks Marcin Teodorczyk for the report.
* bugfix: fixed a typo in an error message. thanks Albert Strasheim for the patch.
* upgraded ngx_set_misc to 0.31.
* bugfix: the set_sha1 directive is always disabled when working with nginx 1.11.2+ due to recent changes in the new nginx cores.
* upgraded ngx_encrypted_session to 0.06.
* doc: we do require ngx_http_ssl_module to work properly.
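As a quick aside on the list-typed shdict values mentioned in the change log above, here is a minimal sketch of how the new queue API might be wired up; the shared dict name, port, and URIs are invented for illustration:

```nginx
# sketch only: exercising the new shdict list methods as a FIFO queue
http {
    lua_shared_dict my_queue 10m;    # hypothetical dict name

    server {
        listen 8080;

        location = /push {
            content_by_lua_block {
                -- rpush appends a value under the key "jobs" and
                -- returns the resulting list length (or nil, err)
                local len, err = ngx.shared.my_queue:rpush("jobs", ngx.var.arg_v)
                ngx.say(len or err)
            }
        }

        location = /pop {
            content_by_lua_block {
                -- lpop removes and returns the oldest element (FIFO order)
                ngx.say(ngx.shared.my_queue:lpop("jobs") or "empty")
            }
        }
    }
}
```

Pushing to /push?v=x and then fetching /pop should hand elements back in insertion order, which is what makes these methods usable as shared-memory queues across worker processes.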
The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/en/changelog-1011002.html OpenResty is a full-fledged web platform built by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org/ Have fun! -agentzh From steve at greengecko.co.nz Fri Aug 26 01:31:11 2016 From: steve at greengecko.co.nz (steve) Date: Fri, 26 Aug 2016 13:31:11 +1200 Subject: nginx-http-concat module Message-ID: <57BF9BDF.6040307@greengecko.co.nz> I know this is a bit off topic, but has anyone got this module to work? I have added

concat_types text/css application/x-javascript application/javascript;

to nginx.conf, scope http, and added

location /css/ {
    concat on;
    concat_max_files 30;
}

to the server config, and the code contains

I've also tried every variation of ?? and /css/ that I can think of, all to no avail. The files are in /css as seen from the docroot. Pointers gratefully received. Total hair loss imminent! Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Fri Aug 26 01:52:13 2016 From: steve at greengecko.co.nz (steve) Date: Fri, 26 Aug 2016 13:52:13 +1200 Subject: nginx-http-concat module In-Reply-To: <57BF9BDF.6040307@greengecko.co.nz> References: <57BF9BDF.6040307@greengecko.co.nz> Message-ID: <57BFA0CD.9000303@greengecko.co.nz> Gah... On 08/26/2016 01:31 PM, steve wrote: > I know this is a bit off topic, but has anyone got this module to work?
>
> I have added
>
> concat_types text/css application/x-javascript application/javascript;
>
> to nginx.conf, scope http
>
> added
>
>
> location /css/ {
> concat on;
> concat_max_files 30;
> }
>
> to the server config and the code contains
>
> href="/css/??file1.css,file2.css" />
>
> I've also tried every variation of ?? and /css/ that I can think of,
> all to no avail.
>
> files are in /css as seen from the docroot.
>
> Pointers gratefully received. Total hair loss imminent!
>
> Cheers,
>
> Steve

answering my own question. If any of the files are missing, it returns a 404. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailforpps at gmail.com Fri Aug 26 05:19:59 2016 From: mailforpps at gmail.com (phani prasad) Date: Fri, 26 Aug 2016 10:49:59 +0530 Subject: how to completely disable request body buffering Message-ID: Hi all, for one of our products we have chosen nginx as our webserver, and we are using fastCGI to talk to the upstream (application) layer. We have a use case where the client sends a huge payload, typically in MB, and nginx is quick enough to read and buffer all the data, whereas our upstream server is quite slow in consuming it. This results in a timeout on the client side, since the upstream can't respond with a status code unless it finishes reading the complete payload. Additional information: the request is chunked. To address this we have tried several options, but nothing has worked out so far.

1. We turned off fastcgi_request_buffering, setting it to off. This only stops nginx from buffering the request payload into a temp file before forwarding it to the application/upstream. It still uses some buffer chains to write to the upstream.

2. Setting client_body_buffer_size.
This only checks whether the request body size is larger than client_body_buffer_size; if so, it writes all or part of the body to a file. How does this work in the case of a chunked request body? What is the max chunk size that nginx can allocate? What if the upstream is slow in consuming the data? Does nginx still try to writev the chain of buffers to the pipe? How many chain buffers would nginx maintain at most to buffer the request body, and is that configurable?

What other options can we try out? We want to completely disable request body buffering and stream the data just as it arrives from the client, and if the upstream is busy, *nginx should be able to tune itself, in the sense that it should wait before reading further data from the client until the upstream is ready to be written to.*

Any help is much appreciated, as this is blocking one of our product certifications. Thanks Prasad. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglists at high5.alioth.uberspace.de Fri Aug 26 05:49:01 2016 From: mailinglists at high5.alioth.uberspace.de (No Spam) Date: Fri, 26 Aug 2016 07:49:01 +0200 Subject: IMAP /SMTP SSL Reverse PROXY without Authentification Message-ID: <20160826054901.GB29056@jens-ThinkPad-Edge-E145> Hi Guys, maybe what I am going to ask is a dumb question; if so, please direct me at the resources needed. I got the project to set up an IMAP proxy for our institution, so as to not expose the Exchange server to the bad bad internet. I suggested using a TCP proxy, but they want to have different certs (one for the proxy facing the internet and one for the Exchange on the intranet), so this is not exactly possible with a dumb TCP proxy.
I would have to terminate the first SSL connection on the proxy but still forward everyone to the server, so that the authentication takes place there; I don't get how I would do that, nor do I find resources (e.g. blog posts) of people doing a similar thing. I would also need to do the same thing for SSL SMTP, but that is partly secondary and probably similar anyway. Sorry for the bad English, Greets J -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From mailinglists at high5.alioth.uberspace.de Fri Aug 26 05:55:39 2016 From: mailinglists at high5.alioth.uberspace.de (No Spam) Date: Fri, 26 Aug 2016 07:55:39 +0200 Subject: IMAP /SMTP SSL Reverse PROXY without Authentification In-Reply-To: <20160826054901.GB29056@jens-ThinkPad-Edge-E145> References: <20160826054901.GB29056@jens-ThinkPad-Edge-E145> Message-ID: <20160826055539.GC29056@jens-ThinkPad-Edge-E145> Sorry for the Double Posting -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Fri Aug 26 06:02:05 2016 From: nginx-forum at forum.nginx.org (xianliang) Date: Fri, 26 Aug 2016 02:02:05 -0400 Subject: add coroutines for nginx Message-ID: <4982847573423056bf31eaabcbe6c588.NginxMailingListEnglish@forum.nginx.org> Reading a file is blocking. Can nginx use coroutines to implement file reads and writes, i.e. writing log files and reading cache files?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269199,269199#msg-269199 From nginx-forum at forum.nginx.org Fri Aug 26 08:52:48 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Fri, 26 Aug 2016 04:52:48 -0400 Subject: nginx-http-concat module In-Reply-To: <57BFA0CD.9000303@greengecko.co.nz> References: <57BFA0CD.9000303@greengecko.co.nz> Message-ID: GreenGecko Wrote: ------------------------------------------------------- > answering my own question. If any of the files are missing, it returns > a > 404. Hello, I've never used nginx-http-concat, but you can disable this behavior with "concat_ignore_file_error on". Found in the doc: https://github.com/alibaba/nginx-http-concat Best Regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269193,269204#msg-269204 From nginx-forum at forum.nginx.org Fri Aug 26 15:01:05 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 26 Aug 2016 11:01:05 -0400 Subject: Nginx | fastcgi_cache_valid dynamic based on request Message-ID: <22b57dec1a3d7aabe7d8a65c5460f033.NginxMailingListEnglish@forum.nginx.org> So I have been trying to make the fastcgi_cache_valid value based on the user request.

if ($request_uri ~ "/url1" ) {
    set $cachetime "any 5s";
}
if ($request_uri ~ "/url2" ) {
    set $cachetime "any 5m";
}

These did not work, because it turns out you're not allowed to have a dynamic variable within the fastcgi_cache_valid command.

fastcgi_cache_valid $cachetime;
fastcgi_cache_valid "$cachetime";

They give this error:

invalid time value "$cachetime"

So instead of the above I tried this:

if ($request_uri ~ "/url1" ) {
    set $cachetime "5";
}
if ($request_uri ~ "/url2" ) {
    set $cachetime "300";
}
add_header "X-Accel-Expires" $cachetime;

fastcgi_cache_valid any 60s;

But on url1 I get X-Cache: HIT when it should have expired after 5 seconds. Is what I am trying to achieve even possible? From my understanding, X-Accel-Expires might just be for proxy_cache requests.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269212,269212#msg-269212 From reallfqq-nginx at yahoo.fr Fri Aug 26 15:07:06 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 26 Aug 2016 17:07:06 +0200 Subject: keep-alive to backend + non-idempotent requests = race condition? In-Reply-To: References: Message-ID: What about marking the upstream servers you want to update as 'down' in their pool, reloading the configuration (HUP signal, gracefully shutting down old workers), and waiting for the links to those servers to be clear of any activity? Then upgrade and reintegrate the updated servers in the pool (while disabling the others/old version, if needed). This kind of manual rotation is trivial. --- *B. R.* On Thu, Aug 25, 2016 at 3:44 PM, Emiel Mols wrote: > Hey, > > I've been haunted by this for quite some time, seen it in different > deployments, and think it might make for some good ol' mailing list discussion. > > When > > - using keep-alive connections to a backend service (eg php, rails, python) > - this backend needs to be updatable (it is not okay to have lingering > workers for hours or days) > - requests are often not idempotent (can't repeat them) > > current deployments need to close the kept-alive connection from the > backend-side, always opening up a race condition where nginx has just sent > a request and the connection gets closed. This leaves nginx in limbo, not > knowing if the request has been executed and can be repeated. > > When using keep-alive connections, the only reliable way of closing them is > from the client-side (in this case: nginx). I would therefore expect either > > - a feature to signal nginx to close all connections to the backend after > having deployed new backend code. > > - an upstream keepAliveIdleTimeout config value that guarantees that > kept-alive connections are not left lingering indefinitely long.
If nginx > guarantees it closes idle connections after 5 seconds, we can be sure that > 5s+max_request_time after a new backend is deployed all old workers are > gone. > > - (variant on the previous) support for a http header from the backend to > indicate such a timeout value. It's funny that this header kind-of already > exists in the spec timeout-01.html#keep-alive>, but in practice is implemented by no one. > > The 2nd and/or 3rd options seem most elegant to me. I wouldn't mind > implementing it myself if someone versed in the architecture would give some > pointers. > > Best regards, > > - Emiel > BTW: a similar issue should exist between browsers and web servers. Since > latency is a lot higher on these links, I can only assume it happens a > lot. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Aug 26 15:16:00 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 26 Aug 2016 17:16:00 +0200 Subject: how to completely disable request body buffering In-Reply-To: References: Message-ID: fastcgi_request_buffering does deactivate request buffering, from what I understand from the docs. client_body_buffer_size is said to be useful/used only when the previous directive is activated. From what I read, it seems your configuration attempts failed to load or to be activated where needed. Could you provide us with a minimal loaded configuration reproducing the problem (ie buffering while you configured it not to do so), through the use of nginx -V ? --- *B. R.* On Fri, Aug 26, 2016 at 7:19 AM, phani prasad wrote: > Hi all, > > for one of our products we have chosen nginx as our webserver and using > fastCGI to talk to upstream(application) layer.
We have a usecase where in > the client sends huge payload typically in MB and nginx is quick enough > reading all the data and buffering it . Whereas our upstream server is > quite slower in consuming the data. This resulted in timeout on client > side since the upstream cant respond with status code until unless it > finish reading the complete payload. Additional information is, the request > is chunked. > > To address this we have tried several options but nothing worked out so > far. > > 1. we turned off fastcgi_request_buffering setting it to off. > > This would only allow nginx not to buffer request payload into a temp file > before forwarding it to application/upstream. But it still use some buffer > chains to write to upstream. > > 2. setting client_body_buffer_size . > > this would only check if request body size is larger than > client_body_buffer_size, then it writes whole or part of body to file. > How does this work in case of chunked request body? > What is the max chunk size that nginx can allocate? > What if upstream is slow in consuming the data ? Does nginx still try to > writev chain of buffers to the pipe? > How many max chain buffers nginx would maintain to buffer request body? If > so is it configurable? > > > What other options that we can try out? we want to completely disable > request body buffering and would want to stream the data as it just arrives > from client. and if upstream is busy , *nginx should be able to tune > itself in the sense it should wait reading further data from client until > upstream is ready to be written.* > > Any help is much appreciated as this is blocking one of our product > certifications. > > > Thanks > Prasad. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Aug 26 15:36:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Aug 2016 18:36:23 +0300 Subject: Nginx | fastcgi_cache_valid dynamic based on request In-Reply-To: <22b57dec1a3d7aabe7d8a65c5460f033.NginxMailingListEnglish@forum.nginx.org> References: <22b57dec1a3d7aabe7d8a65c5460f033.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160826153623.GI24741@mdounin.ru> Hello! On Fri, Aug 26, 2016 at 11:01:05AM -0400, c0nw0nk wrote: > So I have been trying to make the fastcgi_cache_valid value based on user > request. > > if ($request_uri ~ "/url1" ) { > set $cachetime "any 5s"; > } > if ($request_uri ~ "/url2" ) { > set $cachetime "any 5m"; > } > > These did not work because it turns out your not allowed to have a dynamic > variable within the fastcgi_cach_valid command. > fastcgi_cache_valid $cachetime; Exactly, the fastcgi_cache_valid doesn't support variables and that's why it won't work. > So instead of the above i tried this instead. > > if ($request_uri ~ "/url1" ) { > set $cachetime "5"; > } > if ($request_uri ~ "/url2" ) { > set $cachetime "300"; > } > add_header "X-Accel-Expires" $cachetime; > > fastcgi_cache_valid any 60s; > > But on url1 i get X-Cache: HIT when it should of expired after 5 seconds. For X-Accel-Expires to work, it must be from an upstream server. That is, using add_header will work if done on your backend, but not in the nginx caching configuration. > Is what i am trying to achieve even possible ? From my understand the > X-Accel-Expires might just be for proxy_cache requests. Use separate locations instead, i.e.: location /url1 { fastcgi_cache_valid any 5s; ... } location /url2 { fastcgi_cache_valid any 5m; ... } Locations were specifically designed to make distinct URI-based configurations easy and efficient. 
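For readers skimming the archive, the separate-location approach described above might look like this in a fuller configuration; the cache zone name, socket path, and cache key below are illustrative assumptions, not taken from the thread:

```nginx
# sketch: distinct cache lifetimes per URI prefix via separate locations
fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=PHPCACHE:10m;

server {
    listen 80;

    location /url1 {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;   # assumed PHP-FPM socket
        fastcgi_cache PHPCACHE;
        fastcgi_cache_key $scheme$request_method$host$request_uri;
        fastcgi_cache_valid any 5s;            # short-lived entries
    }

    location /url2 {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_cache PHPCACHE;
        fastcgi_cache_key $scheme$request_method$host$request_uri;
        fastcgi_cache_valid any 5m;            # longer-lived entries
    }
}
```

Because location matching happens once per request, each prefix gets its own fastcgi_cache_valid without any variables or if blocks being involved.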
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Aug 26 18:17:08 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 26 Aug 2016 14:17:08 -0400 Subject: Nginx | fastcgi_cache_valid dynamic based on request In-Reply-To: <20160826153623.GI24741@mdounin.ru> References: <20160826153623.GI24741@mdounin.ru> Message-ID: <09e4366d10999c7e0c56d90e7cee1bcc.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Aug 26, 2016 at 11:01:05AM -0400, c0nw0nk wrote: > > > So I have been trying to make the fastcgi_cache_valid value based on > user > > request. > > > > if ($request_uri ~ "/url1" ) { > > set $cachetime "any 5s"; > > } > > if ($request_uri ~ "/url2" ) { > > set $cachetime "any 5m"; > > } > > > > These did not work because it turns out your not allowed to have a > dynamic > > variable within the fastcgi_cach_valid command. > > fastcgi_cache_valid $cachetime; > > Exactly, the fastcgi_cache_valid doesn't support variables and > that's why it won't work. > > > So instead of the above i tried this instead. > > > > if ($request_uri ~ "/url1" ) { > > set $cachetime "5"; > > } > > if ($request_uri ~ "/url2" ) { > > set $cachetime "300"; > > } > > add_header "X-Accel-Expires" $cachetime; > > > > fastcgi_cache_valid any 60s; > > > > But on url1 i get X-Cache: HIT when it should of expired after 5 > seconds. > > For X-Accel-Expires to work, it must be from an upstream server. > That is, using add_header will work if done on your backend, but > not in the nginx caching configuration. > > > Is what i am trying to achieve even possible ? From my understand > the > > X-Accel-Expires might just be for proxy_cache requests. > > Use separate locations instead, i.e.: > > location /url1 { > fastcgi_cache_valid any 5s; > ... > } > > location /url2 { > fastcgi_cache_valid any 5m; > ... 
> }
>
> Locations were specifically designed to make distinct URI-based
> configurations easy and efficient.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Thanks for the information. I don't use proxy_pass, and the PHP setup is as shown here: https://docs.joomla.org/Nginx But based on what you said, I think I should be able to add that same "X-Accel-Expires" header to my PHP output itself, and that will achieve it. It might even be better, since I can define that header on specific pages. I will give it a try, adding it to my PHP script, and post back whether it works or not. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269212,269218#msg-269218 From nginx-forum at forum.nginx.org Fri Aug 26 18:43:04 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 26 Aug 2016 14:43:04 -0400 Subject: Nginx | fastcgi_cache_valid dynamic based on request In-Reply-To: <09e4366d10999c7e0c56d90e7cee1bcc.NginxMailingListEnglish@forum.nginx.org> References: <20160826153623.GI24741@mdounin.ru> <09e4366d10999c7e0c56d90e7cee1bcc.NginxMailingListEnglish@forum.nginx.org> Message-ID: It works: by adding an X-Accel-Expires header to my PHP output, the fastcgi_cache will follow it, which also means that if I use proxy_cache it would follow it too :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269212,269219#msg-269219 From nginx-forum at forum.nginx.org Fri Aug 26 18:50:15 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Fri, 26 Aug 2016 14:50:15 -0400 Subject: internal location keepalive_requests issue Message-ID: <03d332507b688e5d89b668075d06bc41.NginxMailingListEnglish@forum.nginx.org>

location /hls {
    error_page 404 = @hls;
    keepalive_requests 1000;
}

location @hls {
    # Serve HLS fragments
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
    root /tmp;
    add_header Cache-Control no-cache;
    keepalive_requests 1000;
}
keepalive_requests must be large enough in these two locations; meanwhile, if keepalive_requests is set to 0 or 1 in /hls or @hls, keepalive_requests will not work in the other location. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269220,269220#msg-269220 From mdounin at mdounin.ru Fri Aug 26 20:09:51 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Aug 2016 23:09:51 +0300 Subject: internal location keepalive_requests issue In-Reply-To: <03d332507b688e5d89b668075d06bc41.NginxMailingListEnglish@forum.nginx.org> References: <03d332507b688e5d89b668075d06bc41.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160826200951.GL24741@mdounin.ru> Hello! On Fri, Aug 26, 2016 at 02:50:15PM -0400, crasyangel wrote:

> location /hls {
> error_page 404 = @hls;
> keepalive_requests 1000;
> }
>
> location @hls {
> # Serve HLS fragments
> types {
> application/vnd.apple.mpegurl m3u8;
> video/mp2t ts;
> }
> root /tmp;
> add_header Cache-Control no-cache;
> keepalive_requests 1000;
> }
>
> keepalive_requests must be large enough in these two locations; meanwhile,
> if keepalive_requests is set to 0 or 1 in /hls or @hls, keepalive_requests
> will not work in the other location.

This is expected behaviour. Keepalive is switched off for a request once a location selected does not allow keepalive for the request in question (due to keepalive_requests, keepalive_timeout, or keepalive_disable). Even if request processing is later moved to a different location with less strict settings, keepalive remains disabled.
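Given that behaviour — keepalive is disabled as soon as any location matched during processing forbids it — one way to keep the two locations consistent is to set the keepalive directives once at server level, so that both /hls and @hls inherit the same value. A sketch based on the config quoted above (the listen port is an assumption):

```nginx
server {
    listen 80;
    keepalive_requests 1000;    # inherited by /hls and @hls alike

    location /hls {
        error_page 404 = @hls;
    }

    location @hls {
        # Serve HLS fragments
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        root /tmp;
        add_header Cache-Control no-cache;
    }
}
```

With a single server-level setting there is no longer a way for the two locations to disagree, which sidesteps the "first location disables keepalive for the rest of the request" issue entirely.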
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sat Aug 27 03:15:38 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Fri, 26 Aug 2016 23:15:38 -0400 Subject: internal location keepalive_requests issue In-Reply-To: <20160826200951.GL24741@mdounin.ru> References: <20160826200951.GL24741@mdounin.ru> Message-ID: <4de55096393c22434649291da26e6cb5.NginxMailingListEnglish@forum.nginx.org>

if (r->keepalive) {
    if (clcf->keepalive_timeout == 0) {
        r->keepalive = 0;

    } else if (r->connection->requests >= clcf->keepalive_requests) {
        r->keepalive = 0;

    } else if (r->headers_in.msie6
               && r->method == NGX_HTTP_POST
               && (clcf->keepalive_disable
                   & NGX_HTTP_KEEPALIVE_DISABLE_MSIE6))
    {
        /*
         * MSIE may wait for some time if an response for
         * a POST request was sent over a keepalive connection
         */
        r->keepalive = 0;

    } else if (r->headers_in.safari
               && (clcf->keepalive_disable
                   & NGX_HTTP_KEEPALIVE_DISABLE_SAFARI))
    {
        /*
         * Safari may send a POST request to a closed keepalive
         * connection and may stall for some time, see
         * https://bugs.webkit.org/show_bug.cgi?id=5760
         */
        r->keepalive = 0;
    }
}

Note that r->keepalive only affects the Connection field in the response header and whether the keepalive timer is set when finalizing the request. Why is this code block placed in ngx_http_update_location_config? Would it be better to place it where the keepalive timer is set? And maybe nginx sends "Connection: keep-alive" but then closes the connection in the present 1.10.1, and this would be nothing serious. So why is it the expected behaviour? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269220,269224#msg-269224 From ihsan at dogan.ch Sat Aug 27 08:38:27 2016 From: ihsan at dogan.ch (İhsan Doğan) Date: Sat, 27 Aug 2016 10:38:27 +0200 Subject: Location Alias not working Message-ID: Hi, I've defined a location alias in my nginx.conf:

server {
    listen 80;
    server_name example.org www.example.org;
    [...]
    location /foo/ {
        alias /var/www/foo/;
    }
}

Even though the directory /var/www/foo exists, Nginx returns a 404.
As I understand it, the configuration is right, but I can't see what's wrong. Any ideas? Ihsan -- ihsan at dogan.ch http://blog.dogan.ch/ From nginx-forum at forum.nginx.org Sat Aug 27 13:14:26 2016 From: nginx-forum at forum.nginx.org (vasilevich) Date: Sat, 27 Aug 2016 09:14:26 -0400 Subject: Pretty printer for the Nginx config? In-Reply-To: References: Message-ID: <4abf88e79d76c9615a1951cf73e1b7d7.NginxMailingListEnglish@forum.nginx.org> Well, your formatter doesn't work anymore; here is a better one: https://nginxformatter.com And if you want to run it locally and/or view the source: https://github.com/vasilevich/nginxbeautifier It is available as an npm package. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,250211,269228#msg-269228 From reallfqq-nginx at yahoo.fr Sat Aug 27 13:25:02 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 27 Aug 2016 15:25:02 +0200 Subject: No HTTPS on nginx.org by default In-Reply-To: <4c61a298-69a5-0478-b790-5310ce89ffdd@nginx.com> References: <2626990.y0zAfJzpm4@vbart-workstation> <947f46c8-b02a-b063-03b7-cc369e0da8b9@nginx.com> <09744042-0580-1931-d315-4c805b3e0516@nginx.com> <4c61a298-69a5-0478-b790-5310ce89ffdd@nginx.com> Message-ID: No one is, nor does anyone have to be. Maybe less peremptory, abrupt answers the next time someone points out a potential problem, and no hard words about despotism when views are shared, might help? :o) Thanks for having taken the necessary time on this. Keep up the good work! No hard feelings. --- *B. R.* On Thu, Aug 25, 2016 at 10:44 AM, Maxim Konovalov wrote: > On 8/24/16 10:59 PM, B.R. wrote: > > HTTPS was supported, but internal links were systematically served > > over HTTP. > > Right -- this happens because for a long time nginx.org was HTTP only. > I agree that there are still some leftovers that should be fixed. > > I am sorry that we are not perfect. > > > Without considering any religion, this problem is now fixed.
> > > > As per your political decision on serving content (un)encrypted, it > > is /in fine/ your choice and it has been noted. > > Power users already knew about HTTPS anyway. > > > > Thanks Maxim. > > --- > > *B. R.* > > > > On Wed, Aug 24, 2016 at 1:58 PM, Maxim Konovalov > > wrote: > > > > On 8/22/16 8:30 PM, Maxim Konovalov wrote: > > > On 8/22/16 8:23 PM, Richard Stanway wrote: > > >> See https://nginx.org/en/linux_packages.html#stable > > > > >> > > >> PGP key links are hard coded to http URLs: > > >> > > >>

> > >> For Debian/Ubuntu, in order to authenticate the nginx repository > > >> signature > > >> and to eliminate warnings about missing PGP key during > installation > > >> of the > > >> nginx package, it is necessary to add the key used to sign the > nginx > > >> packages and repository to the apt program keyring. > > >> Please download this > > >> key from our web site, and add it to the apt > > >> program keyring with the following command: > > >>

> > >> > > > Yes, I see. It should be fixed. Thanks. > > > > > Fixed now. Thanks. > > -- > > Maxim Konovalov > > Join us at nginx.conf, Sept. 7-9, Austin, TX: > > http://nginx.com/nginxconf > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Maxim Konovalov > Join us at nginx.conf, Sept. 7-9, Austin, TX: http://nginx.com/nginxconf > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Aug 27 19:02:16 2016 From: nginx-forum at forum.nginx.org (romkaltu) Date: Sat, 27 Aug 2016 15:02:16 -0400 Subject: Nginx real_ip module doesn't work in some conditions Message-ID: So I have an Nginx proxy and some servers running behind it. I need to know the real user's IP, not the proxy's, so I'm using the real_ip module. Everything is working as expected, but if I configure a vhost like subdomain.domain.com, the backend gets the Nginx proxy IP.
Here is my Nginx config sample set_real_ip_from 192.168.2.0/24; real_ip_header X-Forwarded-For; real_ip_recursive on; upstream srv1 { server 192.168.2.12:80; } server { listen 80; server_name dev.somedomain.com; location / { proxy_pass http://srv1; } } server { listen 80; server_name somedomain.com; location / { proxy_pass http://srv1; } } So if I go to somedomain.com backend receiving real IP, no problems here. But for dev.somedomain.com backend receiving proxy IP! And this is only shortened example, same situation with different domains and subdomains... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269232,269232#msg-269232 From francis at daoine.org Sat Aug 27 20:50:01 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 27 Aug 2016 21:50:01 +0100 Subject: Location Alias not working In-Reply-To: References: Message-ID: <20160827205001.GT12280@daoine.org> On Sat, Aug 27, 2016 at 10:38:27AM +0200, ?hsan?Do?an wrote: Hi there, > I've defined a location alias in my nginx.conf: > location /foo/ { > alias /var/www/foo/; > } > Even the directory /var/www/foo exists, Nginx is returns a 404. As I > understand, the configuration is right, but I can't see what's wrong. It works for me. What specific test request do you make? What file on the filesystem do you want nginx to return? What does your error.log say about the 404 response? (Is there another location{} block that handles your request, instead of the one you showed here?) Cheers, f -- Francis Daly francis at daoine.org From andrew.holway at otternetworks.de Sat Aug 27 21:18:04 2016 From: andrew.holway at otternetworks.de (Andrew Holway) Date: Sat, 27 Aug 2016 23:18:04 +0200 Subject: Location Alias not working In-Reply-To: <20160827205001.GT12280@daoine.org> References: <20160827205001.GT12280@daoine.org> Message-ID: Hi Francis, Can you post your full config pls? 
Thanks, Andrew On Sat, Aug 27, 2016 at 10:50 PM, Francis Daly wrote: > On Sat, Aug 27, 2016 at 10:38:27AM +0200, ?hsan Do?an wrote: > > Hi there, > > > I've defined a location alias in my nginx.conf: > > > location /foo/ { > > alias /var/www/foo/; > > } > > > Even the directory /var/www/foo exists, Nginx is returns a 404. As I > > understand, the configuration is right, but I can't see what's wrong. > > It works for me. > > What specific test request do you make? > > What file on the filesystem do you want nginx to return? > > What does your error.log say about the 404 response? > > (Is there another location{} block that handles your request, instead > of the one you showed here?) > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Otter Networks UG http://otternetworks.de fon: +49 30 54 88 5197 Gotenstra?e 17 10829 Berlin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Aug 27 21:31:12 2016 From: nginx-forum at forum.nginx.org (vasilevich) Date: Sat, 27 Aug 2016 17:31:12 -0400 Subject: Pretty printer for the Nginx config? 
In-Reply-To: <4abf88e79d76c9615a1951cf73e1b7d7.NginxMailingListEnglish@forum.nginx.org> References: <4abf88e79d76c9615a1951cf73e1b7d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0b8f5919ab214a5f3abe4f6c7cb40b83.NginxMailingListEnglish@forum.nginx.org> Oops, I meant https://nginxbeautifier.com Posted at Nginx Forum: https://forum.nginx.org/read.php?2,250211,269236#msg-269236 From francis at daoine.org Sat Aug 27 22:52:02 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 27 Aug 2016 23:52:02 +0100 Subject: Location Alias not working In-Reply-To: References: <20160827205001.GT12280@daoine.org> Message-ID: <20160827225202.GU12280@daoine.org> On Sat, Aug 27, 2016 at 11:18:04PM +0200, Andrew Holway wrote: Hi there, > Can you post your full config pls? == server { listen 8080; server_name x1; location /foo/ { alias /var/www/foo/; } } == curl -v -H Host:x1 http://127.0.0.1:8080/foo/a gives a 200 with the contents of the file /var/www/foo/a curl -v -H Host:x1 http://127.0.0.1:8080/foo/b gives a 404, error.log says open() "/var/www/foo/b" failed (2: No such file or directory), client: 127.0.0.1, server: x1, request: "GET /foo/b HTTP/1.1", host: "x1" (The events{} and surrounding http{} blocks exist in the config too.) f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Aug 27 23:05:02 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 28 Aug 2016 00:05:02 +0100 Subject: Nginx real_ip module doesn't work in some conditions In-Reply-To: References: Message-ID: <20160827230502.GV12280@daoine.org> On Sat, Aug 27, 2016 at 03:02:16PM -0400, romkaltu wrote: Hi there, > So I have an Nginx proxy and some servers running behind it. I need to know > the real user's IP, not the proxy's, so I'm using the real_ip module. Everything is working as > expected, but if I configure a vhost like subdomain.domain.com, the backend gets > the Nginx proxy IP. It seems to work for me.
I suppose it is worth making clear: the real_ip module can make some internal-to-nginx things think that the connection to nginx actually came from an address different from what it really was. Does that match what you want the module to do? As in: what, specifically, do you mean by "backend getting Nginx proxy IP"? The connection from nginx to the backend will always[*] come from the nginx IP. [*] there is a configurable exception; but if you don't know that you are using it, you are not using it. It needs extra configuration outside of nginx. > Here is my Nginx config sample > > set_real_ip_from 192.168.2.0/24; > real_ip_header X-Forwarded-For; > real_ip_recursive on; > > upstream srv1 { server 192.168.2.12:80; } > > server { > listen 80; > server_name dev.somedomain.com; > > location / { > proxy_pass http://srv1; > } > } > So if I go to somedomain.com backend receiving real IP, no problems here. > But for dev.somedomain.com backend receiving proxy IP! Can you give one specific example of what you mean by this? When I use a config like this, I see no relevant difference in what the backend gets between requests to somedomain.com and dev.somedomain.com -- the connection comes from the nginx address, and includes an X-Forwarded-For header only if the original request included one. > And this is only shortened example, same situation with different domains > and subdomains... I don't see any problem when using the shortened example. Can you describe more exactly what you see that is not what you want to see? 
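A minimal config sketch of the distinction described above (addresses reused from the thread; the log_format name is illustrative, and $realip_remote_addr assumes nginx 1.9.7 or later): the realip module rewrites the client address that nginx itself sees, while the connection to the backend still originates from the nginx host.

```nginx
# Sketch only: realip rewrites $remote_addr (what logging, allow/deny
# and other internal-to-nginx things see); it does not change the
# source address of the TCP connection nginx opens to the backend.
set_real_ip_from  192.168.2.0/24;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

# $realip_remote_addr (nginx 1.9.7+) still holds the address the
# client connection actually came from, for comparison in the log.
log_format realip '$remote_addr (connection from $realip_remote_addr)';
```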
Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Aug 28 08:13:23 2016 From: nginx-forum at forum.nginx.org (romkaltu) Date: Sun, 28 Aug 2016 04:13:23 -0400 Subject: Nginx real_ip module doesn't work in some conditions In-Reply-To: <20160827230502.GV12280@daoine.org> References: <20160827230502.GV12280@daoine.org> Message-ID: <3916daf5c1ef1cb2520b40665e86b229.NginxMailingListEnglish@forum.nginx.org> OK, so it seems it was my mistake; everything is fine with this configuration. I needed an extra step in the backend to complete my task: specifically, I needed to set the "Use Client IP in Header" option in the backend server (LiteSpeed web server). I must have accidentally disabled it. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269232,269241#msg-269241 From francis at daoine.org Sun Aug 28 08:50:43 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 28 Aug 2016 09:50:43 +0100 Subject: Nginx real_ip module doesn't work in some conditions In-Reply-To: <3916daf5c1ef1cb2520b40665e86b229.NginxMailingListEnglish@forum.nginx.org> References: <20160827230502.GV12280@daoine.org> <3916daf5c1ef1cb2520b40665e86b229.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160828085043.GW12280@daoine.org> On Sun, Aug 28, 2016 at 04:13:23AM -0400, romkaltu wrote: Hi there, > OK, so it seems it was my mistake; everything is fine with this configuration. I Good that you have it working. > needed an extra step in the backend to complete my task: specifically, I > needed to set the "Use Client IP in Header" option in the backend server > (LiteSpeed web server). I must have accidentally disabled it. Yes - if what you want is "the back-end is able to identify the IP address of the client connecting to nginx", you don't need real_ip at all; you just tell nginx to set a header like X-Forwarded-For, and tell your backend to believe whatever nginx set in that header.
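What Francis describes can be sketched as follows; "srv1" is the upstream name used earlier in the thread, and the backend side (trusting the header) is configured outside nginx.

```nginx
# Sketch: pass the original client address to the backend in headers.
location / {
    proxy_pass http://srv1;
    # the address the client connected from
    proxy_set_header X-Real-IP       $remote_addr;
    # append the client address to any X-Forwarded-For already present
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The backend should only trust these headers on connections arriving from the proxy's own address, since any client can send an X-Forwarded-For header of its own.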
Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Aug 28 12:22:08 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Sun, 28 Aug 2016 08:22:08 -0400 Subject: How to disable request pipelining on nginx upstream Message-ID: Hi, Does anyone know a way to disable HTTP request pipelining on a same upstream backend connection? Let's say we have the below upstream backend that is configured with keepalive and no connection close: upstream http_backend { server 127.0.0.1:8080; keepalive 10; } server { ... location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } According to this configuration: NGINX sets the maximum number of 10 idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. The question I have is: how can we disable NGIX to pipeline multiple HTTP requests on the same upstream keepalive connection? I would like to keep the upstream keepalive but just disable pipelining. Please let me know how we could do that. Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269248#msg-269248 From mdounin at mdounin.ru Sun Aug 28 13:51:56 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 28 Aug 2016 16:51:56 +0300 Subject: How to disable request pipelining on nginx upstream In-Reply-To: References: Message-ID: <20160828135156.GA1855@mdounin.ru> Hello! On Sun, Aug 28, 2016 at 08:22:08AM -0400, hkahlouche wrote: > Does anyone know a way to disable HTTP request pipelining on a same upstream > backend connection? Pipelining is not used by nginx, there is no need to disable anything. 
-- Maxim Dounin http://nginx.org/ From chufuyuan at live.cn Sun Aug 28 13:47:40 2016 From: chufuyuan at live.cn (=?UTF-8?B?6KSa5aSr5YWD?=) Date: Sun, 28 Aug 2016 21:47:40 +0800 Subject: multi server addresses appear in variable $upstream_addr which was not supposed to Message-ID: Hello! It is pretty strange that there were two upstream servers in $upstream_addr. It seems two servers received the request and both had a response. The client received the response that took less time, I guess. Am I right? What is stranger is that the name of the upstream server group appears in $upstream_addr. I don't know the consequence in such a situation. Any explanation? *More important is how to find the "spirit" behind this weird problem and fix it.* Thanks a lot! Nginx Version: nginx/1.6.2 OS: CentOS release 6.5 (Final) x86_64 Here is the access log below: client_ip 26/Aug/2016:15:45:06 +0800 GET host_ip 80 10.10.10.10:8080, backendservice /backend/api/v1.0/bowner/1123223 HTTP/1.1 502 551 - UA:Java/1.8.0_60 - 0.139 0.139, 0.000 client_ip 26/Aug/2016:15:45:12 +0800 GET host_ip 80 10.10.10.10:8080, backendservice /backend/api/v1.0/bowner/1123822 HTTP/1.1 502 551 - UA:Java/1.8.0_60 - 0.070 0.070, 0.000 client_ip 26/Aug/2016:15:45:17 +0800 POST host_ip 80 10.10.10.10:8080, 10.10.10.11:8080 /backend/api/v1.0/bowner/1133782 HTTP/1.1 200 129 - UA:Java/1.8.0_60 - 0.124 0.043, 0.081 * The backend API /backend/api/v1.0/bowner/{request_id} is supposed to be processed by one server. The server will return a 500 code if multiple backend servers receive the same request at (almost) the same time. Here is the upstream configuration: upstream backendservice { server 10.10.10.10:8080 weight=15; server 10.10.10.11:8080 weight=13; } -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ihsan at dogan.ch Sun Aug 28 16:59:45 2016 From: ihsan at dogan.ch (=?utf-8?B?xLBoc2FuIERvxJ9hbg==?=) Date: Sun, 28 Aug 2016 18:59:45 +0200 Subject: Location Alias not working In-Reply-To: <20160827205001.GT12280@daoine.org> References: <20160827205001.GT12280@daoine.org> Message-ID: <20160828165945.GA56493@dogan.ch> Hi, On Saturday, 27 Aug 2016 21:50 +0100, Francis Daly wrote: > > I've defined a location alias in my nginx.conf: > > > location /foo/ { > > alias /var/www/foo/; > > } > > > Even the directory /var/www/foo exists, Nginx is returns a 404. As I > > understand, the configuration is right, but I can't see what's wrong. > As requested, I've attached the full configuration file. > What specific test request do you make? GET /foo/ with curl: $ curl https://foo.bar.com/foo/ > What file on the filesystem do you want nginx to return? UFS (FreeBSD). > What does your error.log say about the 404 response? 2016/08/28 18:53:32 [error] 22231#0: *11704 open() "/usr/local/www/foo.bar.com404" failed (2: No such file or directory), client: 2a02:168:9800::50, server: foo.bar.com, request: "GET /foo/ HTTP/1.1", host: "foo.bar.com" > (Is there another location{} block that handles your request, instead > of the one you showed here?) Not that I'm aware of. 
-Ihsan -- ihsan at dogan.ch http://blog.dogan.ch/ -------------- next part -------------- worker_processes 2; error_log /var/log/nginx/error.log; events { worker_connections 1024; use kqueue; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; keepalive_timeout 100; server_tokens off; charset UTF-8; gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript application/xml application/xml+rss text/javascript; gzip_buffers 4 32k; gzip_disable "MSIE [1-6].(?!.*SV1)"; client_max_body_size 10G; large_client_header_buffers 4 32k; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don?t use SSLv3 ref: POODLE ssl_ciphers ECDH+CHACHA20:ECDH+AESGCM:EDH+AESGCM:ECDH+AES256:ECDH+AES128:EDH+AES:!ADH:!AECDH:!MD5:!DSS; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_dhparam /usr/local/etc/nginx/dhparam.pem; ssl_ecdh_curve secp384r1; ssl_stapling on; ssl_stapling_verify off; log_format ssl_custom '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '$gzip_ratio ' '$ssl_protocol:$ssl_cipher'; server { listen 80; listen [::]:80; server_name foo.bar.com; return 301 https://$server_name$request_uri; # enforce https } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name foo.bar.com; access_log /var/log/nginx/foo.bar.com/access.log ssl_custom; ssl_certificate server.crt; ssl_certificate_key server.key; root /usr/local/www/foo/bar; fastcgi_buffers 64 4K; rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect; rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect; rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect; index index.php index.html; location = /robots.txt { allow all; log_not_found off; } location ~ /owncloud/(?:\.htaccess|data|config|db_structure\.xml|README) { deny all; } location ~ 
^/usr/local/www/foo.bar.com/(owncloud/)?data { internal; root /; } location ~ ^/var/tmp/oc-noclean/.+$ { internal; root /; } location / { # The following 2 rules are only needed with webfinger rewrite ^/.well-known/host-meta/public.php?service=host-meta last; rewrite ^/.well-known/host-meta.json/public.php?service=host-meta-json last; rewrite ^/.well-known/carddav /remote.php/carddav/ redirect; rewrite ^/.well-known/caldav /remote.php/caldav/ redirect; rewrite ^/apps/calendar/caldav.php /remote.php/caldav/ last; rewrite ^/apps/contacts/carddav.php /remote.php/carddav/ last; rewrite ^/apps/([^/]*)/(.*\.(css|php))$/index.php?app=$1&getfile=$2 last; rewrite ^(/core/doc/[^\/]+/)$ $1/index.html; try_files $uri $uri/ index.php; } location ~ /pydio/conf/ { deny all; } location ~ /pydio/data/ { deny all; } location ~ ^(.+?\.php)(/.*)?$ { try_files $1 = 404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$1; fastcgi_param PATH_INFO $2; fastcgi_param HTTPS on; fastcgi_param MOD_X_ACCEL_REDIRECT_ENABLED on; fastcgi_read_timeout 3600; fastcgi_pass unix:/tmp/php-fpm.sock; } # Optional: set long EXPIRES header on static assets location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ { expires 30d; } location /mozsync/ { rewrite ^/mozsync(.+)$ $1 break; proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 10; proxy_read_timeout 10; proxy_pass http://unix:/tmp/mozsync.sock; } location /chive/ { try_files $uri chive/$uri/ /chive/index.php?q=$uri&$args; } location /foo/ { alias /var/www/foo/; } } } From chris at cretaforce.gr Sun Aug 28 17:00:59 2016 From: chris at cretaforce.gr (Christos Chatzaras) Date: Sun, 28 Aug 2016 20:00:59 +0300 Subject: disable .php files uploads using php (php-fpm) Message-ID: <48351698-3BB4-463D-998E-F4C757F0FA73@cretaforce.gr> Is any way to get the body of a php post upload to match using regex the filename of a 
php upload? I want to block file uploads with .php extension. I found that I can do it with nasxi but I want to see if I can avoid it. From me at myconan.net Sun Aug 28 17:49:13 2016 From: me at myconan.net (Edho Arief) Date: Mon, 29 Aug 2016 02:49:13 +0900 Subject: Location Alias not working In-Reply-To: <20160828165945.GA56493@dogan.ch> References: <20160827205001.GT12280@daoine.org> <20160828165945.GA56493@dogan.ch> Message-ID: <1472406553.1475210.708472057.0804B11C@webmail.messagingengine.com> Hi, On Mon, Aug 29, 2016, at 01:59, ?hsan Do?an wrote: > Hi, > > On Saturday, 27 Aug 2016 21:50 +0100, Francis Daly wrote: > > > > I've defined a location alias in my nginx.conf: > > > > > location /foo/ { > > > alias /var/www/foo/; > > > } > > > > > Even the directory /var/www/foo exists, Nginx is returns a 404. As I > > > understand, the configuration is right, but I can't see what's wrong. > > > > As requested, I've attached the full configuration file. > > > What specific test request do you make? > > GET /foo/ with curl: > $ curl https://foo.bar.com/foo/ > It tries getting the index, which then handled by your php block which doesn't have the alias set. Also you have an extra space somewhere in your config. From reallfqq-nginx at yahoo.fr Sun Aug 28 17:53:13 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 28 Aug 2016 19:53:13 +0200 Subject: multi server addresses appear in variable $upstream_addr which was not supposed to In-Reply-To: References: Message-ID: The docs say multiple servers might be contacted: $upstream_addr Upstream groups are used to tolerate errors on some responses (and replay the request somewhere else): upstream --- *B. R.* On Sun, Aug 28, 2016 at 3:47 PM, ??? wrote: > Hello! > > It is pretty strange that there was two upstream servers in > $upstream_addr. It seems two servers received request and both had a > respose. Client received the response which took less duration time, I > guess. Am I right ? 
> > What is stranger is the name of upstream server group appears in > $upstream_addr. I don't know the consequence in such situation. Any explain > ? > > *More important is how to find the "spirit" behind this weird problem > and fix it.* > > Thanks a lot ! ! > > Nginx Version: nginx/1.6.2 > OS: CentOS release 6.5 (Final) x86_64 > > Here is access log below: > > client_ip 26/Aug/2016:15:45:06 +0800 GET host_ip 80 > *10.10.10.10:8080 , backendservice * > /backend/api/v1.0/bowner/1123223 HTTP/1.1 502 551 - > UA:Java/1.8.0_60 - 0.139 0.139, 0.000 > > client_ip 26/Aug/2016:15:45:12 +0800 GET host_ip 80 > *10.10.10.10:8080 , backendservice* > /backend/api/v1.0/bowner/1123822 HTTP/1.1 502 551 - > UA:Java/1.8.0_60 - 0.070 0.070, 0.000 > > client_ip 26/Aug/2016:15:45:17 +0800 POST host_ip 80 > *10.10.10.10:8080 , 10.10.10.11:8080 > * /backend/api/v1.0/bowner/1133782 > HTTP/1.1 200 129 - UA:Java/1.8.0_60 - > 0.124 0.043, 0.081 > > > * Backend API /backend/api/v1.0/bowner/{request_id} is supposed to be > processed by one server. One server will be return 500 code if many backend > servers received same request at (almost) same time. > > Here is upstream configuration: > > > upstream *backendservice* { > server *10.10.10.10:8080 *weight=15; > server *10.10.10.11:8080 * weight=13; > } > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Aug 28 23:19:11 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 29 Aug 2016 00:19:11 +0100 Subject: Location Alias not working In-Reply-To: <20160828165945.GA56493@dogan.ch> References: <20160827205001.GT12280@daoine.org> <20160828165945.GA56493@dogan.ch> Message-ID: <20160828231911.GX12280@daoine.org> On Sun, Aug 28, 2016 at 06:59:45PM +0200, ?hsan Do?an wrote: > On Saturday, 27 Aug 2016 21:50 +0100, Francis Daly wrote: Hi there, Edho Arief is correct; what you see is due to the "index" directive and the subsequent "php" subrequest. > > What specific test request do you make? > > GET /foo/ with curl: > $ curl https://foo.bar.com/foo/ > > > What file on the filesystem do you want nginx to return? > > UFS (FreeBSD). My question was unclear. What is the output you want to get, instead of the 404? Do you want the content of the file /var/www/foo/index.html? Do you want the php-processed output of the file /var/www/foo/index.php? Do you want something else? > > What does your error.log say about the 404 response? > > 2016/08/28 18:53:32 [error] 22231#0: *11704 open() > "/usr/local/www/foo.bar.com404" failed (2: No such file or > directory), client: 2a02:168:9800::50, server: foo.bar.com, > request: "GET /foo/ HTTP/1.1", host: "foo.bar.com" That is curious output. I do not understand how your config can lead to that output. > server_name foo.bar.com; > root /usr/local/www/foo/bar; > index index.php index.html; > location / { > location ~ ^(.+?\.php)(/.*)?$ { > try_files $1 = 404; > location /foo/ { > alias /var/www/foo/; > } The request for /foo/ is handled in the last location there. It's a directory, so the "index" directive is checked, and the file "index.php" exists, so there is an internal rewrite to /foo/index.php (If the file index.php did not exist, nginx would check for the existence of index.html, and then issue an internal rewrite to /foo/index.html. 
/foo/index.html would be handled in the /foo/ location, and would try to send the file /var/www/foo/index.html) The request /foo/index.php is handled in the "~ php" location, which says (in this case) "try_files /foo/index.php = 404"; the root directory here is /usr/local/www/foo/bar /usr/local/www/foo/bar/foo/index.php does not exist; /usr/local/www/foo/bar= does not exist, so there is an internal rewrite to the request 404. The request 404 is not handled in any of your locations, so it uses the implicit catch-all and looks for the file /usr/local/www/foo/bar404. Which does not exist, and the error.log should show that. A few things can be changed here: * nginx does not act as you expect as regards the internal rewrite for a directory-request. I suspect that nginx is not going to change on that front. * your "try_files" must end in an argument "=404" if you want it to raise a 404 error. "= 404" is not the same; it *does* lead to a 404 in your case, but only after more processing and for a different reason. * top-level regex locations are usually unwise. One reason is that an (implicit) internal rewrite may be processed in a location you did not expect. To fix things, you will want to decide how each request should be handled; I suspect that the easiest will be if you start by moving the current top-level regex locations to within other locations where possible. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Aug 29 03:22:50 2016 From: nginx-forum at forum.nginx.org (serendipity30) Date: Sun, 28 Aug 2016 23:22:50 -0400 Subject: Request Compression In-Reply-To: <7fd6efff0272640cda5912457a8c1938.NginxMailingListEnglish@forum.nginx.org> References: <7fd6efff0272640cda5912457a8c1938.NginxMailingListEnglish@forum.nginx.org> Message-ID: Anyone has used this? Is gzip_static used for request compression? 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269174,269260#msg-269260 From max at clements.za.net Mon Aug 29 03:35:32 2016 From: max at clements.za.net (Max Clements) Date: Sun, 28 Aug 2016 20:35:32 -0700 Subject: Request Compression In-Reply-To: References: <7fd6efff0272640cda5912457a8c1938.NginxMailingListEnglish@forum.nginx.org> Message-ID: When you say "request compression" I guess you're referring to entity compression (for example, in POST requests). Nginx's ngx_http_gzip_module does NOT support request entity decompression - but you can do this in Lua. An example of how to do this is here: http://www.pataliebre.net/howto-make-nginx-decompress-a-gzipped-request.html --Max On Sun, Aug 28, 2016 at 8:22 PM, serendipity30 wrote: > Anyone has used this? Is gzip_static used for request compression? > > Thanks > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269174,269260#msg-269260 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Monday is an awful way to spend 1/7th of your life... From nginx-forum at forum.nginx.org Mon Aug 29 08:03:10 2016 From: nginx-forum at forum.nginx.org (NuLL3rr0r) Date: Mon, 29 Aug 2016 04:03:10 -0400 Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate? Message-ID: Hi there, I have a VPS with 14 domains and I set up letsencrypt to automatically retrieve a separate certificate for each domain with all sub-domains included. So, I have 14 certs. Obviously, putting all domains in one cert is not an option because soon I'll hit the maximum of 100 domains/sub-domains per cert for Letsencrypt. So, I was happy for a month until I found out that nginx serves the wrong certs for all domains except one (the one that it automatically picks up - or, I'll set - as the default server for port 443). After a lot of headache I found out that each SSL cert must have its own IP, not a shared one.
Then I also found out there is SNI as a workaround for this issue. $ nginx -V TLS SNI support enabled To make a long story short: the problem is that no matter what I do, nginx stubbornly serves the wrong cert: $ curl --insecure -v https://babaei.net 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }' * Server certificate: * subject: CN=babaei.net * start date: Aug 28 13:30:00 2016 GMT * expire date: Nov 26 13:30:00 2016 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. * Connection #0 to host babaei.net left intact $ curl --insecure -v https://learnmyway.net 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }' * Server certificate: * subject: CN=babaei.net * start date: Aug 28 13:30:00 2016 GMT * expire date: Nov 26 13:30:00 2016 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. * Connection #0 to host learnmyway.net left intact $ curl --insecure -v https://3rr0r.org 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }' * Server certificate: * subject: CN=babaei.net * start date: Aug 28 13:30:00 2016 GMT * expire date: Nov 26 13:30:00 2016 GMT * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 * SSL certificate verify ok. * Connection #0 to host 3rr0r.org left intact And, don't get me wrong, the actual certs are what they are supposed to be: $ openssl x509 -noout -subject -in /path/to/certs/babaei.net.pem subject= /CN=babaei.net $ openssl x509 -noout -subject -in /path/to/certs/learnmyway.net.pem subject= /CN=learnmyway.net $ openssl x509 -noout -subject -in /path/to/certs/3rr0r.org.pem subject= /CN=3rr0r.org So, let's say we have two domains, alpha.com and omega.com. How would you configure SNI-enabled nginx to serve the right SSL cert for each?
server { server_tokens off; listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.alpha.com; ssl on; ssl_certificate /path/to/alpha.com/cert.pem; ssl_certificate_key /path/to/alpha.com/key.pem; } server { server_tokens off; listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.omega.com; ssl on; ssl_certificate /path/to/omega.com/cert.pem; ssl_certificate_key /path/to/omega.com/key.pem; } Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269263,269263#msg-269263 From nginx-forum at forum.nginx.org Mon Aug 29 09:37:44 2016 From: nginx-forum at forum.nginx.org (henry_nginx_profile) Date: Mon, 29 Aug 2016 05:37:44 -0400 Subject: NGINX SSL configuration In-Reply-To: <20160825115405.GC24741@mdounin.ru> References: <20160825115405.GC24741@mdounin.ru> Message-ID: <9e6806e2333f1088afb1e44a2cf1024c.NginxMailingListEnglish@forum.nginx.org> thank you very much. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269177,269265#msg-269265 From mdounin at mdounin.ru Mon Aug 29 10:49:10 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Aug 2016 13:49:10 +0300 Subject: Nginx SNI and Letsencrypt on FreeBSD; Wrong certificate? In-Reply-To: References: Message-ID: <20160829104910.GD1855@mdounin.ru> Hello! On Mon, Aug 29, 2016 at 04:03:10AM -0400, NuLL3rr0r wrote: [...] > So make the long story short; The problem is no matter what I do nginx > stubbornly serve's the wrong cert: > > $ curl --insecure -v https://babaei.net 2>&1 | awk 'BEGIN { cert=0 } > /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }' > * Server certificate: > * subject: CN=babaei.net > * start date: Aug 28 13:30:00 2016 GMT > * expire date: Nov 26 13:30:00 2016 GMT > * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 > * SSL certificate verify ok. 
> * Connection #0 to host babaei.net left intact > > $ curl --insecure -v https://learnmyway.net 2>&1 | awk 'BEGIN { cert=0 } > /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }' > * Server certificate: > * subject: CN=babaei.net [...] > So, let's say we have two domains alpha.com and omega.com. How would you > configure SNI enabled nginx to serve the right SSL cert for each? > > server { > server_tokens off; > > listen 443 ssl http2; > listen [::]:443 ssl http2; > server_name www.alpha.com; Note that the name requested must be listed in the server_name directive. Names not listed are expected to be handled in the default server{} block, and probably this is what happens in your case as you request names without "www", but your configuration contains only names with "www" prefix. Additional reading: http://nginx.org/en/docs/http/server_names.html http://nginx.org/en/docs/http/configuring_https_servers.html > ssl on; > ssl_certificate /path/to/alpha.com/cert.pem; > ssl_certificate_key /path/to/alpha.com/key.pem; Just a side note: "ssl on" is not needed as long as you use "listen ... ssl". -- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Mon Aug 29 11:36:49 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 29 Aug 2016 13:36:49 +0200 Subject: Request Compression In-Reply-To: References: <7fd6efff0272640cda5912457a8c1938.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is no standard for request compression. HTTP 2 has header compression built in, but if you want to compress request bodies, you have to devise your own solution. On Mon, Aug 29, 2016 at 5:22 AM, serendipity30 wrote: > Anyone has used this? Is gzip_static used for request compression? > > Thanks > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,269174,269260#msg-269260 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Aug 29 14:04:08 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Mon, 29 Aug 2016 10:04:08 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <20160828135156.GA1855@mdounin.ru> References: <20160828135156.GA1855@mdounin.ru> Message-ID: Thanks for your prompt response. Let's say a client is sending pipelined requests on the client side and nginx has multiple upstream keepalive connections. Are you saying that NGINX will NOT pipeline on the upstream side even though it is receiving pipelined requests on the client side? Is there a way to close an upstream keepalive connection after a threshold of requests is reached ("max requests"), the same as keepalive_requests on the client side? Is there a reason why this feature is not present? Thanks again! -- Hakim Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269270#msg-269270 From mdounin at mdounin.ru Mon Aug 29 14:32:17 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Aug 2016 17:32:17 +0300 Subject: How to disable request pipelining on nginx upstream In-Reply-To: References: <20160828135156.GA1855@mdounin.ru> Message-ID: <20160829143217.GF1855@mdounin.ru> Hello! On Mon, Aug 29, 2016 at 10:04:08AM -0400, hkahlouche wrote: > Thanks for your prompt response. > Let's a client is sending pipelined requests on the client side and nginx > has multiple upstream keepalive connections. > Are you saying that NGINX will NOT pipeline on upstream side even though it > is receiving pipelined requests on client side? Yes, nginx will process requests one-by-one and won't pipeline requests to upstream.
> Is there a way to close an upstream keepalive after a threshold of requests > is reached (*max requests") same as keepalive_requests on client side? Is > there a reason why this feature is not present? No, it's not something currently implemented. It's not considered needed as upstream servers can be easily configured to do this instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Aug 29 14:41:49 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Mon, 29 Aug 2016 10:41:49 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <20160829143217.GF1855@mdounin.ru> References: <20160829143217.GF1855@mdounin.ru> Message-ID: <311a0a84c599c0df8b18c9a515a0c229.NginxMailingListEnglish@forum.nginx.org> Hi, I have a question the other way around: how can I enable pipelining on the upstream side, or at least make nginx open multiple loopback connections to serve requests pipelined from the client side? Thanks Prasad. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269272#msg-269272 From nginx-forum at forum.nginx.org Mon Aug 29 16:23:34 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Mon, 29 Aug 2016 12:23:34 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <20160829143217.GF1855@mdounin.ru> References: <20160829143217.GF1855@mdounin.ru> Message-ID: > Yes, nginx will process requests one-by-one and won't pipeline > requests to upstream. So, you confirm that the current implementation of nginx doesn't pipeline towards upstream, and there is no way to enable that functionality? > No, it's not something currently implemented. It's not considered > needed as upstream servers can be easily configured to do this > instead. Which configuration parameter of the upstream server can be used for that? According to the documentation, we have max_fails. Is that what you were referring to?
max_conns is for connections (not for the requests) and it is present only on nginx plus. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269273#msg-269273 From mdounin at mdounin.ru Mon Aug 29 18:33:01 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Aug 2016 21:33:01 +0300 Subject: How to disable request pipelining on nginx upstream In-Reply-To: References: <20160829143217.GF1855@mdounin.ru> Message-ID: <20160829183301.GG1855@mdounin.ru> Hello! On Mon, Aug 29, 2016 at 12:23:34PM -0400, hkahlouche wrote: > > Yes, nginx will process requests one-by-one and won't pipeline > > requests to upstream. > > So, you confirm that the current implementation of nginx doesn't pipeline > towards upstream, and there is no way to enable that functionality? Yes. > > No, it's not something currently implemented. It's not considered > > needed as upstream servers can be easily configured to do this > > instead. > > Which configuration parameter of upstream server can be used for that? > According to the documentation, we have max_fails. Is that what you were > referring to? max_conns is for connections (not for the requests) and it is > present only on nginx plus. I'm talking about upstream server, not the "server" directive in the "upstream" block. Assuming you are using nginx as an upstream server you should use keepalive_requests. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Aug 29 19:52:57 2016 From: nginx-forum at forum.nginx.org (hkahlouche) Date: Mon, 29 Aug 2016 15:52:57 -0400 Subject: How to disable request pipelining on nginx upstream In-Reply-To: <20160829183301.GG1855@mdounin.ru> References: <20160829183301.GG1855@mdounin.ru> Message-ID: <1a7a0b0bba45d2c36999bd50c31b82d6.NginxMailingListEnglish@forum.nginx.org> Hello, > I'm talking about upstream server, not the "server" directive in > the "upstream" block. Assuming you are using nginx as an upstream > server you should use keepalive_requests. 
We are not using nginx on the upstream side (we have some legacy server); this is why I was looking for keepalive_requests on the upstream side, or something to better control the upstream keepalive connections (for instance when they start failing, or to just close them after a certain threshold of requests is reached). Something like:

upstream backend {
    server 127.0.0.1:8080;
    keepalive 10;
    keepalive_requests 10;  ## max requests per connection
}

It could be a good thing to add to the nginx upstream module. It would allow controlling when to close a keepalive upstream connection and set up a new one. Thanks! -- Hakim Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269248,269276#msg-269276 From nginx-forum at forum.nginx.org Mon Aug 29 23:54:11 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 29 Aug 2016 19:54:11 -0400 Subject: disable .php files uploads using php (php-fpm) In-Reply-To: <48351698-3BB4-463D-998E-F4C757F0FA73@cretaforce.gr> References: <48351698-3BB4-463D-998E-F4C757F0FA73@cretaforce.gr> Message-ID: <5281022baec07bc8df212ddeb941ad85.NginxMailingListEnglish@forum.nginx.org> Christos Chatzaras Wrote: ------------------------------------------------------- > Is any way to get the body of a php post upload to match using regex > the filename of a php upload? I want to block file uploads with .php > extension. I found that I can do it with nasxi but I want to see if I > can avoid it. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx To disable cookies I do this:

fastcgi_param HTTP_COOKIE "";

PHP accepts the following server vars: http://php.net/manual/en/reserved.variables.php It says file uploads are files, so it would be this:

fastcgi_param HTTP_FILES "";

But if that does not work you may need to do:
fastcgi_param HTTP_POST ""; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269253,269280#msg-269280 From r1ch+nginx at teamliquid.net Tue Aug 30 11:04:10 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 30 Aug 2016 13:04:10 +0200 Subject: disable .php files uploads using php (php-fpm) In-Reply-To: <5281022baec07bc8df212ddeb941ad85.NginxMailingListEnglish@forum.nginx.org> References: <48351698-3BB4-463D-998E-F4C757F0FA73@cretaforce.gr> <5281022baec07bc8df212ddeb941ad85.NginxMailingListEnglish@forum.nginx.org> Message-ID: File uploads are passed in the request body, not the headers so you cannot disable or otherwise affect them by setting HTTP_X variables. This is a job for your backend as nginx does not really interact with post body contents. On Tue, Aug 30, 2016 at 1:54 AM, c0nw0nk wrote: > Christos Chatzaras Wrote: > ------------------------------------------------------- > > Is any way to get the body of a php post upload to match using regex > > the filename of a php upload? I want to block file uploads with .php > > extension. I found that I can do it with nasxi but I want to see if I > > can avoid it. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > To disable cookies i do this. > > fastcgi_param HTTP_COOKIE ""; > > PHP accepts the following server vars. > > http://php.net/manual/en/reserved.variables.php > > It says file uploads are files so it would be this > > fastcgi_param HTTP_FILES ""; > > But if that does not work you may need to do. > > fastcgi_param HTTP_POST ""; > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269253,269280#msg-269280 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Aug 30 18:25:28 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 30 Aug 2016 14:25:28 -0400 Subject: Nginx multiple upstream map conditions Message-ID: So this is a fun one. As a lot of people probably already know, you can't use "if" on upstream values, since if conditions are evaluated before any "$upstream_" values exist. But with a map directive it might just be possible to combine 2 upstream maps together and have an output based on the conditions matched. Here is what I am trying to achieve. If the upstream returns a logged_in value of 1, then we forward and pass the Set-Cookie header to the client with "add_header". Otherwise the "add_header Set-Cookie" should be empty.

##logged in value 0 = guest
##logged in value 1 = has an account
map $upstream_cookie_logged_in $upstream_logged_in_value {
    default $upstream_cookie_logged_in;
}

map $upstream_http_set_cookie $upstream_md5_value {
    default $upstream_http_set_cookie;
}

add_header Set-Cookie $upstream_md5_value; ##Only want to send the MD5 cookie when the logged in value is 1

fastcgi_hide_header Set-Cookie; ##Removes the Set-Cookie header, to be added back in only when conditions are met ("add_header")

Thanks in advance to anyone who can help me out :D Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269296#msg-269296 From nginx-forum at forum.nginx.org Wed Aug 31 05:21:13 2016 From: nginx-forum at forum.nginx.org (jebina) Date: Wed, 31 Aug 2016 01:21:13 -0400 Subject: NGINX to support websocket client on the same port Message-ID: A quick question: does nginx support acting as a websocket client? I have a webserver that uses NGINX, and I use a websocket server for which NGINX acts as a proxy. On the same port, can I use a websocket client to initiate a connection with the external websocket server?
The config is here:

worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream websocket {
        server 192.168.5.16:8080;
    }

    server {
        listen 9080;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }

        location /socket {
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }

        location /test {
            hello_world;
        }
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269299,269299#msg-269299 From nginx-forum at forum.nginx.org Wed Aug 31 08:17:53 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Wed, 31 Aug 2016 04:17:53 -0400 Subject: Log the headers sent by nginx to upstream server Message-ID: <223c39bdca26513e4a484fe253b66f38.NginxMailingListEnglish@forum.nginx.org> My client sends a request to nginx with an HTTP header set. This header is overwritten by nginx before passing along the request to the downstream server. I want to log the header after modification by nginx. I understand that we can log the header sent by the client using "$http_". Is there some embedded variable or some other way I can use which will log the header sent to the downstream server?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269300,269300#msg-269300 From nginx-forum at forum.nginx.org Wed Aug 31 08:30:51 2016 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 31 Aug 2016 04:30:51 -0400 Subject: log status actually not real status Message-ID: <72cc96a2d32ffc3fcfc9acd13e6486ac.NginxMailingListEnglish@forum.nginx.org> Nginx logs the status as 200 after the response header has been sent, even when the upstream prematurely closed the connection. I think nginx should log the status as 502, even though the client received 200:

static u_char *
ngx_http_log_status(ngx_http_request_t *r, u_char *buf, ngx_http_log_op_t *op)
{
    ngx_uint_t status;

    if (r->err_status) {
        status = r->err_status;

    } else if (r->headers_out.status) {
        status = r->headers_out.status;

    } else if (r->http_version == NGX_HTTP_VERSION_9) {
        status = 9;

    } else {
        status = 0;
    }

    return ngx_sprintf(buf, "%03ui", status);
}

r->err_status should be set in this situation. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269301,269301#msg-269301 From maxozerov at i-free.com Wed Aug 31 10:50:45 2016 From: maxozerov at i-free.com (Maxim Ozerov) Date: Wed, 31 Aug 2016 10:50:45 +0000 Subject: Log the headers sent by nginx to upstream server In-Reply-To: <223c39bdca26513e4a484fe253b66f38.NginxMailingListEnglish@forum.nginx.org> References: <223c39bdca26513e4a484fe253b66f38.NginxMailingListEnglish@forum.nginx.org> Message-ID: "Log format can contain common variables, and variables that exist only at the time of a log write." So I do not see other options except the ngx_http_log_module. But the interesting thing is: what does "overwritten" mean here? Nginx makes some adjustments to the request headers it receives from the client:

- Nginx gets rid of any empty headers.
- By default, nginx will consider any header that contains underscores as invalid.
- The "Host" header is rewritten to the value defined by the $proxy_host variable (e.g.: proxy_set_header Host $host;).
- The "Connection" header is changed to "close".
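One common way to handle Sushma's question (logging the header value nginx actually sends upstream) is to compute the outgoing value in a variable and reuse that variable in both proxy_set_header and a custom log_format. A minimal sketch, assuming nginx itself performs the rewrite; the "X-Custom" header, fallback value, and backend address are hypothetical, not from the thread:

```nginx
# Sketch only: the same variable feeds both the proxied header and the log,
# so the log always shows exactly what was sent upstream.
map $http_x_custom $sent_x_custom {
    default $http_x_custom;   # pass the client's value through unchanged
    ""      "fallback-value"; # value substituted when the client sent none
}

log_format to_upstream '$remote_addr [$time_local] "$request" '
                       'x-custom-sent="$sent_x_custom"';

upstream backend {
    server 127.0.0.1:8080;
}

server {
    listen 8081;
    access_log logs/to_upstream.log to_upstream;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Custom $sent_x_custom;
    }
}
```

This only covers headers that nginx rewrites through configuration; headers modified internally (Host, Connection) are not exposed this way.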
-----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Sushma Sent: Wednesday, August 31, 2016 11:18 AM To: nginx at nginx.org Subject: Log the headers sent by nginx to upstream server My client sends a request to nginx with a http header set. This header is overwritten by nginx before passing along the reqest to the downstream server. I want to log the header after modification by nginx. I understand that we can log the header sent by client using "$http_". Is there some embedded variable or some other way I can use which will log the header sent to the downstream server? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269300,269300#msg-269300 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From vbart at nginx.com Wed Aug 31 10:55:28 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 31 Aug 2016 13:55:28 +0300 Subject: NGINX to support websocket client on the same port In-Reply-To: References: Message-ID: <33472529.JOdbpPg5Uv@vbart-workstation> On Wednesday 31 August 2016 01:21:13 jebina wrote: > A quick question, Does Nginx support websocket client. > > I have a webserver that uses NGINX and i use a websocket server for which > NGINX acts as proxy. In the same port , can i use websocket client to > initiate a connection with the external websocket server? > [..] Yes, you can. http://nginx.org/en/docs/http/websocket.html wbr, Valentin V. Bartenev From vbart at nginx.com Wed Aug 31 10:57:51 2016 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Wed, 31 Aug 2016 13:57:51 +0300 Subject: log status actually not real status In-Reply-To: <72cc96a2d32ffc3fcfc9acd13e6486ac.NginxMailingListEnglish@forum.nginx.org> References: <72cc96a2d32ffc3fcfc9acd13e6486ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3218790.TznrGsV0uE@vbart-workstation> On Wednesday 31 August 2016 04:30:51 crasyangel wrote: > Nginx would log status to 200 after response header had sent when upstream > prematurely closed connection > I think nginx should log status to 502, even though client recv 200 [..] Why do you think so? wbr, Valentin V. Bartenev From vbart at nginx.com Wed Aug 31 11:04:09 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 31 Aug 2016 14:04:09 +0300 Subject: Nginx multiple upstream map conditions In-Reply-To: References: Message-ID: <1590315.uaCX5CsKqb@vbart-workstation> On Tuesday 30 August 2016 14:25:28 c0nw0nk wrote: > So this is a fun one. > > As allot of people probably already know you can't use "IF" on upstream > values since if conditions are executed before any "$upstream_" conditions. > > But with a map directive it might just be possible to combine 2 upstream > maps together and have a output based on the conditions matched. > > Here is what I am trying to achieve. > > If the upstream returns with a logged_in value of 1 then we forward and pass > the Set-Cookie header to the client "add_header". > Else the "add_header Set-Cookie" should be empty. > > ##logged in value 0 = guest > ##logged in value 1 = has an account > map $upstream_cookie_logged_in $upstream_logged_in_value { > default $upstream_cookie_logged_in; > } > > map $upstream_http_set_cookie $upstream_md5_value { > default $upstream_http_set_cookie; > } > > > add_header Set-Cookie $upstream_md5_value; ##Only want to send the MD5 > cookie when the logged in value is 1 > > > fastcgi_hide_header Set-Cookie; ##Removed the set-cookie and to be added in > only when conditions are met. 
"add_header" > > > Thanks in advance to anyone who can help me out :D > [..] map $upstream_cookie_logged_in $upstream_md5_value { 1 $upstream_http_set_cookie; } add_header Set-Cookie $upstream_md5_value; wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Wed Aug 31 17:30:30 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 31 Aug 2016 13:30:30 -0400 Subject: Nginx multiple upstream map conditions In-Reply-To: <1590315.uaCX5CsKqb@vbart-workstation> References: <1590315.uaCX5CsKqb@vbart-workstation> Message-ID: <1b0e17bed38d73f48a01ab645da09303.NginxMailingListEnglish@forum.nginx.org> Thanks works a treat is it possible or allowed to do the following in a nginx upstream map ? and if so how i can't figure it out. I cache with the following key. fastcgi_cache_key "$session_id_value$scheme$host$request_uri$request_method"; if the upstream_cookie_logged_in value is not equal to 1 how can I set $session_id_value ''; make empty Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269319#msg-269319 From nginx-forum at forum.nginx.org Wed Aug 31 17:42:39 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 31 Aug 2016 13:42:39 -0400 Subject: Nginx multiple upstream map conditions In-Reply-To: <1590315.uaCX5CsKqb@vbart-workstation> References: <1590315.uaCX5CsKqb@vbart-workstation> Message-ID: <5fd9dc39dcf08b781f78f7391fea6295.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Tuesday 30 August 2016 14:25:28 c0nw0nk wrote: > > So this is a fun one. > > > > As allot of people probably already know you can't use "IF" on > upstream > > values since if conditions are executed before any "$upstream_" > conditions. > > > > But with a map directive it might just be possible to combine 2 > upstream > > maps together and have a output based on the conditions matched. > > > > Here is what I am trying to achieve. 
> > > > If the upstream returns with a logged_in value of 1 then we forward > and pass > > the Set-Cookie header to the client "add_header". > > Else the "add_header Set-Cookie" should be empty. > > > > ##logged in value 0 = guest > > ##logged in value 1 = has an account > > map $upstream_cookie_logged_in $upstream_logged_in_value { > > default $upstream_cookie_logged_in; > > } > > > > map $upstream_http_set_cookie $upstream_md5_value { > > default $upstream_http_set_cookie; > > } > > > > > > add_header Set-Cookie $upstream_md5_value; ##Only want to send the > MD5 > > cookie when the logged in value is 1 > > > > > > fastcgi_hide_header Set-Cookie; ##Removed the set-cookie and to be > added in > > only when conditions are met. "add_header" > > > > > > Thanks in advance to anyone who can help me out :D > > > [..] > > map $upstream_cookie_logged_in $upstream_md5_value { > 1 $upstream_http_set_cookie; > } > > add_header Set-Cookie $upstream_md5_value; > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks, works a treat. Is it possible or allowed to do the following in an nginx upstream map, and if so, how? I can't figure it out. I cache with the following key:

fastcgi_cache_key "$session_id_value$scheme$host$request_uri$request_method";

If the upstream_cookie_logged_in value is not equal to 1, how can I set $session_id_value to '' (make it empty)? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269321#msg-269321 From gavidmogarrr at yahoo.com Wed Aug 31 19:09:58 2016 From: gavidmogarrr at yahoo.com (gavidmogarrr) Date: Wed, 31 Aug 2016 22:09:58 +0300 Subject: looks nice Message-ID: <0000564065d9$0e86761f$49197819$@yahoo.com> Hey friend, I've recently came accross that amazing stuff, it looks nice I think, ytake a look All best, gavidmogarrr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Wed Aug 31 21:57:22 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 31 Aug 2016 22:57:22 +0100 Subject: Nginx multiple upstream map conditions In-Reply-To: <1b0e17bed38d73f48a01ab645da09303.NginxMailingListEnglish@forum.nginx.org> References: <1590315.uaCX5CsKqb@vbart-workstation> <1b0e17bed38d73f48a01ab645da09303.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160831215722.GZ12280@daoine.org> On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote: Hi there, > Thanks works a treat is it possible or allowed to do the following in a > nginx upstream map ? and if so how i can't figure it out. I think it is logically impossible. > I cache with the following key. > fastcgi_cache_key > "$session_id_value$scheme$host$request_uri$request_method"; fastcgi_cache_key is the thing that nginx calculates from the request, before it decides whether to send the response from cache, or whether to pass the request to upstream. > if the upstream_cookie_logged_in value is not equal to 1 how can I set > $session_id_value ''; make empty $upstream_cookie_something is part of the response from upstream, so is not available to nginx at the time that it is calculating fastcgi_cache_key for the "read from cache or not" decision. Am I missing something? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Aug 31 22:44:50 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 31 Aug 2016 18:44:50 -0400 Subject: Nginx multiple upstream map conditions In-Reply-To: <20160831215722.GZ12280@daoine.org> References: <20160831215722.GZ12280@daoine.org> Message-ID: <280fd0b1018a1e7f7c2fa77767e84c52.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote: > > Hi there, > > > Thanks works a treat is it possible or allowed to do the following > in a > > nginx upstream map ? 
and if so how i can't figure it out. > > I think it is logically impossible. > > > I cache with the following key. > > fastcgi_cache_key > > "$session_id_value$scheme$host$request_uri$request_method"; > > fastcgi_cache_key is the thing that nginx calculates from the request, > before it decides whether to send the response from cache, or whether > to pass the request to upstream. > > > if the upstream_cookie_logged_in value is not equal to 1 how can I > set > > $session_id_value ''; make empty > > $upstream_cookie_something is part of the response from upstream, > so is not available to nginx at the time that it is calculating > fastcgi_cache_key for the "read from cache or not" decision. > > Am I missing something? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks :) So changes to that value will have no effect. What about the following scenario? I remove all Set-Cookie headers:

fastcgi_hide_header Set-Cookie;

Then add them back in with:

add_header Set-Cookie "$upstream_http_set_cookie";

Will requests that get a cache hit ever contain a Set-Cookie header, or is it only the ones that reach the origin php server? From my tests it appears to be working: no Set-Cookie headers are present on "X-Cache-Status: HIT" responses. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269296,269328#msg-269328
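Pulling the thread together, here is a minimal sketch of the pattern Valentin and Francis describe: strip the upstream's Set-Cookie header so cached copies never carry it, then re-add it only when the upstream's logged_in cookie equals 1. The cache path, zone name, port, and cookie names are assumptions for illustration, based on the examples in the thread:

```nginx
# Assumed http-level context; "phpcache" and the paths are hypothetical.
fastcgi_cache_path /var/cache/nginx keys_zone=phpcache:10m;

# Only echo Set-Cookie back when the upstream marked the user as logged in;
# for any other value the variable is empty and add_header emits nothing.
map $upstream_cookie_logged_in $upstream_md5_value {
    1 $upstream_http_set_cookie;
}

server {
    listen 8080;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;

        fastcgi_cache phpcache;
        # Only request-time variables can appear in the key; $upstream_*
        # values do not exist yet when the cache lookup happens.
        fastcgi_cache_key "$scheme$host$request_uri$request_method";
        fastcgi_cache_valid 200 10m;

        # Strip Set-Cookie so cached responses never carry it...
        fastcgi_hide_header Set-Cookie;
        # ...and re-add it only on responses fetched from the upstream.
        add_header Set-Cookie $upstream_md5_value;
    }
}
```

On a cache HIT the stored response has no Set-Cookie and $upstream_md5_value is empty, which matches the behavior observed above.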