From mdounin at mdounin.ru Wed May 1 00:12:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 May 2013 04:12:17 +0400 Subject: limit_req and IP white listing on 0.8.55 In-Reply-To: <2a17a81a7c814883063cc8a7aab4cdf7.NginxMailingListEnglish@forum.nginx.org> References: <2a17a81a7c814883063cc8a7aab4cdf7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130501001217.GY10443@mdounin.ru> Hello! On Tue, Apr 30, 2013 at 07:25:22PM -0400, nauger wrote: > Hello! > > I've followed this reference: > > http://forum.nginx.org/read.php?2,228956,228961#msg-228961 > > To produce the following config: > http { > geo $public_vs_our_networks { > default 1; > 127.0.0.1/32 0; > ... my networks ... > } > map $public_vs_our_networks $limit_public { > 1 $binary_remote_addr; > 0 ""; > } > limit_req_zone $limit_public zone=public_facing_network:10m > rate=40r/m; > ... > server { > ... > location / { > ... > limit_req zone=public_facing_network burst=5 > nodelay; > ... > proxy_pass http://my_upstream; > } > } > } > > Unfortunately-- my error logs quickly filled up with clients who were > incorrectly rate limited. It was as if this configuration created 1 bucket > for ALL the public facing clients, as opposed to individually bucketing each > public client by their $binary_remote_addr. Please advise on what I might > be missing. Variables can be used as a result of a map only in nginx 0.9.0+, see http://nginx.org/r/map. You have to upgrade for the above to work. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed May 1 04:04:12 2013 From: nginx-forum at nginx.us (nauger) Date: Wed, 01 May 2013 00:04:12 -0400 Subject: limit_req and IP white listing on 0.8.55 In-Reply-To: <20130501001217.GY10443@mdounin.ru> References: <20130501001217.GY10443@mdounin.ru> Message-ID: Hi Maxim, Thank you-- that makes sense. Before upgrading, is it possible to implement this white list behavior using a different mechanism? Thanks again, -Nick Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238757,238760#msg-238760 From reallfqq-nginx at yahoo.fr Wed May 1 07:19:37 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 May 2013 03:19:37 -0400 Subject: Documentation of post_action Message-ID: As Maxim pointed out specifically in a not-that-old message, the 'post_action' directive is left undocumented on purpose, since it implies carefulness and knowledge to manipulate properly. Although my request might seem naive to certain people, wouldn't it be more profitable to document the directive properly and highlight details required to be known (either specifically or talking about the approach people should have in mind before using it) to allow its public use which can prove highly interesting? Transparency appeals me more than obscurity in order to make Nginx an even more amazing piece of work... and help widening the possibilities based on it. Some partial documentation appears on the Wiki anyway. What about an official support of that intention? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 1 09:14:46 2013 From: nginx-forum at nginx.us (Rancor) Date: Wed, 01 May 2013 05:14:46 -0400 Subject: Howto set geoip_country for IPv4 and IPv6 databases? 
In-Reply-To: <20130430211709.GM19561@lo0.su> References: <20130430211709.GM19561@lo0.su> Message-ID: Ruslan Ermilov Wrote: ------------------------------------------------------- > On Tue, Apr 30, 2013 at 09:03:20AM -0400, Rancor wrote: > > nginx detects the IPv6 support in libgeoip by trying to compile > the following code snippet: > > #include > #include > > int > main(void) > { > printf("%d\n", GEOIP_CITY_EDITION_REV0_V6); > return (0); > } > > Does it compile OK on your system? > Hi, thanks again for your reply. The code above doesn't compile on my system: test.c: In function ?main?: test.c:7: error: ?GEOIP_CITY_EDITION_REV0_V6? undeclared (first use in this function) test.c:7: error: (Each undeclared identifier is reported only once test.c:7: error: for each function it appears in.) Seems that the libgeoip1 in debian squeeze in version 1.4.7~beta6+dfsg-1 doesn't support this. Will test this again when wheezy is released at the next weekend. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235108,238763#msg-238763 From eswar7028 at gmail.com Wed May 1 09:34:17 2013 From: eswar7028 at gmail.com (ESWAR RAO) Date: Wed, 1 May 2013 15:04:17 +0530 Subject: Reg. POST data body Message-ID: Hi All, I am trying to collect the POST request body using $request_body but I am unable to collect it in config file: # curl -X POST -d "param1=value1¶m2=value2" ' http://localhost:8031/test1/test2/' --header "Content-Type:application/json" location /test1 { ........................... if ( $request_body ~ (.*?)(=)(.*?)(&)(.*?)(=)(.*) ) { echo $1; echo $2; echo $3; } } But when I tried to use the nginx-echo module, I was able to print it but I couldn't use it in if(). location /test1 { ........................... echo_read_request_body; echo $request_body; set $foo $echo_request_body; echo $foo; if ( $foo ~ (.*?)(=)(.*?)(&)(.*?)(=)(.*) ) { echo $1; echo $2; echo $3; } } #curl -X POST -d "param1=value1¶m2=value2" ' http://localhost:8031/test1/test2/' --header "Content-Type:application/json" param1=value1¶m2=value2 foo value is not getting echoed out. So the condition is not entering if(). Actually I intention is to load balance according to data in POST request. Thanks Eswar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ussray_00 at yahoo.com Wed May 1 14:26:00 2013 From: ussray_00 at yahoo.com (Russ Lavoy) Date: Wed, 1 May 2013 07:26:00 -0700 (PDT) Subject: HTTP Basic Auth question Message-ID: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> Hello, I am running nginx as a reverse proxy to a python application. ?I am wondering how I would be able to pass ONLY the user account and not the password. ?Can this be done? Thanks! From yaoweibin at gmail.com Wed May 1 15:19:39 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 1 May 2013 23:19:39 +0800 Subject: [ANNOUNCE] Tengine-1.4.5 is released Message-ID: Hi folks, We are glad to announce that Tengine-1.4.5 (development version) has been released. You can either checkout the source code from github: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-1.4.5.tar.gz In this release, we have added the consistent_hash module which dispatches requests to upstream servers based on consistent hashing algorithm (http://en.wikipedia.org/wiki/Consistent_hashing). It is better than the ip_hash module, by decreasing the possibility of hash key to be remapped when the number of servers changes. 
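A rough sketch of how such an upstream block might look (the backend addresses and the choice of $request_uri as the hash key are illustrative placeholders, not taken from the release notes):

    upstream backend {
        consistent_hash $request_uri;   # hash key: any variable may be specified here
        server 192.168.0.10:8080;
        server 192.168.0.11:8080;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }

Requests whose key hashes the same keep going to the same backend, and only a small share of keys are remapped when a server is added or removed.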
We also introduced the "keepalive_timeout" directive to set timeout for the upstream keepalive connections. It can reduce the sum of idle connections with backend servers. Two new configure script options '--enable-mods-shared=all' and '--enable-mods-static=all' were added. Now you can compile all the modules to be shared or static. The full changelog is as follows: *) Feature: added the consistent_hash module which dispatches requests to upstream servers based on consistent hashing algorithm of a variable specified. (dinic) *) Feature: added the "keepalive_timeout" directive to set timeout for upstream keepalive connections. (jinglong) *) Feature: now the configure script supports compilation of all modules to be shared or static. (monadbobo) *) Change: updated the Lua module to 0.7.19. (jinglong) *) Change: merged the changes of Nginx-1.2.8. (yaoweibin) *) Bugfix: fixed the compile warnings of syslog and upstream_check modules. (magicbear) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldernetwork at gmail.com Wed May 1 17:13:34 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 1 May 2013 10:13:34 -0700 Subject: Nginx 1.4 problem Message-ID: Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake aborted by server's ACK+RST packet, but netstat shows server is listening on that port. Any config has been changed since Nginx 1.2 to 1.4 in this regard? Thanks, - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed May 1 17:17:41 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 May 2013 13:17:41 -0400 Subject: HTTP Basic Auth question In-Reply-To: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> References: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> Message-ID: To pass the nginx user to a fastcgi backend (PHP), I have to explicitly specify it using the following directive: fastcgi_param MY_USER $remote_user; I suppose you can do the same with proxy_pass? I dunno how to remove an automatically forwarded parameter though... Maybe overwriting it with an empty string? --- *B. R.* On Wed, May 1, 2013 at 10:26 AM, Russ Lavoy wrote: > Hello, > > I am running nginx as a reverse proxy to a python application. I am > wondering how I would be able to pass ONLY the user account and not the > password. Can this be done? > > Thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 1 17:30:14 2013 From: nginx-forum at nginx.us (lflacayo) Date: Wed, 01 May 2013 13:30:14 -0400 Subject: Question about UPSTREAM configuration... Message-ID: <334eee3fe7a517e9172850ddc425563a.NginxMailingListEnglish@forum.nginx.org> Hello All, Hope some one out there can help me a clear up my understanding. If I have 5 server on an upstream configuration list, to help load balance the load across the 5 servers. Once in a while I am seen "(110: Connection timed out) while connecting to upstream". 
Now what I am not clear on is that message as a result of trying all the servers on the upstream list or after failing on a single entry on the upstream list? if it is a single server then it should transfer the request to the next server on the list, right? Thank you. Luis Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238770,238770#msg-238770 From mdounin at mdounin.ru Wed May 1 17:31:35 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 May 2013 21:31:35 +0400 Subject: limit_req and IP white listing on 0.8.55 In-Reply-To: References: <20130501001217.GY10443@mdounin.ru> Message-ID: <20130501173134.GZ10443@mdounin.ru> Hello! On Wed, May 01, 2013 at 12:04:12AM -0400, nauger wrote: > Hi Maxim, > > Thank you-- that makes sense. Before upgrading, is it possible to implement > this white list behavior using a different mechanism? You may try to use if + set at server level instead of map. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed May 1 17:40:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 May 2013 21:40:09 +0400 Subject: Nginx 1.4 problem In-Reply-To: References: Message-ID: <20130501174009.GA10443@mdounin.ru> Hello! On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake > aborted by server's ACK+RST packet, but netstat shows server > is listening on that port. Any config has been changed since Nginx 1.2 > to 1.4 in this regard? There are lots of changes in 1.4.0 compared to 1.2.x, see http://nginx.org/en/CHANGES-1.4. In this particular case I would recommend checking if nginx is listening on the port, the address, and the protocol in question. Note that since 1.3.4 ipv6only listen option is on by default, and if you have listen [::]:80; in your config, it no longer implies IPv4 addresses regardless of your OS settings. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed May 1 17:42:30 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 May 2013 21:42:30 +0400 Subject: Question about UPSTREAM configuration... In-Reply-To: <334eee3fe7a517e9172850ddc425563a.NginxMailingListEnglish@forum.nginx.org> References: <334eee3fe7a517e9172850ddc425563a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130501174230.GB10443@mdounin.ru> Hello! On Wed, May 01, 2013 at 01:30:14PM -0400, lflacayo wrote: > Hello All, > > Hope some one out there can help me a clear up my understanding. > > If I have 5 server on an upstream configuration list, to help load balance > the load across the 5 servers. Once in a while I am seen "(110: > Connection timed out) while connecting to upstream". Now what I am not > clear on is that message as a result of trying all the servers on the > upstream list or after failing on a single entry on the upstream list? It's about single server (and it should print server's address in the following text). > if > it is a single server then it should transfer the request to the next server > on the list, right? This depends on proxy_next_upstream directive, but usually the answer is yes. -- Maxim Dounin http://nginx.org/en/donation.html From aldernetwork at gmail.com Wed May 1 18:00:34 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 1 May 2013 11:00:34 -0700 Subject: Nginx 1.4 problem In-Reply-To: <20130501174009.GA10443@mdounin.ru> References: <20130501174009.GA10443@mdounin.ru> Message-ID: netstat -pln shows the server is waiting on that port. 
Yes, I have been using in server section listen [::]:80; What is supposed to be for IPV4 now? I'll go over the changelist later, Thanks, - Alder On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin wrote: > Hello! > > On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: > > > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake > > aborted by server's ACK+RST packet, but netstat shows server > > is listening on that port. Any config has been changed since Nginx 1.2 > > to 1.4 in this regard? > > There are lots of changes in 1.4.0 compared to 1.2.x, see > http://nginx.org/en/CHANGES-1.4. > > In this particular case I would recommend checking if nginx is > listening on the port, the address, and the protocol in question. > Note that since 1.3.4 ipv6only listen option is on by default, and > if you have > > listen [::]:80; > > in your config, it no longer implies IPv4 addresses regardless of > your OS settings. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldernetwork at gmail.com Wed May 1 18:17:10 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 1 May 2013 11:17:10 -0700 Subject: Nginx 1.4 problem In-Reply-To: References: <20130501174009.GA10443@mdounin.ru> Message-ID: Just for clarity, I want to be listening on both IPv4 and IPv6 on the same port. On Wed, May 1, 2013 at 11:00 AM, Alder Network wrote: > netstat -pln shows the server is waiting on that port. > > Yes, I have been using in server section > listen [::]:80; > What is supposed to be for IPV4 now? > > I'll go over the changelist later, Thanks, > > - Alder > > > On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: >> >> > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake >> > aborted by server's ACK+RST packet, but netstat shows server >> > is listening on that port. Any config has been changed since Nginx 1.2 >> > to 1.4 in this regard? >> >> There are lots of changes in 1.4.0 compared to 1.2.x, see >> http://nginx.org/en/CHANGES-1.4. >> >> In this particular case I would recommend checking if nginx is >> listening on the port, the address, and the protocol in question. >> Note that since 1.3.4 ipv6only listen option is on by default, and >> if you have >> >> listen [::]:80; >> >> in your config, it no longer implies IPv4 addresses regardless of >> your OS settings. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 1 21:07:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 May 2013 01:07:59 +0400 Subject: Nginx 1.4 problem In-Reply-To: References: <20130501174009.GA10443@mdounin.ru> Message-ID: <20130501210759.GC10443@mdounin.ru> Hello! On Wed, May 01, 2013 at 11:17:10AM -0700, Alder Network wrote: > Just for clarity, I want to be listening on both IPv4 and IPv6 on the same > port. You have to write listen 80; listen [::]:80; to listen on both IPv4 and IPv6. > > > On Wed, May 1, 2013 at 11:00 AM, Alder Network wrote: > > > netstat -pln shows the server is waiting on that port. 
> > > > Yes, I have been using in server section > > listen [::]:80; > > What is supposed to be for IPV4 now? > > > > I'll go over the changelist later, Thanks, > > > > - Alder > > > > > > On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin wrote: > > > >> Hello! > >> > >> On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: > >> > >> > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake > >> > aborted by server's ACK+RST packet, but netstat shows server > >> > is listening on that port. Any config has been changed since Nginx 1.2 > >> > to 1.4 in this regard? > >> > >> There are lots of changes in 1.4.0 compared to 1.2.x, see > >> http://nginx.org/en/CHANGES-1.4. > >> > >> In this particular case I would recommend checking if nginx is > >> listening on the port, the address, and the protocol in question. > >> Note that since 1.3.4 ipv6only listen option is on by default, and > >> if you have > >> > >> listen [::]:80; > >> > >> in your config, it no longer implies IPv4 addresses regardless of > >> your OS settings. > >> > >> -- > >> Maxim Dounin > >> http://nginx.org/en/donation.html > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From paulnpace at gmail.com Wed May 1 21:18:44 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Wed, 1 May 2013 14:18:44 -0700 Subject: Nginx 1.4 problem In-Reply-To: <20130501210759.GC10443@mdounin.ru> References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: On Wed, May 1, 2013 at 2:07 PM, Maxim Dounin wrote: > Hello! > > On Wed, May 01, 2013 at 11:17:10AM -0700, Alder Network wrote: > >> Just for clarity, I want to be listening on both IPv4 and IPv6 on the same >> port. > > You have to write > > listen 80; > listen [::]:80; > > to listen on both IPv4 and IPv6. Doesn't that require ipv6only=on? listen 80; listen [::]:80 ipv6only=on; > >> >> >> On Wed, May 1, 2013 at 11:00 AM, Alder Network wrote: >> >> > netstat -pln shows the server is waiting on that port. >> > >> > Yes, I have been using in server section >> > listen [::]:80; >> > What is supposed to be for IPV4 now? >> > >> > I'll go over the changelist later, Thanks, >> > >> > - Alder >> > >> > >> > On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin wrote: >> > >> >> Hello! >> >> >> >> On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: >> >> >> >> > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake >> >> > aborted by server's ACK+RST packet, but netstat shows server >> >> > is listening on that port. Any config has been changed since Nginx 1.2 >> >> > to 1.4 in this regard? >> >> >> >> There are lots of changes in 1.4.0 compared to 1.2.x, see >> >> http://nginx.org/en/CHANGES-1.4. >> >> >> >> In this particular case I would recommend checking if nginx is >> >> listening on the port, the address, and the protocol in question. >> >> Note that since 1.3.4 ipv6only listen option is on by default, and >> >> if you have >> >> >> >> listen [::]:80; >> >> >> >> in your config, it no longer implies IPv4 addresses regardless of >> >> your OS settings. 
>> >> >> >> -- >> >> Maxim Dounin >> >> http://nginx.org/en/donation.html >> >> >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> > >> > > >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From aldernetwork at gmail.com Wed May 1 21:25:03 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 1 May 2013 14:25:03 -0700 Subject: Nginx 1.4 problem In-Reply-To: References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: I tried listen [::]:80 ipv6only=off; and the TCP connection went through, but got an http 400 bad response. It would be nice to have a corresponding config upgrade manual as well. Thanks, On Wed, May 1, 2013 at 2:18 PM, Paul N. Pace wrote: > On Wed, May 1, 2013 at 2:07 PM, Maxim Dounin wrote: > > Hello! > > > > On Wed, May 01, 2013 at 11:17:10AM -0700, Alder Network wrote: > > > >> Just for clarity, I want to be listening on both IPv4 and IPv6 on the > same > >> port. > > > > You have to write > > > > listen 80; > > listen [::]:80; > > > > to listen on both IPv4 and IPv6. > > Doesn't that require ipv6only=on? > > listen 80; > listen [::]:80 ipv6only=on; > > > > > >> > >> > >> On Wed, May 1, 2013 at 11:00 AM, Alder Network >wrote: > >> > >> > netstat -pln shows the server is waiting on that port. > >> > > >> > Yes, I have been using in server section > >> > listen [::]:80; > >> > What is supposed to be for IPV4 now? > >> > > >> > I'll go over the changelist later, Thanks, > >> > > >> > - Alder > >> > > >> > > >> > On Wed, May 1, 2013 at 10:40 AM, Maxim Dounin > wrote: > >> > > >> >> Hello! > >> >> > >> >> On Wed, May 01, 2013 at 10:13:34AM -0700, Alder Network wrote: > >> >> > >> >> > Tried to upgrade to just-released Nginx1.4. TCP 3-way hand-shake > >> >> > aborted by server's ACK+RST packet, but netstat shows server > >> >> > is listening on that port. Any config has been changed since Nginx > 1.2 > >> >> > to 1.4 in this regard? > >> >> > >> >> There are lots of changes in 1.4.0 compared to 1.2.x, see > >> >> http://nginx.org/en/CHANGES-1.4. > >> >> > >> >> In this particular case I would recommend checking if nginx is > >> >> listening on the port, the address, and the protocol in question. > >> >> Note that since 1.3.4 ipv6only listen option is on by default, and > >> >> if you have > >> >> > >> >> listen [::]:80; > >> >> > >> >> in your config, it no longer implies IPv4 addresses regardless of > >> >> your OS settings. 
> >> >> > >> >> -- > >> >> Maxim Dounin > >> >> http://nginx.org/en/donation.html > >> >> > >> >> _______________________________________________ > >> >> nginx mailing list > >> >> nginx at nginx.org > >> >> http://mailman.nginx.org/mailman/listinfo/nginx > >> >> > >> > > >> > > > > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 1 21:27:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 May 2013 01:27:42 +0400 Subject: Nginx 1.4 problem In-Reply-To: References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: <20130501212742.GD10443@mdounin.ru> Hello! On Wed, May 01, 2013 at 02:18:44PM -0700, Paul N. Pace wrote: > On Wed, May 1, 2013 at 2:07 PM, Maxim Dounin wrote: > > Hello! > > > > On Wed, May 01, 2013 at 11:17:10AM -0700, Alder Network wrote: > > > >> Just for clarity, I want to be listening on both IPv4 and IPv6 on the same > >> port. > > > > You have to write > > > > listen 80; > > listen [::]:80; > > > > to listen on both IPv4 and IPv6. > > Doesn't that require ipv6only=on? > > listen 80; > listen [::]:80 ipv6only=on; As of nginx 1.3.4+, it's on by default: [...] > >> >> Note that since 1.3.4 ipv6only listen option is on by default, and > >> >> if you have [...] -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Wed May 1 21:45:55 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 1 May 2013 22:45:55 +0100 Subject: HTTP Basic Auth question In-Reply-To: References: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> Message-ID: <20130501214555.GG27406@craic.sysops.org> On Wed, May 01, 2013 at 01:17:41PM -0400, B.R. wrote: Hi there, > To pass the nginx user to a fastcgi backend (PHP), I have to explicitly > specify it using the following directive: > fastcgi_param MY_USER $remote_user; > > I suppose you can do the same with proxy_pass? That's how I'd do it -- probably proxy_set_header if the python application is accessed using proxy_pass. > I dunno how to remove an automatically forwarded parameter though... Maybe > overwriting it with an empty string? The password is in the http header Authorization, so using proxy_hide_header to avoid sending that should be enough. > On Wed, May 1, 2013 at 10:26 AM, Russ Lavoy wrote: > > I am running nginx as a reverse proxy to a python application. I am > > wondering how I would be able to pass ONLY the user account and not the > > password. Can this be done? As above: how are the user and pass currently sent? It will be by "fastcgi_pass" or "proxy_pass" or something similar. Use the matching "_hide_header" directive on the correct header to avoid sending it. How do you want the user to be sent? Use the variable $remote_user and the matching "_set_header" or "_param" directive to send the provided username. 
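For the proxy_pass case, a minimal sketch could look like this (the header name, htpasswd path and backend address are placeholders; overriding Authorization with an empty value is one way of keeping the Basic credentials from being forwarded):

    location / {
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/htpasswd;         # placeholder path
        proxy_set_header     X-Remote-User $remote_user;  # pass only the username
        proxy_set_header     Authorization "";            # empty value: header is not passed upstream
        proxy_pass           http://127.0.0.1:8000;       # placeholder backend
    }

The python application then reads the username from the X-Remote-User request header instead of parsing Authorization itself.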
f -- Francis Daly francis at daoine.org From gregm at servu.net.au Thu May 2 07:12:02 2013 From: gregm at servu.net.au (Greg M) Date: Thu, 2 May 2013 07:12:02 +0000 Subject: Help with nginx proxy store as a frontend MP4 Anycast CDN Message-ID: <758BD56B30B33343A02413D9CADBD3431003C475@SIXPRD0310MB372.apcprd03.prod.outlook.com> Hi, I have been reading up on the nginx documentation and also stackoverflow about creating an Anycast CDN based on nginx-frontends. We currently use nginx on the backend, and Squid for caching images and mp4's - however Squid is incredibly difficult to cache the mp4's as it cant handle the moov atom's, and thus is always just proxying direct from our servers. We've read that an nginx proxy "store" can actually store content locally - but how is that store defined and managed ? For example, we have 6 CDN endpoints around the world, each with 200G of SSD space, so ideally we'd like nginx to cache around 50-70G of videos (out of a total of 1.2TB on the backend). Here is our basic config: server { listen x.x.x.x:80; server_name cdn.blah.com; location ~ \.mp4$ { mp4; proxy_pass http://origin; proxy_store /cache/videocdn/$request_uri; proxy_store_access user:rw group:rw all:r; proxy_temp_path /cache/nginx; } } The first question I have is - how do we limit how much this above configuration will hold ? Do we need to manually run a bash script that checks for old mp4's based on access time and purge ? The final question is, how can we have all requests to initially check the local cache for content ? In the same way on this request was asked http://forum.nginx.org/read.php?2,11284,11284 Maxim advised its possible to return "X-Accel-Redirect" to the file in the store. We would also want this for in-progress caching attempts, for example if I requested cdn.blah.com/1.mp4 on a 2nd machine, before the first machine's request for the same file has had a chance to fully cache in the store. Thanks in advance for any assistance! Regards, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 2 13:54:33 2013 From: nginx-forum at nginx.us (lflacayo) Date: Thu, 02 May 2013 09:54:33 -0400 Subject: Question about UPSTREAM configuration... In-Reply-To: <20130501174230.GB10443@mdounin.ru> References: <20130501174230.GB10443@mdounin.ru> Message-ID: Thank you for your prompt reply. just one more ... If I have 5 servers on the upstream, use the IP_HASH, is the load distribution based on a round robin when requests would go to the first on the list. then the second, the third, .... Or does NGIX create some kind of distribution array. Thank you in advance. Luis Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238770,238794#msg-238794 From nginx-forum at nginx.us Thu May 2 15:53:12 2013 From: nginx-forum at nginx.us (flarik) Date: Thu, 02 May 2013 11:53:12 -0400 Subject: understanding break and add_header combo Message-ID: <081c54f1773a6d3caf792a54f08f6037.NginxMailingListEnglish@forum.nginx.org> Hello, I'm trying to undestand the break; statement in combination with add_header combo, on the wiki it says the following: "Completes the current set of rules. Continue processing within the current location block but do not process any more rewrite directives." I read this as everything that has todo with the HttpRewriteModule will not be processed any more after this statement. 
I have the following piece of config for a rails app: server { listen 80; server_name x; root /var/www/vhost/x/current/public; passenger_enabled on; location ~ ^/assets/ { expires 1y; add_header Cache-Control public; # Some browsers still send conditional-GET requests if there's a # Last-Modified header or an ETag header even if they haven't # reached the expiry date sent in the Expires header. add_header Last-Modified ""; add_header ETag ""; break; } # Webfonts (they are in /assets) location ~* \.(ttf|ttc|otf|eot|woff)$ { add_header Access-Control-Allow-Methods GET,OPTIONS; add_header Access-Control-Allow-Headers *; add_header Access-Control-Allow-Origin *; } } Now the add_header stuff for webfonts is never set, and I do not understand why break has a part in this. When I remove break; it works, when i put the font location stuff before location ~^/assets/ it also works. Hope someone can shine some light on this. Kind Regards, Frodo Larik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238797,238797#msg-238797 From paulnpace at gmail.com Thu May 2 20:05:29 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Thu, 2 May 2013 13:05:29 -0700 Subject: No subject Message-ID: I am trying my first install of Mailman. I have a working Postfix/Dovecot/MySQL mail server running Ubuntu 12.04 and nginx stable 1.2.7. I am following the ngnix Mailman wiki article http://wiki.nginx.org/Mailman and the Ubuntu Official Documentation for setting up Mailman. https://help.ubuntu.com/12.04/serverguide/mailman.html I had to install thttpd from the deb file because starting with 12.04 it is not available in the repositories. Other than that, I tried to follow both guides to the letter. When I go to http://lists.example.com I get redirected to http://lists.example.com/mailman/listinfo (on Chrome and FF, but not IE) and I get a 400 Bad Request Request Header Or Cookie Too Large. Any ideas on where to start looking? From francis at daoine.org Thu May 2 20:21:41 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 2 May 2013 21:21:41 +0100 Subject: your mail In-Reply-To: References: Message-ID: <20130502202141.GH27406@craic.sysops.org> On Thu, May 02, 2013 at 01:05:29PM -0700, Paul N. Pace wrote: Hi there, > Other than that, I tried to follow both guides to the letter. When I > go to http://lists.example.com I get redirected to > http://lists.example.com/mailman/listinfo (on Chrome and FF, but not > IE) and I get a 400 Bad Request Request Header Or Cookie Too Large. Different redirection per client is unexpected. I'm guessing that the browser cache wasn't cleared? It's frequently simplest to test using "curl" to see exactly what response is sent. > Any ideas on where to start looking? Your nginx.conf almost certainly does a "proxy_pass" to the web server that actually runs mailman. I suggest you confirm that mailman is installed and working correctly on that web server -- if it isn't, nginx won't help. If the 400 error comes from nginx, there should be something in the logs to indicate the nature of the problem. 
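One way to make those logs more talkative while reproducing the problem: error_log accepts a severity level, so the affected server block can temporarily run with something like

    error_log /path/to/error.log info;   # placeholder path; client request errors such as over-long headers are logged at the info level

and the reason nginx returned the 400 should then appear next to the request.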
f -- Francis Daly francis at daoine.org From francis at daoine.org Thu May 2 20:30:43 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 2 May 2013 21:30:43 +0100 Subject: understanding break and add_header combo In-Reply-To: <081c54f1773a6d3caf792a54f08f6037.NginxMailingListEnglish@forum.nginx.org> References: <081c54f1773a6d3caf792a54f08f6037.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130502203043.GI27406@craic.sysops.org> On Thu, May 02, 2013 at 11:53:12AM -0400, flarik wrote: Hi there, > location ~ ^/assets/ { > location ~* \.(ttf|ttc|otf|eot|woff)$ { > Now the add_header stuff for webfonts is never set, and I do not understand > why break has a part in this. When I remove break; it works, > when i put the font location stuff before location ~^/assets/ it also > works. What request do you make? What response do you get? What response do you expect? In nginx, one request is handled in one location. So I expect that no more than one set of add_header directives will apply. And as you only show regex locations, if the first one matches then it is the one that is used. f -- Francis Daly francis at daoine.org From paulnpace at gmail.com Thu May 2 20:57:47 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Thu, 2 May 2013 13:57:47 -0700 Subject: your mail In-Reply-To: <20130502202141.GH27406@craic.sysops.org> References: <20130502202141.GH27406@craic.sysops.org> Message-ID: On Thu, May 2, 2013 at 1:21 PM, Francis Daly wrote: > On Thu, May 02, 2013 at 01:05:29PM -0700, Paul N. Pace wrote: > > Hi there, > >> Other than that, I tried to follow both guides to the letter. When I >> go to http://lists.example.com I get redirected to >> http://lists.example.com/mailman/listinfo (on Chrome and FF, but not >> IE) and I get a 400 Bad Request Request Header Or Cookie Too Large. > > Different redirection per client is unexpected. I'm guessing that the > browser cache wasn't cleared? It's frequently simplest to test using > "curl" to see exactly what response is sent. I did try clearing cache and cookies as well as opening the site on a device that had never opened it (my BlackBerry) and received the same error. Curl just states "moved permanently" as per the changes put in the sites-available file (see below). > >> Any ideas on where to start looking? > > Your nginx.conf almost certainly does a "proxy_pass" to the web server > that actually runs mailman. > > I suggest you confirm that mailman is installed and working correctly > on that web server -- if it isn't, nginx won't help. How to do this other than viewing the mailman page? > If the 400 error comes from nginx, there should be something in the logs > to indicate the nature of the problem. Strangely, the logs do not state any errors. This is the server block I added to sites-available file (mostly) as per the nginx wiki. Was I supposed to add this to the nginx.conf file? 
server { listen [::]:80; server_name lists.example.com; root /usr/lib; access_log /var/www/example.com/logs/access.log; error_log /var/www/example.com/logs/error.log; location = / { rewrite ^ /mailman/listinfo permanent; } location / { rewrite ^ /mailman$uri?$args; } location = /mailman/ { rewrite ^ /mailman/listinfo permanent; } location /mailman/ { include proxy_params; proxy_pass http://127.0.0.1/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /cgi-bin { rewrite ^/cgi-bin(.*)$ $1 permanent; } location /images/mailman { alias /var/lib/mailman/icons; } location /pipermail { alias /var/lib/mailman/archives/public; autoindex on; } } > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Thu May 2 21:26:58 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 2 May 2013 22:26:58 +0100 Subject: your mail In-Reply-To: References: <20130502202141.GH27406@craic.sysops.org> Message-ID: <20130502212658.GJ27406@craic.sysops.org> On Thu, May 02, 2013 at 01:57:47PM -0700, Paul N. Pace wrote: > On Thu, May 2, 2013 at 1:21 PM, Francis Daly wrote: > > On Thu, May 02, 2013 at 01:05:29PM -0700, Paul N. Pace wrote: Hi there, > >> Other than that, I tried to follow both guides to the letter. When I > >> go to http://lists.example.com I get redirected to > >> http://lists.example.com/mailman/listinfo (on Chrome and FF, but not > >> IE) and I get a 400 Bad Request Request Header Or Cookie Too Large. > > > > Different redirection per client is unexpected. I'm guessing that the > > browser cache wasn't cleared? It's frequently simplest to test using > > "curl" to see exactly what response is sent. > > I did try clearing cache and cookies as well as opening the site on a > device that had never opened it (my BlackBerry) and received the same > error. Chrome, FF, and Blackberry get a redirect, and IE gets some other unspecified response? nginx access log and error log should indicate why that difference is there. > Curl just states "moved permanently" as per the changes put in the > sites-available file (see below). "curl -i" will show the actual http response including headers; that is probably most useful. (And then make a new request to the redirect location with whatever cookie headers are indicated, and repeat until you see the failure.) > >> Any ideas on where to start looking? > > > > Your nginx.conf almost certainly does a "proxy_pass" to the web server > > that actually runs mailman. > > > > I suggest you confirm that mailman is installed and working correctly > > on that web server -- if it isn't, nginx won't help. > > How to do this other than viewing the mailman page? View the mailman page directly, without going through nginx. Either go to 127.0.0.1, or get the other web server to listen on an accessible ip:port and use that. > > If the 400 error comes from nginx, there should be something in the logs > > to indicate the nature of the problem. > > Strangely, the logs do not state any errors. That suggests that you may not be talking to this nginx. Be aware, though, that a 400 error (Bad Request) may be logged to the file appropriate at http{} level, not at server{} level, since a bad request may not allow the correct server to be determined. > This is the server block > I added to sites-available file (mostly) as per the nginx wiki. Was I > supposed to add this to the nginx.conf file? 
The wiki page I see doesn't mention sites-available. When nginx starts it reads one file, usually nginx.conf. That file may use the "include" directive to cause other files to be read as if they appeared directly in the one file. If you don't know whether this server{} block is being used for this request, you could try adding something like location = /test/ { return 200 "this is it"; } and then do "curl -i http://lists.example.com/test/" and see what comes back. > location = / { > rewrite ^ /mailman/listinfo permanent; > } So: when you access http://lists.example.com/, you expect a redirect to http://lists.example.com/mailman/listinfo. > location /mailman/ { > include proxy_params; > proxy_pass http://127.0.0.1/; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > } And when you access http://lists.example.com/mailman/listinfo, you expect whatever the other web server returns. But the other web server is the one listening on 127.0.0.1:80. Does "netstat -an | grep LISTEN | grep 80" show that that is there and working? I thought your nginx was bound to port 80? f -- Francis Daly francis at daoine.org From cloos at jhcloos.com Fri May 3 06:06:55 2013 From: cloos at jhcloos.com (James Cloos) Date: Fri, 03 May 2013 02:06:55 -0400 Subject: Nginx 1.4 problem In-Reply-To: (Alder Network's message of "Wed, 1 May 2013 14:25:03 -0700") References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: >>>>> "AN" == Alder Network writes: AN> I tried AN> listen [::]:80 ipv6only=off; Although [::]:80 ipv6only=off; does work as advertized (including for localhost sockets), [::1]:80 ipv6only=off; fails to respond to v4 connections. -JimC -- James Cloos OpenPGP: 1024D/ED7DAEA6 From luky-37 at hotmail.com Fri May 3 08:00:35 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 3 May 2013 10:00:35 +0200 Subject: Nginx 1.4 problem In-Reply-To: References: , <20130501174009.GA10443@mdounin.ru>, , , <20130501210759.GC10443@mdounin.ru>, , , Message-ID: > Although [::]:80 ipv6only=off; does work as advertized (including for > localhost sockets), [::1]:80 ipv6only=off; fails to respond to v4 > connections. Which is expected, since ::1 is an ipv6 address. Lukas From nginx-forum at nginx.us Fri May 3 08:00:45 2013 From: nginx-forum at nginx.us (flarik) Date: Fri, 03 May 2013 04:00:45 -0400 Subject: understanding break and add_header combo In-Reply-To: <20130502203043.GI27406@craic.sysops.org> References: <20130502203043.GI27406@craic.sysops.org> Message-ID: <8ecbbf659a365467aa17a46c6a5b9b60.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- Hello Francis, thanks for your response. > In nginx, one request is handled in one location. So I expect that no more > than one set of add_header directives will apply. And as you only show > regex locations, if the first one matches then it is the one that is used. Ah, that makes sense, thanks! Regards, Frodo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238797,238808#msg-238808 From rkearsley at blueyonder.co.uk Fri May 3 11:41:02 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Fri, 03 May 2013 12:41:02 +0100 Subject: proxy pass keepalive without upstream module Message-ID: <5183A24E.8070302@blueyonder.co.uk> Hi I read here that keepalives to backend can be enabled with the upstream module (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) But can they be used without defining an upstream block? 
Just a simple proxy_pass as the backend is a variable in my case: 'proxy_pass $proxy_to;' Many thanks From nginx-forum at nginx.us Fri May 3 20:03:51 2013 From: nginx-forum at nginx.us (Alexey Koscheev) Date: Fri, 03 May 2013 16:03:51 -0400 Subject: OCSP response: no response sent In-Reply-To: <20121005111033.GM40452@mdounin.ru> References: <20121005111033.GM40452@mdounin.ru> Message-ID: Hi! > > Additionally, while looking into this I've found that due to > > OpenSSL bug the OCSP stapling won't work at all if it's not > > enabled in the default server. Please, add this to the documentation. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,231363,238814#msg-238814 From nginx-forum at nginx.us Fri May 3 20:17:17 2013 From: nginx-forum at nginx.us (nano) Date: Fri, 03 May 2013 16:17:17 -0400 Subject: [crit] 16665#0 unlink() Message-ID: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> Hello, I'm using nginx 1.4.0 to proxy a website, and I cache responses. I haven't noticed any problems on the front end, but the error log has unlink() errors. 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/8/9f/42da8f2662887b05cbb46fd5c9dac9f8" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/8/7d/f16e1a9cee13b3a9852fff331491d7d8" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/d/96/2b1e341ee2ccd315643dcad397b9796d" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/6/87/c3324c5f79272b6fff64ac19be2d0876" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/c/ae/aa5ee91c36f7ab931251dd125a200aec" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/c/d8/2ac585aa18ec25e3a8eab19b096dcd8c" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/2/94/77170f4b850dcc5bae0e93bdf0f07942" failed (2: No such file or directory) 2013/05/03 12:53:42 [crit] 16665#0: unlink() "/usr/local/nginx/cache/3/cd/f92020ab245f9be3bab04cf8bf93acd3" failed (2: No such file or directory) The list goes on. Is this something to be concerned of? 
My configuration is psuedo but here is the main parts: #reverse ssl (usage not shown in examples) proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=static-files:10m inactive=24h max_size=1g; #main site cache proxy_cache_path /usr/local/nginx/cache levels=1:2 keys_zone=page-cache:10m inactive=24h max_size=1g; #main site location / { proxy_http_version 1.0; proxy_set_header Accept-Encoding ""; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host *********; proxy_ignore_headers Set-Cookie; proxy_ignore_headers Cache-Control; proxy_cache page-cache; proxy_cache_key $scheme$proxy_host$request_uri; proxy_cache_valid 200 30m; proxy_cache_valid 404 1m; proxy_intercept_errors on; proxy_cache_use_stale error timeout invalid_header updating http_502 http_500 http_503 http_504; add_header X-Cache $upstream_cache_status; proxy_pass *********; } #sub domain location / { proxy_http_version 1.0; proxy_set_header Accept-Encoding ""; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host *********; proxy_ignore_headers Set-Cookie; proxy_ignore_headers Cache-Control; proxy_cache page-cache; proxy_cache_key $scheme$proxy_host$request_uri; proxy_cache_valid 200 30m; proxy_cache_valid 404 1m; proxy_intercept_errors on; proxy_cache_use_stale error timeout invalid_header updating http_502 http_500 http_503 http_504; add_header X-Cache $upstream_cache_status; proxy_pass *********; } Is this bad practice to share caches among subdomains? Is sharing the cache the reason why I'm getting unlink() errors? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238815,238815#msg-238815 From mdounin at mdounin.ru Fri May 3 22:01:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 May 2013 02:01:11 +0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130503220111.GB69760@mdounin.ru> Hello! On Fri, May 03, 2013 at 04:17:17PM -0400, nano wrote: > Hello, > > I'm using nginx 1.4.0 to proxy a website, and I cache responses. I haven't > noticed any problems on the front end, but the error log has unlink() > errors. [...] > The list goes on. > > Is this something to be concerned of? > > My configuration is psuedo but here is the main parts: > > #reverse ssl (usage not shown in examples) > proxy_cache_path /usr/local/nginx/cache levels=1:2 > keys_zone=static-files:10m inactive=24h max_size=1g; > > #main site cache > proxy_cache_path /usr/local/nginx/cache levels=1:2 > keys_zone=page-cache:10m inactive=24h max_size=1g; You've configured two distinct caches to use single directory. This is not how it's expected to work. You should use distinct directories for each cache you configure. If you want different locations to use the same cache - just use the same cache in the proxy_cache directive. [...] > Is this bad practice to share caches among subdomains? Is sharing the cache > the reason why I'm getting unlink() errors? It's ok to use the same cache for different locations/servers. But it's really bad idea to configure multiple caches in the same directory, and this is what causes your problems. 
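A sketch of the distinct-directories variant, keeping everything else from the posted configuration (the subdirectory names are arbitrary):

    proxy_cache_path /usr/local/nginx/cache/static levels=1:2 keys_zone=static-files:10m inactive=24h max_size=1g;
    proxy_cache_path /usr/local/nginx/cache/pages  levels=1:2 keys_zone=page-cache:10m  inactive=24h max_size=1g;

Alternatively, drop one of the two zones and point both location blocks at the remaining one with "proxy_cache page-cache;".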
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri May 3 22:37:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 May 2013 02:37:37 +0400 Subject: proxy pass keepalive without upstream module In-Reply-To: <5183A24E.8070302@blueyonder.co.uk> References: <5183A24E.8070302@blueyonder.co.uk> Message-ID: <20130503223737.GE69760@mdounin.ru> Hello! On Fri, May 03, 2013 at 12:41:02PM +0100, Richard Kearsley wrote: > Hi > I read here that keepalives to backend can be enabled with the > upstream module (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) > But can they be used without defining an upstream block? Just a > simple proxy_pass as the backend is a variable in my case: > 'proxy_pass $proxy_to;' No. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Fri May 3 23:49:49 2013 From: nginx-forum at nginx.us (nano) Date: Fri, 03 May 2013 19:49:49 -0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <20130503220111.GB69760@mdounin.ru> References: <20130503220111.GB69760@mdounin.ru> Message-ID: <5886bed74540b4619e1fcde40b47fc3d.NginxMailingListEnglish@forum.nginx.org> Thank you for such a quick reply Maxim! You solved my problem, thank you very much. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238815,238824#msg-238824 From nginx-forum at nginx.us Sat May 4 01:44:14 2013 From: nginx-forum at nginx.us (zakaria) Date: Fri, 03 May 2013 21:44:14 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise Message-ID: <0736f2c4c46e18d01f6c02298ce2eafc.NginxMailingListEnglish@forum.nginx.org> Hi, I must be missing something obvious here. I rerun my ubuntu configuration script and suddenly my nginx setup not working correctly anymore. Here my relevant nginx config: ---------------------------------------------------------------------- location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; try_files $fastcgi_script_name =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } ---------------------------------------------------------------------- And I test it with /test.php ----------------------------------------------------------------------
---------------------------------------------------------------------- When I request http://lemp.test/test.php/foo/bar.php the result is: ---------------------------------------------------------------------- array ( 'USER' => 'www-data', 'HOME' => '/var/www', 'FCGI_ROLE' => 'RESPONDER', 'PATH_INFO' => '', 'QUERY_STRING' => '', 'REQUEST_METHOD' => 'GET', 'CONTENT_TYPE' => '', 'CONTENT_LENGTH' => '', 'SCRIPT_FILENAME' => '/var/www/test.php', 'SCRIPT_NAME' => '/test.php', 'REQUEST_URI' => '/test.php/foo/bar.php', 'DOCUMENT_URI' => '/test.php', 'DOCUMENT_ROOT' => '/var/www', 'SERVER_PROTOCOL' => 'HTTP/1.1', 'GATEWAY_INTERFACE' => 'CGI/1.1', 'SERVER_SOFTWARE' => 'nginx/1.4.0', 'REMOTE_ADDR' => '192.168.56.1', 'REMOTE_PORT' => '59200', 'SERVER_ADDR' => '192.168.56.3', 'SERVER_PORT' => '80', 'SERVER_NAME' => '', 'HTTPS' => '', 'REDIRECT_STATUS' => '200', 'HTTP_HOST' => 'lemp.test', 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) Gecko/20100101 Firefox/20.0', 'HTTP_ACCEPT' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', 'HTTP_CONNECTION' => 'keep-alive', 'HTTP_CACHE_CONTROL' => 'max-age=0', 'PHP_SELF' => '/test.php', 'REQUEST_TIME' => 1367630910, ) ---------------------------------------------------------------------- What did I miss? Sincerely yours, -- Zakaria Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238825#msg-238825 From steve at greengecko.co.nz Sat May 4 01:53:58 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sat, 04 May 2013 13:53:58 +1200 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <0736f2c4c46e18d01f6c02298ce2eafc.NginxMailingListEnglish@forum.nginx.org> References: <0736f2c4c46e18d01f6c02298ce2eafc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1367632438.25673.1517.camel@steve-new> Have you added cgi.fix_pathinfo=0 into /etc/php5/fpm/php.ini and restarted php? Steve On Fri, 2013-05-03 at 21:44 -0400, zakaria wrote: > Hi, > > I must be missing something obvious here. > I rerun my ubuntu configuration script and suddenly my nginx setup not > working correctly anymore. > > Here my relevant nginx config: > ---------------------------------------------------------------------- > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > fastcgi_param PATH_INFO $fastcgi_path_info; > try_files $fastcgi_script_name =404; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > } > ---------------------------------------------------------------------- > > And I test it with /test.php > ---------------------------------------------------------------------- >
> ---------------------------------------------------------------------- > > When I request http://lemp.test/test.php/foo/bar.php the result is: > ---------------------------------------------------------------------- > array ( > 'USER' => 'www-data', > 'HOME' => '/var/www', > 'FCGI_ROLE' => 'RESPONDER', > 'PATH_INFO' => '', > 'QUERY_STRING' => '', > 'REQUEST_METHOD' => 'GET', > 'CONTENT_TYPE' => '', > 'CONTENT_LENGTH' => '', > 'SCRIPT_FILENAME' => '/var/www/test.php', > 'SCRIPT_NAME' => '/test.php', > 'REQUEST_URI' => '/test.php/foo/bar.php', > 'DOCUMENT_URI' => '/test.php', > 'DOCUMENT_ROOT' => '/var/www', > 'SERVER_PROTOCOL' => 'HTTP/1.1', > 'GATEWAY_INTERFACE' => 'CGI/1.1', > 'SERVER_SOFTWARE' => 'nginx/1.4.0', > 'REMOTE_ADDR' => '192.168.56.1', > 'REMOTE_PORT' => '59200', > 'SERVER_ADDR' => '192.168.56.3', > 'SERVER_PORT' => '80', > 'SERVER_NAME' => '', > 'HTTPS' => '', > 'REDIRECT_STATUS' => '200', > 'HTTP_HOST' => 'lemp.test', > 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) > Gecko/20100101 Firefox/20.0', > 'HTTP_ACCEPT' => > 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', > 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', > 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', > 'HTTP_CONNECTION' => 'keep-alive', > 'HTTP_CACHE_CONTROL' => 'max-age=0', > 'PHP_SELF' => '/test.php', > 'REQUEST_TIME' => 1367630910, > ) > ---------------------------------------------------------------------- > > What did I miss? > > Sincerely yours, > > > -- Zakaria > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238825#msg-238825 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa From francis at daoine.org Sat May 4 11:56:56 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 4 May 2013 12:56:56 +0100 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <0736f2c4c46e18d01f6c02298ce2eafc.NginxMailingListEnglish@forum.nginx.org> References: <0736f2c4c46e18d01f6c02298ce2eafc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130504115656.GK27406@craic.sysops.org> On Fri, May 03, 2013 at 09:44:14PM -0400, zakaria wrote: Hi there, > I must be missing something obvious here. > I rerun my ubuntu configuration script and suddenly my nginx setup not > working correctly anymore. What output do you expect? And if it not obvious: how does that differ from this output? I *think* you're reporting that PATH_INFO is unexpectedly empty in $_SERVER, in which case adding "fastcgi_param TEST_PATH_INFO $fastcgi_path_info;" and retrying might give a hint as to where the problem is. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat May 4 12:42:52 2013 From: nginx-forum at nginx.us (nevernet) Date: Sat, 04 May 2013 08:42:52 -0400 Subject: how to disable nginx internal dns cache? Message-ID: my network structure is: proxy server 1(nginx)-->proxy server2(nginx)-->web server proxy server2 has dynamic ip address. the ip address will be changed untime. so i use one domain for it: p2.domain.com. but proxy server1(nginx) has been cached the ip of domain: p2.domain.com. so when the new ip was assigned to p2.domain.com, but proxy server1(nginx) still has old ip address. 
i have searched on google, somebody said that nginx will cache the dns entry for 300 seconds(5minutes), but after 5 minutes ,event after 5 hours , the proxy server1(nginx) still has old ip address, please help me, any ideas will be appreciated. thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238835,238835#msg-238835 From francis at daoine.org Sat May 4 13:03:50 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 4 May 2013 14:03:50 +0100 Subject: how to disable nginx internal dns cache? In-Reply-To: References: Message-ID: <20130504130350.GL27406@craic.sysops.org> On Sat, May 04, 2013 at 08:42:52AM -0400, nevernet wrote: Hi there, > my network structure is: > proxy server 1(nginx)-->proxy server2(nginx)-->web server > > proxy server2 has dynamic ip address. the ip address will be changed > untime. so i use one domain for it: p2.domain.com. Untested, but http://nginx.org/r/proxy_pass suggests that you want a resolver defined, and a variable for the server name to connect to. Does that match your configuration? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat May 4 14:20:32 2013 From: nginx-forum at nginx.us (mschipperheyn) Date: Sat, 04 May 2013 10:20:32 -0400 Subject: redirect loop with try_files Message-ID: <515b3fe71cca963fec522f6850add2ab.NginxMailingListEnglish@forum.nginx.org> Hi, I have a front end nginx webserver with behind it a tomcat server. I ran into a nasty little redirect loop issue that I still don't understand. I'm trying to redirect users who have a session cookie to a different page than the other users I have a configuration such as this: ## Rewrite index requests rewrite ^(.*)/index.(.*)$ $1/ permanent; map $cookie_msa_country $ctry { default 0; NL "nederland/NL_"; BR "brasil/BR_"; } map $cookie_msa_lng $lng { default "nl"; nl "nl"; pt "pt"; es "es"; en "en"; } location = / { set $red 0; if ($http_cookie ~* "JSESSIONID"){ set $red 1; } if ($ctry = 0){ set $red 0; } if ($red = 1){ rewrite ^(.*)$ http://$host/$ctry$lng/account/wall/; } try_files /notthere.html @proxy; } When a users types in domain www.site.com, he ends up in an endless loop. I tried various things but without luck. Any suggestions how to avoid this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238837,238837#msg-238837 From nginx-forum at nginx.us Sat May 4 14:30:23 2013 From: nginx-forum at nginx.us (zakaria) Date: Sat, 04 May 2013 10:30:23 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <1367632438.25673.1517.camel@steve-new> References: <1367632438.25673.1517.camel@steve-new> Message-ID: <35fa46e821a9cb01a368248650c87966.NginxMailingListEnglish@forum.nginx.org> Not much different, now even PHP_SELF not set! 
---------------------------------------------------------------------- array ( 'USER' => 'www-data', 'HOME' => '/var/www', 'FCGI_ROLE' => 'RESPONDER', 'PATH_INFO' => '', 'QUERY_STRING' => '', 'REQUEST_METHOD' => 'GET', 'CONTENT_TYPE' => '', 'CONTENT_LENGTH' => '', 'SCRIPT_FILENAME' => '/var/www/test.php', 'SCRIPT_NAME' => '/test.php', 'REQUEST_URI' => '/test.php/foo/bar.php', 'DOCUMENT_URI' => '/test.php', 'DOCUMENT_ROOT' => '/var/www', 'SERVER_PROTOCOL' => 'HTTP/1.1', 'GATEWAY_INTERFACE' => 'CGI/1.1', 'SERVER_SOFTWARE' => 'nginx/1.4.0', 'REMOTE_ADDR' => '192.168.56.1', 'REMOTE_PORT' => '55961', 'SERVER_ADDR' => '192.168.56.3', 'SERVER_PORT' => '80', 'SERVER_NAME' => '', 'HTTPS' => '', 'REDIRECT_STATUS' => '200', 'HTTP_HOST' => 'lemp.test', 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) Gecko/20100101 Firefox/20.0', 'HTTP_ACCEPT' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', 'HTTP_CONNECTION' => 'keep-alive', 'HTTP_CACHE_CONTROL' => 'max-age=0', 'PHP_SELF' => '', 'REQUEST_TIME' => 1367637739, ) ---------------------------------------------------------------------- GreenGecko Wrote: ------------------------------------------------------- > Have you added > > cgi.fix_pathinfo=0 > > into /etc/php5/fpm/php.ini and restarted php? > > Steve > > On Fri, 2013-05-03 at 21:44 -0400, zakaria wrote: > > Hi, > > > > I must be missing something obvious here. > > I rerun my ubuntu configuration script and suddenly my nginx setup > not > > working correctly anymore. > > > > Here my relevant nginx config: > > > ---------------------------------------------------------------------- > > location ~ [^/]\.php(/|$) { > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > fastcgi_param PATH_INFO $fastcgi_path_info; > > try_files $fastcgi_script_name =404; > > > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > fastcgi_index index.php; > > include fastcgi_params; > > } > > > ---------------------------------------------------------------------- > > > > And I test it with /test.php > > > ---------------------------------------------------------------------- > >
> > > ---------------------------------------------------------------------- > > > > When I request http://lemp.test/test.php/foo/bar.php the result is: > > > ---------------------------------------------------------------------- > > array ( > > 'USER' => 'www-data', > > 'HOME' => '/var/www', > > 'FCGI_ROLE' => 'RESPONDER', > > 'PATH_INFO' => '', > > 'QUERY_STRING' => '', > > 'REQUEST_METHOD' => 'GET', > > 'CONTENT_TYPE' => '', > > 'CONTENT_LENGTH' => '', > > 'SCRIPT_FILENAME' => '/var/www/test.php', > > 'SCRIPT_NAME' => '/test.php', > > 'REQUEST_URI' => '/test.php/foo/bar.php', > > 'DOCUMENT_URI' => '/test.php', > > 'DOCUMENT_ROOT' => '/var/www', > > 'SERVER_PROTOCOL' => 'HTTP/1.1', > > 'GATEWAY_INTERFACE' => 'CGI/1.1', > > 'SERVER_SOFTWARE' => 'nginx/1.4.0', > > 'REMOTE_ADDR' => '192.168.56.1', > > 'REMOTE_PORT' => '59200', > > 'SERVER_ADDR' => '192.168.56.3', > > 'SERVER_PORT' => '80', > > 'SERVER_NAME' => '', > > 'HTTPS' => '', > > 'REDIRECT_STATUS' => '200', > > 'HTTP_HOST' => 'lemp.test', > > 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; > rv:20.0) > > Gecko/20100101 Firefox/20.0', > > 'HTTP_ACCEPT' => > > 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', > > 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', > > 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', > > 'HTTP_CONNECTION' => 'keep-alive', > > 'HTTP_CACHE_CONTROL' => 'max-age=0', > > 'PHP_SELF' => '/test.php', > > 'REQUEST_TIME' => 1367630910, > > ) > > > ---------------------------------------------------------------------- > > > > What did I miss? > > > > Sincerely yours, > > > > > > -- Zakaria > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Skype: sholdowa Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238827#msg-238827 From nginx-forum at nginx.us Sat May 4 14:54:21 2013 From: nginx-forum at nginx.us (nateless) Date: Sat, 04 May 2013 10:54:21 -0400 Subject: file upload with php-fpm Message-ID: Hello, I've a problem with file upload and googling and checking with everything didn't help to solve it. The problem is when file is being uploaded larger than few megs browser repeats request after about 30 seconds, and on the second request after another 30-40s it throws connection reset and nginx has 408 error on that post request, no other errors in nginx nor in php. related settings to fiel upload are: client_body_buffer_size 1m; client_header_buffer_size 128k; client_max_body_size 1000m; client_body_timeout 500; client_header_timeout 500; keepalive_timeout 500 500; send_timeout 500; keepalive_requests 100; tcp_nodelay on; reset_timedout_connection on; server works with php (php-fpm) with socket. standart configuration. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238838,238838#msg-238838 From nginx-forum at nginx.us Sat May 4 19:54:25 2013 From: nginx-forum at nginx.us (Sylvia) Date: Sat, 04 May 2013 15:54:25 -0400 Subject: file upload with php-fpm In-Reply-To: References: Message-ID: hi. nginx.conf: client_max_body_size 42m; php.ini: memory_limit = 64M post_max_size = 40M upload_max_filesize = 32M it works fine for me with that settings. I havent speficied any timeout settings you used. Have you edited php.ini for php-fpm? 
If not - default upload file size is 2M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238838,238840#msg-238840 From nginx-forum at nginx.us Sat May 4 20:49:28 2013 From: nginx-forum at nginx.us (nateless) Date: Sat, 04 May 2013 16:49:28 -0400 Subject: file upload with php-fpm In-Reply-To: References: Message-ID: <5a6d2fd38ac2759104f4b85b397ce1aa.NginxMailingListEnglish@forum.nginx.org> Of course I set correct php settings settings: memory_limit => 512M post_max_size => 1000M upload_max_filesize => 1000M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238838,238841#msg-238841 From jaderhs5 at gmail.com Sat May 4 20:55:22 2013 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Sat, 4 May 2013 17:55:22 -0300 Subject: status 009 on post action handler after client connection abort Message-ID: Hello, i've notice nginx set $status as 009 on client connection aborted in a reverse proxy configuration. Is this the correct behavior? Shouldn't it be 499? thanks in advance Jader H. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat May 4 20:55:28 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 4 May 2013 21:55:28 +0100 Subject: redirect loop with try_files In-Reply-To: <515b3fe71cca963fec522f6850add2ab.NginxMailingListEnglish@forum.nginx.org> References: <515b3fe71cca963fec522f6850add2ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130504205528.GM27406@craic.sysops.org> On Sat, May 04, 2013 at 10:20:32AM -0400, mschipperheyn wrote: Hi there, > ## Rewrite index requests > rewrite ^(.*)/index.(.*)$ $1/ permanent; That is likely to lead to a loop, unless you take great care elsewhere. (The typical defaults are: a request for /dir/ leads to an internal rewrite to /dir/index.html, which the above would convert to an external redirect to /dir/, which is where we came in.) > location = / { > set $red 0; > if ($http_cookie ~* "JSESSIONID"){ > set $red 1; > } Here, you use "if" inside "location" and you do something other than "return" or "rewrite ... last". That's rarely a good idea: http://wiki.nginx.org/IfIsEvil Can you move the if/set logic to server{} level, to avoid that likely confusion? > When a users types in domain www.site.com, he ends up in an endless loop. I strongly suspect that the loop is caused by the first rewrite, because the try_files does not apply because at least one of your "if" conditions is true. You can test that by trying "curl -i http://www.site.com/", and adding whatever is necessary to make sure that none of the "if" conditions match, and seeing if you get anything different. Or you can just redo the configuration to avoid most "if" inside "location" blocks. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat May 4 21:25:30 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 4 May 2013 22:25:30 +0100 Subject: proxy_pass only if a file exists In-Reply-To: <8f2c959228348af3f80d4af8db3599a4.NginxMailingListEnglish@forum.nginx.org> References: <8f2c959228348af3f80d4af8db3599a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130504212530.GN27406@craic.sysops.org> On Tue, Apr 30, 2013 at 10:25:34AM -0400, mrtn wrote: Hi there, > I need to make sure a file actually exists before proxy_pass-ing the request > to an upstream server. I don't serve existing files directly using Nginx > because there are some application-specific logic i need to perform on the > application server for such requests. 
I confess I don't fully understand that -- if file X exists on the local filesystem, then ignore it and proxy_pass to server Y; but if it doesn't, then don't proxy_pass and do something else instead, which is presumably better than caching the 404 from server Y. But that's ok; I don't have to understand it. > I've looked at try_files, but it seems like it will serve the file > straightaway once it is found, which is not what I want here. Correct. try_files serves the first file found, or else rewrites to the final argument. > Another way > is to use if (!-f $request_filename), but as mentioned here: > http://wiki.nginx.org/Pitfalls#Check_IF_File_Exists, it's > a terrible way to check the existence of a file. The entire purpose of "-f" is to check whether the thing named is a file. The common case is something like "if it is a file, serve it; else do something different", and now that try_files exists, "-f" is not the best way to achieve the common case. > Is there a feasible yet efficient way? It sounds like "-f" is what you want, perhaps following the example shown on http://wiki.nginx.org/IfIsEvil f -- Francis Daly francis at daoine.org From andrejaenisch at googlemail.com Sat May 4 21:38:04 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Sat, 4 May 2013 23:38:04 +0200 Subject: status 009 on post action handler after client connection abort In-Reply-To: References: Message-ID: 2013/5/4 Jader H. Silva : > Hello, i've notice nginx set $status as 009 on client connection aborted in a reverse proxy configuration. > Is this the correct behavior? Shouldn't it be 499? > > thanks in advance > Jader H. Sounds somewhat like http://forum.nginx.org/read.php?2,238267,238267#msg-238267 From francis at daoine.org Sat May 4 21:48:34 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 4 May 2013 22:48:34 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: References: Message-ID: <20130504214834.GO27406@craic.sysops.org> On Mon, Apr 29, 2013 at 10:02:35PM +0100, henrique matias wrote: Hi there, > Am having trouble setting up my nginx.config to transparently proxy the > subdomains and domains to the same app, but with different "path > components" appended to the $uri Frequently, the main problem is that the back-end application makes it very hard to do this. I suggest you test first using a separate server{} block for one server_name and demonstrate to yourself that it can work. After that, you can worry about the details of how to auto-handle the extra domains. Something like (untested): server { server_name www.mydomain.it; location / { proxy_pass http://app_server/it/; } } maybe with "proxy_set_header Host www.mydomain.com;", or whatever your application needs. The important things to check are, do links in the returned content work when the browser asks for "/dir/" but the app_server gets a request for "/it/dir/"? The above is *almost* the same as what you have here: > This is my last unsuccessful attempt: http://pastebin.com/bZZA30zC but there's an extra "/" in the proxy_pass line; and as you've not said in what way yours was unsuccessful, it's hard to suggest a specific fix. Compare the output of "curl -i http://www.mydomain.com/it/SOMETHING" with the output of "curl -i http://www.mydomain.it/SOMETHING", and with what you expect the output to be. 
f -- Francis Daly francis at daoine.org From jim at ohlste.in Sat May 4 23:08:55 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Sat, 04 May 2013 19:08:55 -0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <20130503220111.GB69760@mdounin.ru> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> <20130503220111.GB69760@mdounin.ru> Message-ID: <51859507.1090408@ohlste.in> On 05/03/13 18:01, Maxim Dounin wrote: > Hello! > > > You've configured two distinct caches to use single directory. > This is not how it's expected to work. > > You should use distinct directories for each cache you configure. > If you want different locations to use the same cache - just use > the same cache in the proxy_cache directive. > > [...] > >> Is this bad practice to share caches among subdomains? Is sharing the cache >> the reason why I'm getting unlink() errors? > > It's ok to use the same cache for different locations/servers. > But it's really bad idea to configure multiple caches in the same > directory, and this is what causes your problems. > Maxim, I have just seen a similar situation using fastcgi cache. In my case I am using the same cache (but only one cache) for several server/location blocks. The system is a fairly basic nginx set up with four upstream fastcgi servers and ip hash. The returned content is cached locally by nginx. The cache is rather large but I wouldn't think this would be the cause. Relevant config: http { .... upstream fastcgi_backend { ip_hash; server 10.0.2.1:xxxx; server 10.0.2.2:xxxx; server 10.0.2.3:xxxx; server 10.0.2.4:xxxx; keepalive 8; } fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 keys_zone=one:512m max_size=250g inactive=24h; .... } server1 { .... server_name domain1.com; .... location ~ \.blah$ { fastcgi_pass fastcgi_backend; include /usr/local/etc/nginx/fastcgi_params; fastcgi_buffers 64 4k; fastcgi_read_timeout 120s; fastcgi_keep_conn on; fastcgi_send_timeout 120s; fastcgi_cache one; fastcgi_cache_key $scheme$request_method$host$request_uri; fastcgi_cache_lock on; fastcgi_cache_lock_timeout 5s; fastcgi_cache_methods GET HEAD; fastcgi_cache_min_uses 1; fastcgi_cache_use_stale error updating; fastcgi_cache_valid 200 302 60m; fastcgi_cache_valid 301 12h; fastcgi_cache_valid 404 5m; } .... } The other sever/location blocks are pretty much identical insofar as fastcgi and cache are concerned. 
When I upgraded nginx using the "on the fly" binary upgrade method, I saw almost 400,000 lines in the error log that looked like this: 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/899bc269a74afe6e0ad574eacde4e2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/42adc8a0136048b940c6fcaa76abf2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/c3656dff5aa91af1a44bd0157045d2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/de75207502d7892cf377a3113ea552e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/c2205e6a3df4f29eb2a568e435b2b2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/6ccaa4244645e508dad3d14ff73ea2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/76b4b811553756a2989ae40da863d2e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/53d40a6399ba6dcf08bc0a52623932e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/68ff8b00492991a2e3ba5ad7420d42e7" failed (2: No such file or directory) 2013/05/04 17:54:25 [crit] 65304#0: unlink() "/var/nginx/fcgi_cache/7/2e/19c079c9a1e0bcacb697af123d47f2e7" failed (2: No such file or directory) The backend logs show nothing of note. -- Jim Ohlstein From nginx-forum at nginx.us Sat May 4 23:41:43 2013 From: nginx-forum at nginx.us (zakaria) Date: Sat, 04 May 2013 19:41:43 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <20130504115656.GK27406@craic.sysops.org> References: <20130504115656.GK27406@craic.sysops.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Fri, May 03, 2013 at 09:44:14PM -0400, zakaria wrote: > Hi there, > What output do you expect? > And if it not obvious: how does that differ from this output? > I *think* you're reporting that PATH_INFO is unexpectedly empty > in $_SERVER, in which case adding "fastcgi_param TEST_PATH_INFO > $fastcgi_path_info;" and retrying might give a hint as to where the > problem is. > f > -- > Francis Daly francis at daoine.orgYes, that's what I mean. I'm sorry for being cryptic but this problem has me puzzled for two days. Let me tell the long story. I'm trying to setup a web server using nginx in ubuntu 12.04.2 (precise) Like any good sysadmin, I use bash script to setup everything. So I could replay it anytime. So on the friday I rerun the script (to enhanced it) and it didn't work like it used to. I swear, I got nginx working perfectly before with PATH_INFO and all. To answer your question. 
The PATH_INFO should output to '/foo/bar.php' Per your request here's my modified config ------------------------------------------------------------ location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param TEST_PATH_INFO $fastcgi_path_info; try_files $fastcgi_script_name =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } ------------------------------------------------------------ And the result with cgi.fix_pathinfo = 1 (the default) ------------------------------------------------------------ array ( 'USER' => 'www-data', 'HOME' => '/var/www', 'FCGI_ROLE' => 'RESPONDER', 'PATH_INFO' => '', 'TEST_PATH_INFO' => '', 'QUERY_STRING' => '', 'REQUEST_METHOD' => 'GET', 'CONTENT_TYPE' => '', 'CONTENT_LENGTH' => '', 'SCRIPT_FILENAME' => '/var/www/test.php', 'SCRIPT_NAME' => '/test.php', 'REQUEST_URI' => '/test.php/foo/bar.php', 'DOCUMENT_URI' => '/test.php', 'DOCUMENT_ROOT' => '/var/www', 'SERVER_PROTOCOL' => 'HTTP/1.1', 'GATEWAY_INTERFACE' => 'CGI/1.1', 'SERVER_SOFTWARE' => 'nginx/1.4.0', 'REMOTE_ADDR' => '192.168.56.1', 'REMOTE_PORT' => '33683', 'SERVER_ADDR' => '192.168.56.3', 'SERVER_PORT' => '80', 'SERVER_NAME' => '', 'HTTPS' => '', 'REDIRECT_STATUS' => '200', 'HTTP_HOST' => 'lemp.test', 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) Gecko/20100101 Firefox/20.0', 'HTTP_ACCEPT' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', 'HTTP_CONNECTION' => 'keep-alive', 'HTTP_CACHE_CONTROL' => 'max-age=0', 'PHP_SELF' => '/test.php', 'REQUEST_TIME' => 1367710298, ) ------------------------------------------------------------ So here's my request to you all: 1. Is my config correct? I'm sure it is. 2. Could you try it on your system and tell me whether the output differ from mine? 3. Is there something wrong on the latest ubuntu precise? Or is it just my imagination that I have it working before? :) Thanks, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238849#msg-238849 From reallfqq-nginx at yahoo.fr Sun May 5 00:13:12 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 4 May 2013 20:13:12 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: References: <20130504115656.GK27406@craic.sysops.org> Message-ID: It seems that PATH_INFO is sensitive to points being used in URI... Check the PHP doc about $_SERVER: a very interesting example is being provided there. Ubuntu is much likely not the problem. Since you are a 'good sysadmin', you also tried to relate trouble to some recent update of packages if any. :o) If nothing changed in your setup or in your configuration, then it comes from bad/unreliable usage of unknown resources. Hope that I helped, --- *B. R.* On Sat, May 4, 2013 at 7:41 PM, zakaria wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Fri, May 03, 2013 at 09:44:14PM -0400, zakaria wrote: > > > Hi there, > > > What output do you expect? > > > And if it not obvious: how does that differ from this output? > > > I *think* you're reporting that PATH_INFO is unexpectedly empty > > in $_SERVER, in which case adding "fastcgi_param TEST_PATH_INFO > > $fastcgi_path_info;" and retrying might give a hint as to where the > > problem is. > > > f > > -- > > Francis Daly francis at daoine.orgYes, that's what I mean. 
> > I'm sorry for being cryptic but this problem has me puzzled for two days. > > Let me tell the long story. > I'm trying to setup a web server using nginx in ubuntu 12.04.2 (precise) > Like any good sysadmin, I use bash script to setup everything. > So I could replay it anytime. > > So on the friday I rerun the script (to enhanced it) and it didn't work > like > it used to. > I swear, I got nginx working perfectly before with PATH_INFO and all. > > To answer your question. The PATH_INFO should output to '/foo/bar.php' > > Per your request here's my modified config > ------------------------------------------------------------ > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param TEST_PATH_INFO $fastcgi_path_info; > try_files $fastcgi_script_name =404; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > } > ------------------------------------------------------------ > > And the result with cgi.fix_pathinfo = 1 (the default) > ------------------------------------------------------------ > array ( > 'USER' => 'www-data', > 'HOME' => '/var/www', > 'FCGI_ROLE' => 'RESPONDER', > 'PATH_INFO' => '', > 'TEST_PATH_INFO' => '', > 'QUERY_STRING' => '', > 'REQUEST_METHOD' => 'GET', > 'CONTENT_TYPE' => '', > 'CONTENT_LENGTH' => '', > 'SCRIPT_FILENAME' => '/var/www/test.php', > 'SCRIPT_NAME' => '/test.php', > 'REQUEST_URI' => '/test.php/foo/bar.php', > 'DOCUMENT_URI' => '/test.php', > 'DOCUMENT_ROOT' => '/var/www', > 'SERVER_PROTOCOL' => 'HTTP/1.1', > 'GATEWAY_INTERFACE' => 'CGI/1.1', > 'SERVER_SOFTWARE' => 'nginx/1.4.0', > 'REMOTE_ADDR' => '192.168.56.1', > 'REMOTE_PORT' => '33683', > 'SERVER_ADDR' => '192.168.56.3', > 'SERVER_PORT' => '80', > 'SERVER_NAME' => '', > 'HTTPS' => '', > 'REDIRECT_STATUS' => '200', > 'HTTP_HOST' => 'lemp.test', > 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) > Gecko/20100101 Firefox/20.0', > 'HTTP_ACCEPT' => > 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', > 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', > 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', > 'HTTP_CONNECTION' => 'keep-alive', > 'HTTP_CACHE_CONTROL' => 'max-age=0', > 'PHP_SELF' => '/test.php', > 'REQUEST_TIME' => 1367710298, > ) > ------------------------------------------------------------ > > So here's my request to you all: > 1. Is my config correct? I'm sure it is. > 2. Could you try it on your system and > tell me whether the output differ from mine? > 3. Is there something wrong on the latest ubuntu precise? > Or is it just my imagination that I have it working before? :) > > Thanks, > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238825,238849#msg-238849 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun May 5 01:02:25 2013 From: nginx-forum at nginx.us (zakaria) Date: Sat, 04 May 2013 21:02:25 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: References: Message-ID: <8ac1b8e6bfdd57b5a6b2e0fdf1f5cdba.NginxMailingListEnglish@forum.nginx.org> B.R. Wrote: ------------------------------------------------------- > It seems that PATH_INFO is sensitive to points being used in URI... > Check the PHP doc about $_SERVER > : > a very interesting example is being provided there. Yes, I knew that. 
If you read from the beginning of the thread, the request is from /test.php/foo/bar.php and I try it with another URI that should have PATH_INFO. > Ubuntu is much likely not the problem. > Since you are a 'good sysadmin', you also tried to relate trouble to some > recent update of packages if any. :o) This is fresh install, so it kind a hard to see whats updated than before. But thanks for the idea. > If nothing changed in your setup or in your configuration, then it > comes from bad/unreliable usage of unknown resources. I presume you find nothing wrong with my config? > Hope that I helped, > --- > *B. R.* Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238851#msg-238851 From hems.inlet at gmail.com Sun May 5 01:21:58 2013 From: hems.inlet at gmail.com (henrique matias) Date: Sun, 5 May 2013 02:21:58 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: <20130504214834.GO27406@craic.sysops.org> References: <20130504214834.GO27406@craic.sysops.org> Message-ID: My first try was to change my location / { } to proxy pass to another language, so i could try "the backend" as you said, but it actually didn't work, i got: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/nginx.conf:79 my nginx version is 1.3.15. will keep trying, if someone knows how to work this around, would be cool, i guess this happens quite often :P [s] On 4 May 2013 22:48, Francis Daly wrote: > On Mon, Apr 29, 2013 at 10:02:35PM +0100, henrique matias wrote: > > Hi there, > > > Am having trouble setting up my nginx.config to transparently proxy the > > subdomains and domains to the same app, but with different "path > > components" appended to the $uri > > Frequently, the main problem is that the back-end application makes it > very hard to do this. > > I suggest you test first using a separate server{} block for one > server_name and demonstrate to yourself that it can work. > > After that, you can worry about the details of how to auto-handle the > extra domains. > > Something like (untested): > > server { > server_name www.mydomain.it; > location / { > proxy_pass http://app_server/it/; > } > } > > maybe with "proxy_set_header Host www.mydomain.com;", or whatever your > application needs. > > The important things to check are, do links in the returned content work > when the browser asks for "/dir/" but the app_server gets a request for > "/it/dir/"? > > The above is *almost* the same as what you have here: > > > This is my last unsuccessful attempt: http://pastebin.com/bZZA30zC > > but there's an extra "/" in the proxy_pass line; and as you've not said > in what way yours was unsuccessful, it's hard to suggest a specific fix. > > Compare the output of "curl -i http://www.mydomain.com/it/SOMETHING" > with the output of "curl -i http://www.mydomain.it/SOMETHING", and with > what you expect the output to be. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hems.inlet at gmail.com Sun May 5 01:38:17 2013 From: hems.inlet at gmail.com (henrique matias) Date: Sun, 5 May 2013 02:38:17 +0100 Subject: Converting subdomain to path component without redirect ? 
In-Reply-To: References: <20130504214834.GO27406@craic.sysops.org> Message-ID: Also i tried adding the address to the try_files: try_files $uri $uri @app/de/; and try_files $uri $uri @app/de; but that didn't work either. The way i managed to provide translated content, was using a rewrite inside of my location block: rewrite ^(.*)$ /my-language/$1 break; That solves part of my problem. The core basic of my problem is "Rewrite the URL based on the "server name"". So far the only options i see on nginx are: 1. Have a configuration with one "if" and one "rewrite" in order to map server name to path 2. Multiple server declarations sharing the same configuration ( probably using some sort of include? ) What you reckon? Any suggestion ? On 5 May 2013 02:21, henrique matias wrote: > My first try was to change my location / { } to proxy pass to another > language, so i could try "the backend" as you said, but it actually didn't > work, i got: > > [emerg] "proxy_pass" cannot have URI part in location given by regular > expression, or inside named location, or inside "if" statement, or inside > "limit_except" block in /etc/nginx/nginx.conf:79 > > my nginx version is 1.3.15. > > will keep trying, if someone knows how to work this around, would be cool, > i guess this happens quite often :P > > [s] > > > > > On 4 May 2013 22:48, Francis Daly wrote: > >> On Mon, Apr 29, 2013 at 10:02:35PM +0100, henrique matias wrote: >> >> Hi there, >> >> > Am having trouble setting up my nginx.config to transparently proxy the >> > subdomains and domains to the same app, but with different "path >> > components" appended to the $uri >> >> Frequently, the main problem is that the back-end application makes it >> very hard to do this. >> >> I suggest you test first using a separate server{} block for one >> server_name and demonstrate to yourself that it can work. >> >> After that, you can worry about the details of how to auto-handle the >> extra domains. >> >> Something like (untested): >> >> server { >> server_name www.mydomain.it; >> location / { >> proxy_pass http://app_server/it/; >> } >> } >> >> maybe with "proxy_set_header Host www.mydomain.com;", or whatever your >> application needs. >> >> The important things to check are, do links in the returned content work >> when the browser asks for "/dir/" but the app_server gets a request for >> "/it/dir/"? >> >> The above is *almost* the same as what you have here: >> >> > This is my last unsuccessful attempt: http://pastebin.com/bZZA30zC >> >> but there's an extra "/" in the proxy_pass line; and as you've not said >> in what way yours was unsuccessful, it's hard to suggest a specific fix. >> >> Compare the output of "curl -i http://www.mydomain.com/it/SOMETHING" >> with the output of "curl -i http://www.mydomain.it/SOMETHING", and with >> what you expect the output to be. >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hems.inlet at gmail.com Sun May 5 02:09:27 2013 From: hems.inlet at gmail.com (henrique matias) Date: Sun, 5 May 2013 03:09:27 +0100 Subject: Apache benchmark: always a few "super late" requests, why ? Message-ID: Perhaps no matter if i change number of workers, or worker connections, there's always some super late connections on my "ab" tests.. 
Am very new to benchmark, but the way am doing now is: ab -n 8000 -c 1000 http://address_to_a_plain_text_file ab -n 8000 -c 1000 http://address_to_a_rails_address_that_queries_the_database Almost all tests i do ends up with very good results up to 90% of the requests ( both, with plain text and with rails script ), but then sometimes in this last 10% there's very slow requests ( 10x more than the fastest ).. For instance : ab -n 8000 -c 1000 http://address_to_a_plain_text_file Percentage of the requests served within a certain time (ms) 50% 142 66% 152 75% 156 80% 158 90% 169 95% 181 98% 4180 99% 5236 100% 5485 (longest request) The ones that goes to the database sometimes get very slow compared to plain text files, so am guessing i should enlarge somehow the pipe betweens rails and the db. For instance : ab -n 8000 -c 1000 http://address_to_rails_touching_db_file Percentage of the requests served within a certain time (ms) 50% 10 66% 10 75% 10 80% 11 90% 12 95% 13 98% 16 99% 25 100% 13237 (longest request) Even my plain text files are suffering, so i would guess that is a problem in my worker_process / worker_connections and keepalive_timeout ? And for the file that touchs the database i would guess i should be tweaking some database configuration between my rails and mongodb... Any advices? -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun May 5 02:35:31 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 4 May 2013 22:35:31 -0400 Subject: Apache benchmark: always a few "super late" requests, why ? In-Reply-To: References: Message-ID: Why do wish so much that comes from Nginx? I would have a look on the network table (listening and connection sockets), on the load of the network, system (processes) suring the tests. But you said nothing of all that, just sending raw ab tests results which say pretty much nothing about anything... --- *B. R.* On Sat, May 4, 2013 at 10:09 PM, henrique matias wrote: > Perhaps no matter if i change number of workers, or worker connections, > there's always some super late connections on my "ab" tests.. > > Am very new to benchmark, but the way am doing now is: > > ab -n 8000 -c 1000 http://address_to_a_plain_text_file > > ab -n 8000 -c 1000 > http://address_to_a_rails_address_that_queries_the_database > > Almost all tests i do ends up with very good results up to 90% of the > requests ( both, with plain text and with rails script ), but then > sometimes in this last 10% there's very slow requests ( 10x more than the > fastest ).. > > For instance : ab -n 8000 -c 1000 http://address_to_a_plain_text_file > > Percentage of the requests served within a certain time (ms) > 50% 142 > 66% 152 > 75% 156 > 80% 158 > 90% 169 > 95% 181 > 98% 4180 > 99% 5236 > 100% 5485 (longest request) > > The ones that goes to the database sometimes get very slow compared to > plain text files, so am guessing i should enlarge somehow the pipe betweens > rails and the db. > > For instance : ab -n 8000 -c 1000 http://address_to_rails_touching_db_file > > Percentage of the requests served within a certain time (ms) > 50% 10 > 66% 10 > 75% 10 > 80% 11 > 90% 12 > 95% 13 > 98% 16 > 99% 25 > 100% 13237 (longest request) > > > Even my plain text files are suffering, so i would guess that is a problem > in my worker_process / worker_connections and keepalive_timeout ? > > > And for the file that touchs the database i would guess i should be > tweaking some database configuration between my rails and mongodb... 
> > > Any advices? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun May 5 07:58:06 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 5 May 2013 08:58:06 +0100 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: References: <20130504115656.GK27406@craic.sysops.org> Message-ID: <20130505075806.GP27406@craic.sysops.org> On Sat, May 04, 2013 at 07:41:43PM -0400, zakaria wrote: > Francis Daly Wrote: Hi there, > So on the friday I rerun the script (to enhanced it) and it didn't work like > it used to. > I swear, I got nginx working perfectly before with PATH_INFO and all. > > To answer your question. The PATH_INFO should output to '/foo/bar.php' > And the result with cgi.fix_pathinfo = 1 (the default) As an aside -- I find that "cgi.fix_pathinfo = 1" removes the "do what I say" part of php configuration, so I run without it. But I don't believe that that's relevant here. > 'PATH_INFO' => '', > 'TEST_PATH_INFO' => '', Thanks for testing that. It suggests to me that the problem is on the nginx side: $fastcgi_path_info is empty at the time the fastcgi_param directive takes effect. > So here's my request to you all: > 1. Is my config correct? I'm sure it is. It doesn't do what you want it to do, which is a strong hint in one direction ;-) But I don't see any reason why that should be the case. I do see two possible config changes you could make, each of which seems enough to get things working as you want. Either: remove the "try_files" line; or replace the "fastcgi_param PATH_INFO" line with two lines: fastcgi_param PATH_INFO $mypath; set $mypath $fastcgi_path_info; These seem to work because $fastcgi_path_info does have the correct value in the "rewrite" phase, but loses it after the "try files" phase. I don't understand why that is the case. That upsets me. > 2. Could you try it on your system and > tell me whether the output differ from mine? I get the same output, using both nginx 1.2.4 and 1.0.0. And either change "fixes" it on each. > 3. Is there something wrong on the latest ubuntu precise? > Or is it just my imagination that I have it working before? :) Were you perhaps previously using an older nginx version where it worked as expected? Or is the "try_files" line a new addition since Friday? f -- Francis Daly francis at daoine.org From francis at daoine.org Sun May 5 08:03:59 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 5 May 2013 09:03:59 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: References: <20130504214834.GO27406@craic.sysops.org> Message-ID: <20130505080359.GQ27406@craic.sysops.org> On Sun, May 05, 2013 at 02:21:58AM +0100, henrique matias wrote: > My first try was to change my location / { } to proxy pass to another > language, so i could try "the backend" as you said, but it actually didn't > work, i got: > > [emerg] "proxy_pass" cannot have URI part in location given by regular > expression, or inside named location, or inside "if" statement, or inside > "limit_except" block in /etc/nginx/nginx.conf:79 Unless I'm missing something, that configuration should not lead to that error message. The new test server{} block should be very small, with every line understood. 
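(As an illustration of how small such a test block can be -- every name below is a placeholder carried over from the earlier example in this thread:

    server {
        listen 80;
        server_name www.mydomain.it;

        location / {
            proxy_pass http://app_server/it/;
        }
    }

Once a block of that size behaves as expected, the per-domain handling can be generalised.)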
If you show exactly what you did, what you got, and what you expected to get, others will have a better chance of offering help. f -- Francis Daly francis at daoine.org From francis at daoine.org Sun May 5 08:16:31 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 5 May 2013 09:16:31 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: References: <20130504214834.GO27406@craic.sysops.org> Message-ID: <20130505081631.GR27406@craic.sysops.org> On Sun, May 05, 2013 at 02:38:17AM +0100, henrique matias wrote: Hi there, > Also i tried adding the address to the try_files: > > try_files $uri $uri @app/de/; and try_files $uri $uri @app/de; > > but that didn't work either. Unless you also added new named locations like "@app/de/", I'd expect that to return HTTP 500. That's a more specific problem report than "didn't work". > The core basic of my problem is "Rewrite the URL based on the "server > name"". > > So far the only options i see on nginx are: > > 1. Have a configuration with one "if" and one "rewrite" in order to map > server name to path http://nginx.org/r/map Set a (possibly empty) variable called (say) $path_prefix, and then use that in your rewrite or proxy_pass line. Note that using a variable in proxy_pass has other requirements too: http://nginx.org/r/proxy_pass > 2. Multiple server declarations sharing the same configuration ( probably > using some sort of include? ) Can work, if the shared configuration is truly the same. > What you reckon? Any suggestion ? I still think that until you demonstrate that your back-end works with this, the rest is not useful. Test one thing at a time, and keep the configuration simple. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun May 5 11:04:21 2013 From: nginx-forum at nginx.us (zakaria) Date: Sun, 05 May 2013 07:04:21 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <20130505075806.GP27406@craic.sysops.org> References: <20130505075806.GP27406@craic.sysops.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > Hi there, > > And the result with cgi.fix_pathinfo = 1 (the default) > As an aside -- I find that "cgi.fix_pathinfo = 1" removes the "do what > I say" part of php configuration, so I run without it. But I don't > believe that that's relevant here. cgi.fix_pathinfo=0 gives incorrect PHP_SELF. > > 'PATH_INFO' => '', > > 'TEST_PATH_INFO' => '', > Thanks for testing that. It suggests to me that the problem is on the > nginx side: $fastcgi_path_info is empty at the time the fastcgi_param > directive takes effect. > > So here's my request to you all: > > 1. Is my config correct? I'm sure it is. > It doesn't do what you want it to do, which is a strong hint in one > direction ;-) > But I don't see any reason why that should be the case. > I do see two possible config changes you could make, each of which > seems > enough to get things working as you want. > Either: remove the "try_files" line; or replace the "fastcgi_param > PATH_INFO" line with two lines: > fastcgi_param PATH_INFO $mypath; > set $mypath $fastcgi_path_info; > These seem to work because $fastcgi_path_info does have the correct > value in the "rewrite" phase, but loses it after the "try files" > phase. I don't understand why that is the case. That upsets me. Thank you for confirm it. 
Its nginx bug #321 http://trac.nginx.org/nginx/ticket/321 For the benefit of the man from the future (http://xkcd.com/979/), here's my final configuration: ------------------------------------------------------------ location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; # Save the $fastcgi_path_info before try_files clear it set $path_info $fastcgi_path_info; fastcgi_param PATH_INFO $path_info; try_files $fastcgi_script_name =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } ------------------------------------------------------------ with /test.php: ------------------------------------------------------------
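<?php var_export($_SERVER); ?>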
------------------------------------------------------------ and php.ini cgi.fix_pathinfo = 1 (the default). When given request http://lemp.test/test.php/foo/bar.php?v=1 would produce: ------------------------------------------------------------ array ( 'USER' => 'www-data', 'HOME' => '/var/www', 'FCGI_ROLE' => 'RESPONDER', 'PATH_INFO' => '/foo/bar.php', 'QUERY_STRING' => 'v=1', 'REQUEST_METHOD' => 'GET', 'CONTENT_TYPE' => '', 'CONTENT_LENGTH' => '', 'SCRIPT_FILENAME' => '/var/www/test.php', 'SCRIPT_NAME' => '/test.php', 'REQUEST_URI' => '/test.php/foo/bar.php?v=1', 'DOCUMENT_URI' => '/test.php', 'DOCUMENT_ROOT' => '/var/www', 'SERVER_PROTOCOL' => 'HTTP/1.1', 'GATEWAY_INTERFACE' => 'CGI/1.1', 'SERVER_SOFTWARE' => 'nginx/1.1.19', 'REMOTE_ADDR' => '192.168.56.1', 'REMOTE_PORT' => '46281', 'SERVER_ADDR' => '192.168.56.3', 'SERVER_PORT' => '80', 'SERVER_NAME' => 'localhost', 'HTTPS' => '', 'REDIRECT_STATUS' => '200', 'HTTP_HOST' => 'lemp.test', 'HTTP_USER_AGENT' => 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:20.0) Gecko/20100101 Firefox/20.0', 'HTTP_ACCEPT' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_LANGUAGE' => 'en-US,en;q=0.5', 'HTTP_ACCEPT_ENCODING' => 'gzip, deflate', 'HTTP_CONNECTION' => 'keep-alive', 'PHP_SELF' => '/test.php/foo/bar.php', 'REQUEST_TIME' => 1367750028, ) ------------------------------------------------------------ > > 3. Is there something wrong on the latest ubuntu precise? > > Or is it just my imagination that I have it working before? :) > Were you perhaps previously using an older nginx version where it > worked as expected? > Or is the "try_files" line a new addition since Friday? Most probably, I forget to reload the server after I changed the config (combining try_files with fastcgi_split_path_info), try it in the browser thinking all is well. Thank you all for helping this nginx newbie. > f > -- > Francis Daly francis at daoine.org -- Zakaria Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,238860#msg-238860 From nginx-forum at nginx.us Sun May 5 15:43:08 2013 From: nginx-forum at nginx.us (thmarx) Date: Sun, 05 May 2013 11:43:08 -0400 Subject: Upstream timeout Message-ID: <10b3ca476f2d1c8123ed86340099263c.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using nginx 1.4.0 as loadbalancer for 2 jetty servers. At my windows development/test system I have simple added the following to the default config: upstream backend { server 127.0.0.1:10000; server 127.0.0.1:10001; } server { server_name backend.myserver1.de; location / { proxy_pass http://backend; } } backend.myserver1.de is just an entry in my local host config. When I start everything it seems to work properly but after some requests, max 10, the nginx does not work anymore. no more requests are being processed. When I restart the nginx server, it works for the next few requests. When I call the backendservers directy in the browser the requests are processed correctly. I haven't tested in on a linux machine, maybe it will work there. But for testeing it would be nice, if it would work on my windows system too. Any hints? 
Kind regards Thorsten Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238864,238864#msg-238864 From nginx-forum at nginx.us Sun May 5 15:45:14 2013 From: nginx-forum at nginx.us (thmarx) Date: Sun, 05 May 2013 11:45:14 -0400 Subject: Upstream timeout In-Reply-To: <10b3ca476f2d1c8123ed86340099263c.NginxMailingListEnglish@forum.nginx.org> References: <10b3ca476f2d1c8123ed86340099263c.NginxMailingListEnglish@forum.nginx.org> Message-ID: just forgot one point, the nginx error log is full of lines like this: 2013/05/05 12:34:25 [error] 5388#4772: *35 upstream timed out (10060: FormatMessage() error:(15105)) while connecting to upstream, client: 127.0.0.1, server: backend.myserver1.de, request: "GET /index.html HTTP/1.1", upstream: "http://127.0.0.1:10001/index.html", host: "backend.myserver1.de" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238864,238865#msg-238865 From nginx-forum at nginx.us Sun May 5 19:03:05 2013 From: nginx-forum at nginx.us (locojohn) Date: Sun, 05 May 2013 15:03:05 -0400 Subject: location vs. rewite In-Reply-To: References: <51598403.9010800@citrin.ru> Message-ID: Yes, location is better and even faster in general. Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238002,238866#msg-238866 From nginx-forum at nginx.us Sun May 5 19:05:07 2013 From: nginx-forum at nginx.us (nano) Date: Sun, 05 May 2013 15:05:07 -0400 Subject: Nginx accept set-cookie but hide it from the client? Message-ID: <416f3c5591dc89cfd78fa6a36c085148.NginxMailingListEnglish@forum.nginx.org> Hello, I have a reverse proxy setup on a website and I'm proxying logged in pages. Everything works except there is a vulnerability in my setup. I login to the site and I can cache the pages. I share these pages with everyone else. However there is a problem with how the set-cookie is passed onto the user when I just want nginx to keep it. Is there a way to make nginx stay logged into the site, and hide the set-cookie passed onto the client? I've tried: proxy_hide_header Set-Cookie; but that just logs out the session and can no longer access the protected pages. When the set-cookie is passed onto the user they can save that cookie and load it up into their browser and be able to login and "hack" the account. Is there a way to keep nginx logged in, without exposing the set-cookie? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238867,238867#msg-238867 From contact at jpluscplusm.com Sun May 5 19:42:33 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 5 May 2013 20:42:33 +0100 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: <416f3c5591dc89cfd78fa6a36c085148.NginxMailingListEnglish@forum.nginx.org> References: <416f3c5591dc89cfd78fa6a36c085148.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 5 May 2013 20:05, nano wrote: > Hello, > > I have a reverse proxy setup on a website and I'm proxying logged in pages. > Everything works except there is a vulnerability in my setup. > > I login to the site and I can cache the pages. I share these pages with > everyone else. > > However there is a problem with how the set-cookie is passed onto the user > when I just want nginx to keep it. > > Is there a way to make nginx stay logged into the site, and hide the > set-cookie passed onto the client? I don't think you've fully thought this through. 
To help you realise what you've missed, please think this through and answer: What mechanism do you expect your application to use, in order to know that a request comes from authenticated client A and not unauthenticated client B, and hence access to a certain protected page should be granted? > I've tried: proxy_hide_header Set-Cookie; > > but that just logs out the session and can no longer access the protected > pages. When the set-cookie is passed onto the user they can save that cookie > and load it up into their browser and be able to login and "hack" the > account. I really don't understand what hacking you think might be going on here. An authenticated user geting access to the protected resources that their account /should/ allow them to? What is /wrong/ here? > Is there a way to keep nginx logged in, without exposing the set-cookie? In general, cookies (should) render pages uncacheable, except if you're caching them per-user. Which is nasty. What you're describing is, as far as I can see, a lossy process, leading to information being dropped at the nginx->client communication stage, and will not work. Of course, if you're mucking around with someone *else's* site, and only have one login for it which you wish to share amongst multiple front-end users, you could use proxy_set_header Cookie "hard-coded logged-in user's cookie" .. but that's pretty horrible; both technically and morally. Don't do that. Regards, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Sun May 5 20:00:55 2013 From: nginx-forum at nginx.us (nano) Date: Sun, 05 May 2013 16:00:55 -0400 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: References: Message-ID: Thank you for the reply Jonathan. My intentions are not malicious. The site in question is http://turkopticon.differenceengines.com/ and to read reports on that site one has to be logged in. The site is incredibly slow and I had an idea to cache the review data so reports on "bad requesters" (mturk requesters) will be easily available for access. However using my account to proxy reviews and cache them, has resulted in someone changing my password. Nothing was lost, but to cache pages and make them available for everyone I need a way to hide the Set-Cookie session from everyone or else it exposes my account. The site isn't really "private" but the reviews are password protected to encourage user registration. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238867,238871#msg-238871 From contact at jpluscplusm.com Sun May 5 20:11:14 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 5 May 2013 21:11:14 +0100 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: References: Message-ID: On 5 May 2013 21:00, nano wrote: > Thank you for the reply Jonathan. > > My intentions are not malicious. The site in question is > http://turkopticon.differenceengines.com/ and to read reports on that site > one has to be logged in. The site is incredibly slow and I had an idea to > cache the review data so reports on "bad requesters" (mturk requesters) will > be easily available for access. > > However using my account to proxy reviews and cache them, has resulted in > someone changing my password. Nothing was lost, but to cache pages and make > them available for everyone I need a way to hide the Set-Cookie session from > everyone or else it exposes my account. I don't understand. 
Do you control the back-end application that is consuming the cookies, or is it someone else's site? > The site isn't really "private" but the reviews are password protected to > encourage user registration. What you are asking people on this list to help you with appears to subvert this website's wishes, and leads me to suspect that you don't control it. Whatever your intentions are, malicious or otherwise, until you can confirm that you're merely proxying your own application I'm not going to be able to help you. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Sun May 5 20:20:27 2013 From: nginx-forum at nginx.us (nano) Date: Sun, 05 May 2013 16:20:27 -0400 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: References: Message-ID: <9acc87f4ddfcfe617f751b497fe7924c.NginxMailingListEnglish@forum.nginx.org> Thank you again for the reply Jonathan, I'm sorry. This is not my application I am just trying to "mirror" it. Without losing hope for caching, is there a way I can cache the pages and only show the data to logged in clients? What would I have to do to make sure the user is logged in on the site before showing them a cached result? Does commenting out proxy_ignore_headers Set-Cookie; See if the client is logged in? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238867,238873#msg-238873 From mdounin at mdounin.ru Sun May 5 20:32:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2013 00:32:46 +0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <51859507.1090408@ohlste.in> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> <20130503220111.GB69760@mdounin.ru> <51859507.1090408@ohlste.in> Message-ID: <20130505203246.GF69760@mdounin.ru> Hello! On Sat, May 04, 2013 at 07:08:55PM -0400, Jim Ohlstein wrote: [...] > I have just seen a similar situation using fastcgi cache. In my case > I am using the same cache (but only one cache) for several > server/location blocks. The system is a fairly basic nginx set up > with four upstream fastcgi servers and ip hash. The returned content > is cached locally by nginx. The cache is rather large but I wouldn't > think this would be the cause. [...] > fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 > keys_zone=one:512m max_size=250g inactive=24h; [...] > The other sever/location blocks are pretty much identical insofar as > fastcgi and cache are concerned. > > When I upgraded nginx using the "on the fly" binary upgrade method, > I saw almost 400,000 lines in the error log that looked like this: > > 2013/05/04 17:54:25 [crit] 65304#0: unlink() > "/var/nginx/fcgi_cache/7/2e/899bc269a74afe6e0ad574eacde4e2e7" failed > (2: No such file or directory) [...] After binary upgrade there are two cache zones - one in old nginx, and another one in new nginx (much like in originally posted configuration). This may cause such errors if e.g. a cache file is removed by old nginx, and new nginx fails to remove the file shortly after. The 400k lines is a bit too many though. You may want to check that the cache wasn't just removed by some (package?) script during the upgrade process. Alternatively, it might indicate that you let old and new processes to coexist for a long time. On the other hand, as discussed many times - such errors are more or less harmless as soon as it's clear what caused cache files to be removed. 
At worst they indicate that information in a cache zone isn't correct and max_size might not be maintained properly, and eventually nginx will self-heal the cache zone. It probably should be logged at [error] or even [warn] level instead. -- Maxim Dounin http://nginx.org/en/donation.html From contact at jpluscplusm.com Sun May 5 20:40:45 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 5 May 2013 21:40:45 +0100 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: <9acc87f4ddfcfe617f751b497fe7924c.NginxMailingListEnglish@forum.nginx.org> References: <9acc87f4ddfcfe617f751b497fe7924c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I can't help you any further. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Sun May 5 20:44:08 2013 From: nginx-forum at nginx.us (nano) Date: Sun, 05 May 2013 16:44:08 -0400 Subject: Nginx accept set-cookie but hide it from the client? In-Reply-To: References: Message-ID: <63dc46b3ac49f8c5658db5820d1fb1b0.NginxMailingListEnglish@forum.nginx.org> I appreciate your responses Jonathan. Thank you for replying! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238867,238877#msg-238877 From mdounin at mdounin.ru Sun May 5 21:50:12 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2013 01:50:12 +0400 Subject: status 009 on post action handler after client connection abort In-Reply-To: References: Message-ID: <20130505215012.GH69760@mdounin.ru> Hello! On Sat, May 04, 2013 at 05:55:22PM -0300, Jader H. Silva wrote: > Hello, i've notice nginx set $status as 009 on client connection aborted in > a reverse proxy configuration. > Is this the correct behavior? Shouldn't it be 499? As I already explained more than once, post_action have nuances. -- Maxim Dounin http://nginx.org/en/donation.html From hems.inlet at gmail.com Sun May 5 23:14:37 2013 From: hems.inlet at gmail.com (henrique matias) Date: Mon, 6 May 2013 00:14:37 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: <20130505081631.GR27406@craic.sysops.org> References: <20130504214834.GO27406@craic.sysops.org> <20130505081631.GR27406@craic.sysops.org> Message-ID: Hello Francis, thanks a lot for your help and words. Sorry if at some point i didn't make something clear. Starting from scratch, 1. My backend already works, http://my_ip/#{language_code}/anything will bring "anything" translated to the specified language. 2. Changing the line 40 on my config: http://pastebin.com/bZZA30zC from proxy_pass http://app_server; to proxy_pass http://app_server/de/; brings me the error: Starting nginx: nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/nginx.conf:121 nginx: configuration file /etc/nginx/nginx.conf test failed Regarding: http://nginx.org/r/map : Sounds like the way to go in order to map domian/subsomains to values ( : Regarding: http://nginx.org/r/proxy_pass I tried using the "unix:socket" format, in my case: proxy_pass http://unix:/tmp/.sock:/it/ as the example in the page you sent, and got the same error. Thanks a lot for your tips, it definitely helped me to understand it better. So far the best solution i can see is using map in order to map the addresses to "language codes", and then execute the rewrite. 
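For illustration, a rough, untested sketch of that map-plus-rewrite idea (the example.com subdomains and the $lang_prefix variable are placeholders; "app_server" is the upstream name quoted earlier):

map $host $lang_prefix {
    default         "";
    de.example.com  /de;
    it.example.com  /it;
}

server {
    listen 80;
    server_name de.example.com it.example.com;

    location / {
        # prepend the mapped language prefix, then proxy; "break" stops
        # further rewrite processing, and because proxy_pass below has no
        # URI part, the changed URI is what gets passed upstream
        rewrite ^(.*)$ $lang_prefix$1 break;
        proxy_pass http://app_server;
    }
}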
peace On 5 May 2013 09:16, Francis Daly wrote: > On Sun, May 05, 2013 at 02:38:17AM +0100, henrique matias wrote: > > Hi there, > > > Also i tried adding the address to the try_files: > > > > try_files $uri $uri @app/de/; and try_files $uri $uri @app/de; > > > > but that didn't work either. > > Unless you also added new named locations like "@app/de/", I'd expect > that to return HTTP 500. > > That's a more specific problem report than "didn't work". > > > The core basic of my problem is "Rewrite the URL based on the "server > > name"". > > > > So far the only options i see on nginx are: > > > > 1. Have a configuration with one "if" and one "rewrite" in order to map > > server name to path > > http://nginx.org/r/map > > Set a (possibly empty) variable called (say) $path_prefix, and then use > that in your rewrite or proxy_pass line. > > Note that using a variable in proxy_pass has other requirements too: > > http://nginx.org/r/proxy_pass > > > 2. Multiple server declarations sharing the same configuration ( probably > > using some sort of include? ) > > Can work, if the shared configuration is truly the same. > > > What you reckon? Any suggestion ? > > I still think that until you demonstrate that your back-end works with > this, the rest is not useful. > > Test one thing at a time, and keep the configuration simple. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Sun May 5 23:23:34 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 06 May 2013 11:23:34 +1200 Subject: Converting subdomain to path component without redirect ? In-Reply-To: References: <20130504214834.GO27406@craic.sysops.org> <20130505081631.GR27406@craic.sysops.org> Message-ID: <1367796214.25673.1613.camel@steve-new> On Mon, 2013-05-06 at 00:14 +0100, henrique matias wrote: [snip] > > brings me the error: Starting nginx: nginx: [emerg] "proxy_pass" > cannot have URI part in location given by regular expression, or > inside named location, or inside "if" statement, or inside > "limit_except" block in /etc/nginx/nginx.conf:121 > nginx: configuration file /etc/nginx/nginx.conf test failed ...so take it out of the named location?? 
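For illustration, a hedged sketch of that suggestion — a URI part on proxy_pass is accepted in a plain prefix location, just not in a regex or named location ("app_server" is the upstream name from the earlier messages; the /de/ prefix is only an example):

# not allowed: URI part inside a named (or regex) location
# location @app {
#     proxy_pass http://app_server/de/;   # triggers the [emerg] quoted above
# }

# allowed: URI part in a plain prefix location
location / {
    proxy_pass http://app_server/de/;
}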
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Skype: sholdowa From nginx-forum at nginx.us Mon May 6 08:42:09 2013 From: nginx-forum at nginx.us (selphon) Date: Mon, 06 May 2013 04:42:09 -0400 Subject: Why local ports increased so much when update from 1.0.15 to 1.2.8 Message-ID: <0d58ee76b9af361c9822ad2a8ade1755.NginxMailingListEnglish@forum.nginx.org> Recently, I updated nginx from 1.0.15 to 1.2.8, and find that the ports(shown by ss -s) increase much as below: nginx/1.0.15 Total: 21696 (kernel 22773) TCP: 111474 (estab 21422, closed 86149, orphaned 3803, synrecv 0, timewait 86145/0), ports 1417 nginx/1.2.8 Total: 21579 (kernel 22349) TCP: 57466 (estab 21295, closed 32654, orphaned 3438, synrecv 0, timewait 32652/0), ports 11239 I updated nginx in order to use the ngx_http_upstream_keepalive module, most ports are used by nginx to connect with squid(port:8081): (nginx/1.2.8) netstat -anp | awk '$5 ~ ":8081"' | grep -i time_wait | wc -l 10227 (nginx/1.0.15) netstat -anp | awk '$5 ~ ":8081"' | grep -i time_wait | wc -l 15 Is there any change that make nginx make lots of connections with squid and occupy so much ports? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238885,238885#msg-238885 From jim at ohlste.in Mon May 6 13:01:45 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 06 May 2013 09:01:45 -0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <20130505203246.GF69760@mdounin.ru> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> <20130503220111.GB69760@mdounin.ru> <51859507.1090408@ohlste.in> <20130505203246.GF69760@mdounin.ru> Message-ID: <5187A9B9.1030001@ohlste.in> On 05/05/13 16:32, Maxim Dounin wrote: > Hello! > > On Sat, May 04, 2013 at 07:08:55PM -0400, Jim Ohlstein wrote: > > [...] > >> I have just seen a similar situation using fastcgi cache. In my case >> I am using the same cache (but only one cache) for several >> server/location blocks. The system is a fairly basic nginx set up >> with four upstream fastcgi servers and ip hash. The returned content >> is cached locally by nginx. The cache is rather large but I wouldn't >> think this would be the cause. > > [...] > >> fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 >> keys_zone=one:512m max_size=250g inactive=24h; > > [...] > >> The other sever/location blocks are pretty much identical insofar as >> fastcgi and cache are concerned. >> >> When I upgraded nginx using the "on the fly" binary upgrade method, >> I saw almost 400,000 lines in the error log that looked like this: >> >> 2013/05/04 17:54:25 [crit] 65304#0: unlink() >> "/var/nginx/fcgi_cache/7/2e/899bc269a74afe6e0ad574eacde4e2e7" failed >> (2: No such file or directory) > > [...] > > After binary upgrade there are two cache zones - one in old nginx, > and another one in new nginx (much like in originally posted > configuration). This may cause such errors if e.g. a cache file > is removed by old nginx, and new nginx fails to remove the file > shortly after. > > The 400k lines is a bit too many though. You may want to check > that the cache wasn't just removed by some (package?) script > during the upgrade process. Alternatively, it might indicate that > you let old and new processes to coexist for a long time. I hadn't considered that there are two zones during that short time. Thanks for pointing that out. To my knowledge, there are no scripts or packages which remove files from the cache, or the entire cache. 
A couple of minutes after this occurred there were a bit under 1.4 million items in the cache and it was "full" at 250 GB. I did look in a few sub-directories at the time, and most of the items were time stamped from before this started so clearly the entire cache was not removed. During the time period these entries were made in the error log, and in the two minutes after, access log entries show the expected ratio of "HIT" and "MISS" entries which further supports your point below that these are harmless (although I don't really believe that I have a cause). I'm not sure what you mean by a "long time" but all of these entries are time stamped over over roughly two and a half minutes. > > On the other hand, as discussed many times - such errors are more > or less harmless as soon as it's clear what caused cache files to > be removed. At worst they indicate that information in a cache > zone isn't correct and max_size might not be maintained properly, > and eventually nginx will self-heal the cache zone. It probably > should be logged at [error] or even [warn] level instead. > Why would max_size not be maintained properly? Isn't that the responsibility cache manager process? Are there known issues/bugs? Thank you for your response and assistance. -- Jim Ohlstein From nginx-forum at nginx.us Mon May 6 13:31:14 2013 From: nginx-forum at nginx.us (nevernet) Date: Mon, 06 May 2013 09:31:14 -0400 Subject: how to disable nginx internal dns cache? In-Reply-To: <20130504130350.GL27406@craic.sysops.org> References: <20130504130350.GL27406@craic.sysops.org> Message-ID: <3084de9462856b91e931f2063d5ff849.NginxMailingListEnglish@forum.nginx.org> i have resolver define, see below configuration: server { listen 80; server_name xx.com; access_log /var/log/nginx/xx-nginx.access.log; error_log /var/log/nginx/xx-nginx_error.log debug; # resolver 8.8.8.8; # resolver_timeout 1s; #set your default location location / { # resolver 8.8.8.8 valid=5s; # i have defined resolver at here ,and also tried to add it in "http" section in nginx.conf file. but both of them doesnt work. proxy_pass http://p2.domain.com; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238835,238893#msg-238893 From nginx-forum at nginx.us Mon May 6 13:46:00 2013 From: nginx-forum at nginx.us (mevans336) Date: Mon, 06 May 2013 09:46:00 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline Message-ID: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> Hello, Each night we take our backend servers offline at specific times for maintenance. When the application servers restart they immediately begin answering HTTP requests from Nginx, but we want to keep them out of the upstream pool for about 30 minutes while they cache information from our data providers. To do this, I created iptables rules in cron on the application servers to block all communication from our Nginx reverse proxies and then delete the rule after 30 minutes. However, Nginx still seems to think the server that is blocking it via iptables is online, adds it back to the upstream pool, then times it out and takes it back out. This causes our alerting system to go haywire throwing HTTP Read Timeouts and our clients to be unable to connect to our application. 
Our upstream block is simple: upstream app_servers { ip_hash; server 192.168.1.12:8080 max_fails=3 fail_timeout=30s; server 192.168.1.13:8080 max_fails=3 fail_timeout=30s; } We're running Nginx 1.4. Any ideas on why this would happen and ways we can avoid it? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238894#msg-238894 From luky-37 at hotmail.com Mon May 6 13:52:56 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 6 May 2013 15:52:56 +0200 Subject: how to disable nginx internal dns cache? In-Reply-To: <3084de9462856b91e931f2063d5ff849.NginxMailingListEnglish@forum.nginx.org> References: <20130504130350.GL27406@craic.sysops.org>, <3084de9462856b91e931f2063d5ff849.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi! > how to disable nginx internal dns cache?? Use an IP address instead of a hostname in the proxy_pass variable. > both of them doesnt work. Can you elaborate what "it doesn't work" mean? Lukas From mdounin at mdounin.ru Mon May 6 13:54:19 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2013 17:54:19 +0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <5187A9B9.1030001@ohlste.in> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> <20130503220111.GB69760@mdounin.ru> <51859507.1090408@ohlste.in> <20130505203246.GF69760@mdounin.ru> <5187A9B9.1030001@ohlste.in> Message-ID: <20130506135419.GM69760@mdounin.ru> Hello! On Mon, May 06, 2013 at 09:01:45AM -0400, Jim Ohlstein wrote: > On 05/05/13 16:32, Maxim Dounin wrote: > >Hello! > > > >On Sat, May 04, 2013 at 07:08:55PM -0400, Jim Ohlstein wrote: > > > >[...] > > > >>I have just seen a similar situation using fastcgi cache. In my case > >>I am using the same cache (but only one cache) for several > >>server/location blocks. The system is a fairly basic nginx set up > >>with four upstream fastcgi servers and ip hash. The returned content > >>is cached locally by nginx. The cache is rather large but I wouldn't > >>think this would be the cause. > > > >[...] > > > >> fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 > >>keys_zone=one:512m max_size=250g inactive=24h; > > > >[...] > > > >>The other sever/location blocks are pretty much identical insofar as > >>fastcgi and cache are concerned. > >> > >>When I upgraded nginx using the "on the fly" binary upgrade method, > >>I saw almost 400,000 lines in the error log that looked like this: > >> > >>2013/05/04 17:54:25 [crit] 65304#0: unlink() > >>"/var/nginx/fcgi_cache/7/2e/899bc269a74afe6e0ad574eacde4e2e7" failed > >>(2: No such file or directory) > > > >[...] > > > >After binary upgrade there are two cache zones - one in old nginx, > >and another one in new nginx (much like in originally posted > >configuration). This may cause such errors if e.g. a cache file > >is removed by old nginx, and new nginx fails to remove the file > >shortly after. > > > >The 400k lines is a bit too many though. You may want to check > >that the cache wasn't just removed by some (package?) script > >during the upgrade process. Alternatively, it might indicate that > >you let old and new processes to coexist for a long time. > > I hadn't considered that there are two zones during that short time. > Thanks for pointing that out. > > To my knowledge, there are no scripts or packages which remove files > from the cache, or the entire cache. A couple of minutes after this > occurred there were a bit under 1.4 million items in the cache and > it was "full" at 250 GB. 
I did look in a few sub-directories at the > time, and most of the items were time stamped from before this > started so clearly the entire cache was not removed. During the time > period these entries were made in the error log, and in the two > minutes after, access log entries show the expected ratio of "HIT" > and "MISS" entries which further supports your point below that > these are harmless (although I don't really believe that I have a > cause). > > I'm not sure what you mean by a "long time" but all of these entries > are time stamped over over roughly two and a half minutes. Is it ok in your setup that 400k cache items are removed/expired from cache in two minutes? If yes, then it's probably ok. > >On the other hand, as discussed many times - such errors are more > >or less harmless as soon as it's clear what caused cache files to > >be removed. At worst they indicate that information in a cache > >zone isn't correct and max_size might not be maintained properly, > >and eventually nginx will self-heal the cache zone. It probably > >should be logged at [error] or even [warn] level instead. > > > > Why would max_size not be maintained properly? Isn't that the > responsibility cache manager process? Are there known issues/bugs? Cache manager process uses the same shared memory zone to maintain max_size. And if nginx thinks a cache file is here, but the file was in fact already deleted (this is why alerts in question appear) - total size of the cache as recorded in the shared memory will be incorrect. As a result cache manager will delete some extra files to keep (incorrect) size under max_size. In a worst case cache size will be again correct after inactive= time passes after cache files were deleted. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 6 13:59:01 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 06 May 2013 09:59:01 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> References: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> Message-ID: ehlo, one question: do you shutdown all your app-servers or server-by-server, so you still have a available application? there ist the "down" option for you upstream-block to disable servers, even if they are up, but using this in a dynamic process might get very frickling. whet do you use for iptables-rules? drop/reset? i'd debug your server/app-ports when the iptables-script enforces no connections, from my belly i wouldnt expect nginx to be the faulty chain link. what does your log tells you when your appservers come up again and the iptables-block is enforced? regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238898#msg-238898 From mdounin at mdounin.ru Mon May 6 13:59:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2013 17:59:10 +0400 Subject: how to disable nginx internal dns cache? In-Reply-To: <3084de9462856b91e931f2063d5ff849.NginxMailingListEnglish@forum.nginx.org> References: <20130504130350.GL27406@craic.sysops.org> <3084de9462856b91e931f2063d5ff849.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130506135910.GN69760@mdounin.ru> Hello! 
On Mon, May 06, 2013 at 09:31:14AM -0400, nevernet wrote: > i have resolver define, see below configuration: > > server { > listen 80; > server_name xx.com; > access_log /var/log/nginx/xx-nginx.access.log; > error_log /var/log/nginx/xx-nginx_error.log debug; > > # resolver 8.8.8.8; > # resolver_timeout 1s; > > #set your default location > location / { > # resolver 8.8.8.8 valid=5s; # i have defined resolver at here ,and also > tried to add it in "http" section in nginx.conf file. but both of them > doesnt work. > proxy_pass http://p2.domain.com; > } Resolver directive is only used if you use variables in proxy_pass. Use something like this to force resolver usage: location / { set $backend "p2.domain.com"; proxy_pass http://$backend; } By default nginx resolvers hostnames to ip addresses while loading configuration, and will not re-resolve them unless you'll reload configuration (see http://nginx.org/en/docs/control.html). -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 6 14:01:44 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 06 May 2013 10:01:44 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: References: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> Message-ID: mex Wrote: ------------------------------------------------------- > ehlo, > > > one question: do you shutdown all your app-servers or > server-by-server, so you still have a available application? my bad, please read: do you shutdown all your app-servers at once or server-after-server, so you still have a available application? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238900#msg-238900 From jim at ohlste.in Mon May 6 14:21:29 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 06 May 2013 10:21:29 -0400 Subject: [crit] 16665#0 unlink() In-Reply-To: <20130506135419.GM69760@mdounin.ru> References: <92f5199b7254d20f3d7ebfdd6e4c249d.NginxMailingListEnglish@forum.nginx.org> <20130503220111.GB69760@mdounin.ru> <51859507.1090408@ohlste.in> <20130505203246.GF69760@mdounin.ru> <5187A9B9.1030001@ohlste.in> <20130506135419.GM69760@mdounin.ru> Message-ID: <5187BC69.8000607@ohlste.in> On 05/06/13 09:54, Maxim Dounin wrote: > Hello! > > On Mon, May 06, 2013 at 09:01:45AM -0400, Jim Ohlstein wrote: > >> On 05/05/13 16:32, Maxim Dounin wrote: >>> Hello! >>> >>> On Sat, May 04, 2013 at 07:08:55PM -0400, Jim Ohlstein wrote: >>> >>> [...] >>> >>>> I have just seen a similar situation using fastcgi cache. In my case >>>> I am using the same cache (but only one cache) for several >>>> server/location blocks. The system is a fairly basic nginx set up >>>> with four upstream fastcgi servers and ip hash. The returned content >>>> is cached locally by nginx. The cache is rather large but I wouldn't >>>> think this would be the cause. >>> >>> [...] >>> >>>> fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 >>>> keys_zone=one:512m max_size=250g inactive=24h; >>> >>> [...] >>> >>>> The other sever/location blocks are pretty much identical insofar as >>>> fastcgi and cache are concerned. >>>> >>>> When I upgraded nginx using the "on the fly" binary upgrade method, >>>> I saw almost 400,000 lines in the error log that looked like this: >>>> >>>> 2013/05/04 17:54:25 [crit] 65304#0: unlink() >>>> "/var/nginx/fcgi_cache/7/2e/899bc269a74afe6e0ad574eacde4e2e7" failed >>>> (2: No such file or directory) >>> >>> [...] 
>>> >>> After binary upgrade there are two cache zones - one in old nginx, >>> and another one in new nginx (much like in originally posted >>> configuration). This may cause such errors if e.g. a cache file >>> is removed by old nginx, and new nginx fails to remove the file >>> shortly after. >>> >>> The 400k lines is a bit too many though. You may want to check >>> that the cache wasn't just removed by some (package?) script >>> during the upgrade process. Alternatively, it might indicate that >>> you let old and new processes to coexist for a long time. >> >> I hadn't considered that there are two zones during that short time. >> Thanks for pointing that out. >> >> To my knowledge, there are no scripts or packages which remove files >> from the cache, or the entire cache. A couple of minutes after this >> occurred there were a bit under 1.4 million items in the cache and >> it was "full" at 250 GB. I did look in a few sub-directories at the >> time, and most of the items were time stamped from before this >> started so clearly the entire cache was not removed. During the time >> period these entries were made in the error log, and in the two >> minutes after, access log entries show the expected ratio of "HIT" >> and "MISS" entries which further supports your point below that >> these are harmless (although I don't really believe that I have a >> cause). >> >> I'm not sure what you mean by a "long time" but all of these entries >> are time stamped over over roughly two and a half minutes. > > Is it ok in your setup that 400k cache items are removed/expired > from cache in two minutes? If yes, then it's probably ok. No, that is way more than expected. The box handles an average of 300-500 requests/second during peak hours, spiking around 800-900. so that would be at most around 150,000 requests in three minutes. Even if 150,000 requests were all cache-able and were all cache misses (resulting in them all expiring at the same time in the future) that could not explain all of those items. FWIW, this upgrade was done on a weekend. Peak times are "business hours" in Europe and North America. The box was relatively slow at that time. > >>> On the other hand, as discussed many times - such errors are more >>> or less harmless as soon as it's clear what caused cache files to >>> be removed. At worst they indicate that information in a cache >>> zone isn't correct and max_size might not be maintained properly, >>> and eventually nginx will self-heal the cache zone. It probably >>> should be logged at [error] or even [warn] level instead. >>> >> >> Why would max_size not be maintained properly? Isn't that the >> responsibility cache manager process? Are there known issues/bugs? > > Cache manager process uses the same shared memory zone to maintain > max_size. And if nginx thinks a cache file is here, but the file > was in fact already deleted (this is why alerts in question > appear) - total size of the cache as recorded in the shared memory > will be incorrect. As a result cache manager will delete some > extra files to keep (incorrect) size under max_size. > > In a worst case cache size will be again correct after inactive= > time passes after cache files were deleted. > OK, that makes sense and is what I would expect. I'm still troubled by how many items there were in discrepancy. I will watch and see what happens the next time I upgrade. 
I'll look at how many items are in the cache directory before and after, as well as the total size, which was on the mark after the upgrade this time, but perhaps not before. -- Jim Ohlstein From hems.inlet at gmail.com Mon May 6 15:35:09 2013 From: hems.inlet at gmail.com (henrique matias) Date: Mon, 6 May 2013 16:35:09 +0100 Subject: Converting subdomain to path component without redirect ? In-Reply-To: <1367796214.25673.1613.camel@steve-new> References: <20130504214834.GO27406@craic.sysops.org> <20130505081631.GR27406@craic.sysops.org> <1367796214.25673.1613.camel@steve-new> Message-ID: Hello Steve, yeah i took out ( : Everything is working fine! now am starting to dig the cache module, so my backend doesnt get called too many times in a short period of time.. thanks a lot guys for all your help ( : On Monday, 6 May 2013, Steve Holdoway wrote: > On Mon, 2013-05-06 at 00:14 +0100, henrique matias wrote: > [snip] > > > > brings me the error: Starting nginx: nginx: [emerg] "proxy_pass" > > cannot have URI part in location given by regular expression, or > > inside named location, or inside "if" statement, or inside > > "limit_except" block in /etc/nginx/nginx.conf:121 > > nginx: configuration file /etc/nginx/nginx.conf test failed > > ...so take it out of the named location?? > > Steve > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 6 16:12:44 2013 From: nginx-forum at nginx.us (mevans336) Date: Mon, 06 May 2013 12:12:44 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: References: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1b653a3f9d9975d9ece887c32c9ac1c2.NginxMailingListEnglish@forum.nginx.org> Hi Mex, We shut them down one-by-one, 45 minutes apart. The issue only seems to occur when the first server listed is blocked however. We don't see the read timeouts if I leave the iptables rules enabled on the second server. I think that may be a false symptom related to ip_hash binding clients to the first server. Here are the iptables rules: Drop rule: iptables -I INPUT -s 192.168.1.0/24 -j DROP Allow rule: iptables -D INPUT -s 192.168.1.0/24 -j DROP I also thought about trying to add "down" to the servers in the upstream block, but as you said that would be rather complex to script. 
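For reference, a minimal sketch of what that would look like if it were scripted — "down" is set per server inside the upstream block and still needs a config reload to take effect (the addresses are the ones from the earlier upstream example):

upstream app_servers {
    ip_hash;
    # during the maintenance window, mark the restarting server "down",
    # then remove the flag and reload nginx once it has warmed its caches
    server 192.168.1.12:8080 down;
    server 192.168.1.13:8080 max_fails=3 fail_timeout=30s;
}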
The only error I see is a 499 error in the Nginx logs, followed by a 200: ip.address - - [06/May/2013:01:50:53 -0400] "GET /home HTTP/1.1" 499 0 "-" "Mozilla 4.0" ip.address - - [06/May/2013:01:52:04 -0400] "GET /home HTTP/1.1" 200 24781 "-" "Mozilla/5.0 (compatible; PRTG Network Monitor (www.paessler.com); Windows)" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238906#msg-238906 From nginx-forum at nginx.us Mon May 6 16:15:47 2013 From: nginx-forum at nginx.us (mevans336) Date: Mon, 06 May 2013 12:15:47 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: <1b653a3f9d9975d9ece887c32c9ac1c2.NginxMailingListEnglish@forum.nginx.org> References: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> <1b653a3f9d9975d9ece887c32c9ac1c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7dabcb163c2cbd4c7b47f6e365fed2d2.NginxMailingListEnglish@forum.nginx.org> Oops, here is the relevant error.log entry from Nginx as well: 013/05/06 01:46:03 [error] 2063#0: *294659 upstream timed out (110: Connection timed out) while connecting to upstream, client: ip.address, server: amywebsite.com, request: "GET /home HTTP/1.1", upstream: "http://192.168.1.12:8080/home", host: "www.mywebsite.com" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238907#msg-238907 From cloos at jhcloos.com Mon May 6 16:19:00 2013 From: cloos at jhcloos.com (James Cloos) Date: Mon, 06 May 2013 12:19:00 -0400 Subject: Nginx 1.4 problem In-Reply-To: (Lukas Tribus's message of "Fri, 3 May 2013 10:00:35 +0200") References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: >>>>> "LT" == Lukas Tribus writes: >> Although [::]:80 ipv6only=off; does work as advertized (including for >> localhost sockets), [::1]:80 ipv6only=off; fails to respond to v4 >> connections. LT> Which is expected, since ::1 is an ipv6 address. No it is not expected. Everything else sees conenctions to anything in 127.0.0.0/8 when listen(2)ing to ::1 with bindv6only off. -JimC -- James Cloos OpenPGP: 1024D/ED7DAEA6 From miguelmclara at gmail.com Mon May 6 16:32:31 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Mon, 6 May 2013 17:32:31 +0100 Subject: Nginx 1.4 problem In-Reply-To: References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: If this is a linux box you could simple use [::]:80 and it should, by default, responde to both v4 and v6... "In Linux by default any IPv6 TCP socket also accepts IPv4 traffic using the IPv4 to IPv6 mapped address format, i.e., ::ffff:. E.g., ::ffff:192.168.0.27 maps the IPv4 address 192.168.0.27 to an IPv6 address. When you enable the address [::]:80, binding port 80 using IPv6, in the listen directive, in Linux, by default, the IPv4 port 80 is also enabled. Meaning that nginx listens for both IPv4 and IPv6 incoming traffic. Therefore if you erroneously specify also a IPv4 address you'll get an already bind address error when reloading nginx configuration." ( http://wiki.nginx.org/HttpCoreModule) " So I guess ipv6only=off should do the same... and it should work.... I can be 100% sure since I don't have any box with nginx 1.4 + ipv6 yet! On Mon, May 6, 2013 at 5:19 PM, James Cloos wrote: > >>>>> "LT" == Lukas Tribus writes: > > >> Although [::]:80 ipv6only=off; does work as advertized (including for > >> localhost sockets), [::1]:80 ipv6only=off; fails to respond to v4 > >> connections. > > LT> Which is expected, since ::1 is an ipv6 address. > > No it is not expected. 
> > Everything else sees conenctions to anything in 127.0.0.0/8 when > listen(2)ing to ::1 with bindv6only off. > > -JimC > -- > James Cloos OpenPGP: 1024D/ED7DAEA6 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon May 6 16:36:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2013 20:36:09 +0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: <1b653a3f9d9975d9ece887c32c9ac1c2.NginxMailingListEnglish@forum.nginx.org> References: <584dd8ebd130f16562ee668be3df4dea.NginxMailingListEnglish@forum.nginx.org> <1b653a3f9d9975d9ece887c32c9ac1c2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130506163608.GR69760@mdounin.ru> Hello! On Mon, May 06, 2013 at 12:12:44PM -0400, mevans336 wrote: > Hi Mex, > > We shut them down one-by-one, 45 minutes apart. The issue only seems to > occur when the first server listed is blocked however. We don't see the read > timeouts if I leave the iptables rules enabled on the second server. I think > that may be a false symptom related to ip_hash binding clients to the first > server. Timeouts are expected to appear in logs once per fail_timeout= specified (after fail_timeout expires, nginx will route one request to a server in question to check if it's alive again). As only certain ips are mapped to the server blocked with ip_hash, it might nontrivial to test things with low traffic. > Here are the iptables rules: > > Drop rule: iptables -I INPUT -s 192.168.1.0/24 -j DROP > Allow rule: iptables -D INPUT -s 192.168.1.0/24 -j DROP Using "-j REJECT" would make things a lot faster. [...] -- Maxim Dounin http://nginx.org/en/donation.html From al-nginx at none.at Mon May 6 16:40:30 2013 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 06 May 2013 18:40:30 +0200 Subject: Q: about "best" way for remove www in hostname in nginxish Message-ID: Dear readers, after reading http://nginx.org/en/docs/http/server_names.html#regex_names and googleing https://www.google.at/search?q=nginx+remove+www+subdomain I have a 'best solution' question. I have the following customer request. The 'normal User' type almost every time a www.subdomain.domain.at into they browser, which does not exist but the subdomain.domain.at exists. I would now add the follwing into my nginx.conf. ### server { server_name ~^(www\.)?(?.+)$; return http://$domain/; } ### Is this the cleanest way in nginxish? Thanks for help Aleks From mike503 at gmail.com Mon May 6 16:49:01 2013 From: mike503 at gmail.com (Michael Shadle) Date: Mon, 6 May 2013 09:49:01 -0700 Subject: Q: about "best" way for remove www in hostname in nginxish In-Reply-To: References: Message-ID: I just do server { listen 80; server_name was.foo.com; rewrite ^ http://foo.com$uri permanent; } On May 6, 2013, at 9:40 AM, Aleksandar Lazic wrote: > Dear readers, > > after reading > > http://nginx.org/en/docs/http/server_names.html#regex_names > > and googleing > > https://www.google.at/search?q=nginx+remove+www+subdomain > > I have a 'best solution' question. > > I have the following customer request. > > The 'normal User' type almost every time a www.subdomain.domain.at into they browser, > which does not exist but the subdomain.domain.at exists. > > I would now add the follwing into my nginx.conf. 
> > ### > server { > server_name ~^(www\.)?(?.+)$; > > return http://$domain/; > } > ### > > Is this the cleanest way in nginxish? > > Thanks for help > Aleks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From contact at jpluscplusm.com Mon May 6 16:50:07 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 6 May 2013 17:50:07 +0100 Subject: Q: about "best" way for remove www in hostname in nginxish In-Reply-To: References: Message-ID: That's how I do it, except that I hard-code a list of the domains I want to redirect. Otherwise, I'd have built a service that could be trivially used by anyone else for their own domain. Wildcards are bad, mmkay? ;-) IMHO and YMMV, Jonathan From luky-37 at hotmail.com Mon May 6 17:14:13 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 6 May 2013 19:14:13 +0200 Subject: Nginx 1.4 problem In-Reply-To: References: , <20130501174009.GA10443@mdounin.ru>, , , <20130501210759.GC10443@mdounin.ru>, , , , , Message-ID: Hi Jim, > Everything else sees conenctions to anything in 127.0.0.0/8 Not sure what you mean by everything else, but I don't think thats the case. See this example: > lukas at ubuntuvm:~$ grep Listen /etc/ssh/sshd_config > ListenAddress ::1 > ListenAddress 10.0.0.55 > lukas at ubuntuvm:~$ sudo netstat -tulpen | egrep "Internet|Proto|sshd" > Active Internet connections (only servers) > Proto Recv-Q Send-Q Local Address?????????? Foreign Address???????? State?????? User?????? Inode?????? PID/Program name > tcp??????? 0????? 0 10.0.0.55:22??????????? 0.0.0.0:*?????????????? LISTEN????? 0????????? 18735?????? 8610/sshd > tcp6?????? 0????? 0 ::1:22????????????????? :::*??????????????????? LISTEN????? 0????????? 18737?????? 8610/sshd > lukas at ubuntuvm:~$ > lukas at ubuntuvm:~$ > lukas at ubuntuvm:~$ telnet 127.0.0.1 22 > Trying 127.0.0.1... > telnet: Unable to connect to remote host: Connection refused > lukas at ubuntuvm:~$ telnet ::1 22 > Trying ::1... > Connected to ::1. > Escape character is '^]'. > SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 > ^] > telnet> quit > Connection closed. > lukas at ubuntuvm:~$ From nginx-forum at nginx.us Mon May 6 17:47:13 2013 From: nginx-forum at nginx.us (mevans336) Date: Mon, 06 May 2013 13:47:13 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: <20130506163608.GR69760@mdounin.ru> References: <20130506163608.GR69760@mdounin.ru> Message-ID: <98dd02cd55a04f9ec9b81204483eaa4f.NginxMailingListEnglish@forum.nginx.org> I didn't even think about rejecting the traffic rather than dropping it! Great idea! Would that allow the client connection (Browser to Nginx) to fail over to the backend server that is up rather than simply timing out? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238913#msg-238913 From g.plumb at gmail.com Mon May 6 17:47:30 2013 From: g.plumb at gmail.com (Gee) Date: Mon, 6 May 2013 18:47:30 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: Hi I am having trouble getting Mono to work with nginx. I installed my OS (OpenBSD 5.3) and set up ports. I built mono, mono-xsp and nginx - all without incident. All three appear to be working OK, but not in conjunction. I am trying to run the default MVC3 web app, but keep getting a 502 (Bad gateway). 
In the log, I see the following: [crit] 31764#0: *1 connect() to unix:/tmp/fastcgi.socket failed (2: No such file or directory) while connecting to upstream, The frustrating thing here is that /tmp/fastcgi.socket does actually exist. I tried 'touch' and making sure 'wheel' has the appropriate permissions. The result of 'ls -la /tmp/fastcgi.socket' revealed nothing awry. Does anyone have any ideas/hints? To try and save time, here is my config: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server { listen 80; access_log /home/www/nginx.log; error_log /home/www/errors.log; # root /home/www/test; # index index.html index.htm index.aspx default.aspx; location ^~ /Scripts/ { } location ^~ /Content/ { } location / { root /home/www/test; # fastcgi_index /Home/Index; fastcgi_pass unix:/tmp/fastcgi.socket; # include fastcgi_params; include /etc/nginx/fastcgi_params; } } } Thanks G -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloos at jhcloos.com Mon May 6 17:58:40 2013 From: cloos at jhcloos.com (James Cloos) Date: Mon, 06 May 2013 13:58:40 -0400 Subject: Nginx 1.4 problem In-Reply-To: (Lukas Tribus's message of "Mon, 6 May 2013 19:14:13 +0200") References: <20130501174009.GA10443@mdounin.ru> <20130501210759.GC10443@mdounin.ru> Message-ID: >>>>> "LT" == Lukas Tribus writes: LT> Hi Jim, >> Everything else sees conenctions to anything in 127.0.0.0/8 LT> Not sure what you mean by everything else, but I don't think LT> thats the case. Some time ago a message on debian-devel noted that deb was going to start defaulting to bindv6only=1. What is your /proc/sys/net/ipv6/bindv6only? While looking into this, I found that, when given ::1, nc(1) explicitly listens to both ::1 and ::ffff:127.0.0.1. (Although it has a bug which causes it to close all v4 connections right after accept(2)ing them. :) My expectation was that it would work, but it seems some applications force it to work by Doing What I Mean behind my back. :) Evidently I'm going to have to write a test app to confirm things. Probably later today. -JimC -- James Cloos OpenPGP: 1024D/ED7DAEA6 From rkearsley at blueyonder.co.uk Mon May 6 18:20:27 2013 From: rkearsley at blueyonder.co.uk (Richard Kearsley) Date: Mon, 06 May 2013 19:20:27 +0100 Subject: Mono + nginx (OpenBSD 5.3) In-Reply-To: References: Message-ID: <5187F46B.9040302@blueyonder.co.uk> On 06/05/13 18:47, Gee wrote: > > > The frustrating thing here is that /tmp/fastcgi.socket does actually > exist. I tried 'touch' and making sure 'wheel' has the appropriate > permissions. The result of 'ls -la /tmp/fastcgi.socket' revealed > nothing awry. > > Does anyone have any ideas/hints? > > To try and save time, here is my config: > see if you can connect to the unix/fcgi socket like this: http://www.ralf-lang.de/2011/11/22/using-socat-to-debug-unix-sockets-like-telnet-for-tcp/ other than that.. I'm out of ideas :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon May 6 18:26:37 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 6 May 2013 19:26:37 +0100 Subject: Mono + nginx (OpenBSD 5.3) In-Reply-To: References: Message-ID: On 6 May 2013 18:47, Gee wrote: > Hi > > I am having trouble getting Mono to work with nginx. I installed my OS > (OpenBSD 5.3) and set up ports. I built mono, mono-xsp and nginx - all > without incident. All three appear to be working OK, but not in > conjunction. 
> > I am trying to run the default MVC3 web app, but keep getting a 502 (Bad > gateway). In the log, I see the following: > > [crit] 31764#0: *1 connect() to unix:/tmp/fastcgi.socket failed (2: No such > file or directory) while connecting to upstream, I vaguely recall seeing this when running some Mono stuff behind a local nginx proxy. I moved off the project before seeing how people fixed it, but ISTR that it was a permissions issue. As a *test*(!), chmod 777 /tmp/fastcgi.socket *and*restart*everything*. That's obviously not a fix, but it'll at least show you if you're heading in the right direction. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From reallfqq-nginx at yahoo.fr Mon May 6 18:30:40 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 6 May 2013 14:30:40 -0400 Subject: Q: about "best" way for remove www in hostname in nginxish In-Reply-To: References: Message-ID: I do: if ($host ~* "^www\.(.*)$") { set "$domain" "$1"; rewrite (.*) http://$domain$uri permanent; } --- *B. R.* On Mon, May 6, 2013 at 12:50 PM, Jonathan Matthews wrote: > That's how I do it, except that I hard-code a list of the domains I > want to redirect. Otherwise, I'd have built a service that could be > trivially used by anyone else for their own domain. Wildcards are bad, > mmkay? ;-) > > IMHO and YMMV, > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon May 6 18:32:19 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 6 May 2013 20:32:19 +0200 Subject: Nginx 1.4 problem In-Reply-To: References: , <20130501174009.GA10443@mdounin.ru>, , , <20130501210759.GC10443@mdounin.ru>, , , , , , , Message-ID: Even when explicitly setting the socket option IPV6_V6ONLY to 0 (man 7 ipv6) - and thus ignoring ?cat /proc/sys/net/ipv6/bindv6only? this doesn't work. > While looking into this, I found that, when given ::1, nc(1) > explicitly listens to both ::1 and ::ffff:127.0.0.1. The behavior here is exactly the opposite. Perhaps you saw this with an older/buggy kernel or netcat release? # terminal 1 > lukas at ubuntuvm:~$ nc -vv -l ::1 8080 > Connection from 0.0.0.0 port 8080 [tcp/http-alt] accepted > > asd > lukas at ubuntuvm:~$ # terminal 2 > lukas at ubuntuvm:~$ sudo netstat -tulpen | grep nc > tcp6?????? 0????? 0 ::1:8080??????????????? :::*??????????????????? LISTEN????? 1000?????? 19801?????? 9002/nc > lukas at ubuntuvm:~$ > lukas at ubuntuvm:~$ telnet 127.0.0.1 8080 > Trying 127.0.0.1... > telnet: Unable to connect to remote host: Connection refused > lukas at ubuntuvm:~$ telnet ::1 8080 > Trying ::1... > Connected to ::1. > Escape character is '^]'. > > asd > ^] > telnet> quit > Connection closed. > lukas at ubuntuvm:~$ Regards, Lukas From nginx-forum at nginx.us Mon May 6 18:41:27 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 06 May 2013 14:41:27 -0400 Subject: Upstream Read Timeout Upon Backend Server Offline In-Reply-To: <98dd02cd55a04f9ec9b81204483eaa4f.NginxMailingListEnglish@forum.nginx.org> References: <20130506163608.GR69760@mdounin.ru> <98dd02cd55a04f9ec9b81204483eaa4f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3b2316aa16d2c86775b2985be812e04a.NginxMailingListEnglish@forum.nginx.org> if you REJECT from iptables you tell the client immediatly that the service/port is not available, otherwise you run into timeouts, yes. 
I'm not quite sure, but with max_fails=3 and fail_timeout=30s it can take on the order of 90 seconds before your nginx fails over to the other server. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238894,238918#msg-238918 From g.plumb at gmail.com Mon May 6 19:16:17 2013 From: g.plumb at gmail.com (Gee) Date: Mon, 6 May 2013 20:16:17 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: Richard - I ran netcat, but no debug output appeared when running nginx/mono. I'm not sure if this is a sign of anything or not though? :-( Jonathan - I tried your chmod suggestion - but still no joy :-( Any other ideas? *fingers crossed* Thanks G -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 6 19:31:34 2013 From: nginx-forum at nginx.us (alexander_koch_log) Date: Mon, 06 May 2013 15:31:34 -0400 Subject: module example with rbtree Message-ID: <4a906ab3a3f4fcfd69e3bef4af59c3f2.NginxMailingListEnglish@forum.nginx.org> Hi, Besides the file cache and SSL session sharing, does anyone know of a 3rd party module which uses an rbtree to store information? Thanks, Alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238920,238920#msg-238920 From al-nginx at none.at Mon May 6 20:19:58 2013 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 06 May 2013 22:19:58 +0200 Subject: Q: about "best" way for remove www in hostname in nginxish In-Reply-To: References: Message-ID: I remember this and now also where I have seen it ;-)
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238923,238923#msg-238923 From nginx-forum at nginx.us Tue May 7 01:15:35 2013 From: nginx-forum at nginx.us (juice-qr) Date: Mon, 06 May 2013 21:15:35 -0400 Subject: cookie based proxy redirect In-Reply-To: <9997afc718c5fbad7fbd5629038bde04.NginxMailingListEnglish@forum.nginx.org> References: <9997afc718c5fbad7fbd5629038bde04.NginxMailingListEnglish@forum.nginx.org> Message-ID: <43ba047d3a5c1d141cc65918d9b12432.NginxMailingListEnglish@forum.nginx.org> Configs : http://pastebin.com/QDpjDSBY http://pastebin.com/SyQTmq8v Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238923,238926#msg-238926 From reallfqq-nginx at yahoo.fr Tue May 7 02:24:52 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 6 May 2013 22:24:52 -0400 Subject: Q: about "best" way for remove www in hostname in nginxish In-Reply-To: References: Message-ID: You're right, my solution sucks... but it has the benefit of being generic. I'll look into it. Glad you found a solution, though. ;o) --- *B. R.* On Mon, May 6, 2013 at 4:19 PM, Aleksandar Lazic wrote: > ** > > I remember this and now also where I have seen it ;-) > > http://wiki.nginx.org/Pitfalls#Server_Name > > Thanks for all feedback. > > I will try the Solution written in first post. > > Br aleks > > Am 06-05-2013 20:30, schrieb B.R.: > > I do: > if ($host ~* "^www\.(.*)$") { > set "$domain" "$1"; > rewrite (.*) http://$domain$uri permanent; > } > --- > *B. R.* > > > On Mon, May 6, 2013 at 12:50 PM, Jonathan Matthews < > contact at jpluscplusm.com> wrote: > >> That's how I do it, except that I hard-code a list of the domains I >> want to redirect. Otherwise, I'd have built a service that could be >> trivially used by anyone else for their own domain. Wildcards are bad, >> mmkay? ;-) >> >> IMHO and YMMV, >> Jonathan >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 7 07:40:02 2013 From: nginx-forum at nginx.us (alexander_koch_log) Date: Tue, 07 May 2013 03:40:02 -0400 Subject: NGX_HTTP_CACHE and upstream Message-ID: Hi, When compiling nginx with NGX_HTTP_CACHE and using upstream backends - are responses automatically cached in memory without the use of proxy_cache? Thanks, Alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238929,238929#msg-238929 From nginx-forum at nginx.us Tue May 7 08:51:15 2013 From: nginx-forum at nginx.us (nevernet) Date: Tue, 07 May 2013 04:51:15 -0400 Subject: how to disable nginx internal dns cache? In-Reply-To: <20130506135910.GN69760@mdounin.ru> References: <20130506135910.GN69760@mdounin.ru> Message-ID: <045571295e7178fbdbd2753050a059b2.NginxMailingListEnglish@forum.nginx.org> Hi, Maxim thanks for you replies. i have already checked out the problem. Yes ,the nginx will re-resolve the domains unless reload the configuration. this is my problem. because i am using Dynamic IP address. thank you. 
Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238835,238931#msg-238931 From mdounin at mdounin.ru Tue May 7 09:24:06 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 13:24:06 +0400 Subject: NGX_HTTP_CACHE and upstream In-Reply-To: References: Message-ID: <20130507092405.GV69760@mdounin.ru> Hello! On Tue, May 07, 2013 at 03:40:02AM -0400, alexander_koch_log wrote: > When compiling nginx with NGX_HTTP_CACHE and using upstream backends - are > responses automatically cached in memory without the use of proxy_cache? No. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue May 7 09:45:50 2013 From: nginx-forum at nginx.us (Krupa) Date: Tue, 07 May 2013 05:45:50 -0400 Subject: proxy pass encoding % problem Message-ID: I am using nginx as a reverse proxy. My requirement is nginx needs to pass the url passed by client as-is to the proxy server. I have set the proxy_pass to hostname:port without the uri part As per the docs ,If it is necessary to transmit URI in the unprocessed form then directive proxy_pass should be used without URI part Despite this,nginx does some sort of processing on the uri.The percentages get escaped and are replaced by %25. Example: Client url - http://10.10.10.10:90/this%2Bthat/ proxy_pass - http://10.10.2.50:8080 Uri sent by nginx after processing - /this%25bthat/ Additionaly,when client passes latin characters like %EA,proxy_pass further encodes to some random value which cannot be processed by the client. Note that changing charset isn't of much help here. Any insights,approaches to circumvent this is highly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238933,238933#msg-238933 From mdounin at mdounin.ru Tue May 7 10:37:42 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 14:37:42 +0400 Subject: proxy pass encoding % problem In-Reply-To: References: Message-ID: <20130507103741.GX69760@mdounin.ru> Hello! On Tue, May 07, 2013 at 05:45:50AM -0400, Krupa wrote: > I am using nginx as a reverse proxy. My requirement is nginx needs to pass > the url passed by client as-is to the proxy server. > I have set the proxy_pass to hostname:port without the uri part > As per the docs ,If it is necessary to transmit URI in the unprocessed form > then directive proxy_pass should be used without URI part > > Despite this,nginx does some sort of processing on the uri.The percentages > get escaped and are replaced by %25. > > Example: Client url - http://10.10.10.10:90/this%2Bthat/ > > proxy_pass - http://10.10.2.50:8080 > > Uri sent by nginx after processing - /this%25bthat/ > > Additionaly,when client passes latin characters like %EA,proxy_pass further > encodes to some random value which cannot be processed by the client. > Note that changing charset isn't of much help here. > > Any insights,approaches to circumvent this is highly appreciated. Symptoms you describe suggests there are rewrites in you config which mess up things. Note that proxy_pass even without URI will have to re-encode URI if you change it with rewrites. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue May 7 10:48:11 2013 From: nginx-forum at nginx.us (Krupa) Date: Tue, 07 May 2013 06:48:11 -0400 Subject: proxy pass encoding % problem In-Reply-To: <20130507103741.GX69760@mdounin.ru> References: <20130507103741.GX69760@mdounin.ru> Message-ID: <3651d78cd7e63a16b8503cc54ac1f5dd.NginxMailingListEnglish@forum.nginx.org> There are no rewrite policies. 
location /gw_proxy { #internal; #resolver 8.8.8.8; proxy_http_version 1.1; proxy_pass http://50.112.76.185:9001; proxy_pass_request_body off; proxy_set_header Content-Length 0; } invoke nginx with this url : http://54.245.39.250:8081/gw_proxy/test%2btest/ uri sent by nginx : /gw_proxy/test%252btest/ local res = ngx.location.capture("/gw_proxy" .. ngx.var.request_uri, { method = ngx_methods[gw_method], body = "", vars = { } }); This is my configuration. Please advise. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238933,238935#msg-238935 From christian.boenning at gmail.com Tue May 7 10:50:12 2013 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Tue, 7 May 2013 12:50:12 +0200 Subject: Mainline Ubuntu Packages Message-ID: Hi, I think about moving from custom compiled versions to Mainline Ubuntu Packages provided at nginx.org/packages. The issue in question currently is which parameters for ./configure are used to build those packages? is there any documentation regarding which modules are built in? Or is there any repository for that stuff you use to build those packages to have a look? best regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Tue May 7 11:07:37 2013 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 07 May 2013 15:07:37 +0400 Subject: Mainline Ubuntu Packages In-Reply-To: References: Message-ID: <5188E079.806@citrin.ru> On 05/07/13 14:50, Christian B?nning wrote: > The issue in question currently is which parameters for ./configure are used to > build those packages? run nginx -V From christian.boenning at gmail.com Tue May 7 11:13:27 2013 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Tue, 7 May 2013 13:13:27 +0200 Subject: Mainline Ubuntu Packages In-Reply-To: <5188E079.806@citrin.ru> References: <5188E079.806@citrin.ru> Message-ID: Thanks. But I would just like to know that before I actually start to deploy something on our development/testing/staging/production environments ... ;) Best regards, Christian 2013/5/7 Anton Yuzhaninov > On 05/07/13 14:50, Christian B?nning wrote: > >> The issue in question currently is which parameters for ./configure are >> used to >> build those packages? >> > > run > nginx -V > > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 7 11:29:09 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 15:29:09 +0400 Subject: nginx-1.5.0 Message-ID: <20130507112909.GZ69760@mdounin.ru> Changes with nginx 1.5.0 07 May 2013 *) Security: a stack-based buffer overflow might occur in a worker process while handling a specially crafted request, potentially resulting in arbitrary code execution (CVE-2013-2028); the bug had appeared in 1.3.9. Thanks to Greg MacManus, iSIGHT Partners Labs. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue May 7 11:29:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 15:29:46 +0400 Subject: nginx-1.4.1 Message-ID: <20130507112946.GD69760@mdounin.ru> Changes with nginx 1.4.1 07 May 2013 *) Security: a stack-based buffer overflow might occur in a worker process while handling a specially crafted request, potentially resulting in arbitrary code execution (CVE-2013-2028); the bug had appeared in 1.3.9. 
Thanks to Greg MacManus, iSIGHT Partners Labs. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue May 7 11:30:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 15:30:21 +0400 Subject: nginx security advisory (CVE-2013-2028) Message-ID: <20130507113021.GH69760@mdounin.ru> Hello! Greg MacManus, of iSIGHT Partners Labs, found a security problem in several recent versions of nginx. A stack-based buffer overflow might occur in a worker process while handling a specially crafted request, potentially resulting in arbitrary code execution (CVE-2013-2028). The problem affects nginx 1.3.9 - 1.4.0. The problem is fixed in nginx 1.5.0, 1.4.1. Patch for the problem can be found here: http://nginx.org/download/patch.2013.chunked.txt As a temporary workaround the following configuration can be used in each server{} block: if ($http_transfer_encoding ~* chunked) { return 444; } -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue May 7 12:24:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2013 16:24:47 +0400 Subject: proxy pass encoding % problem In-Reply-To: <3651d78cd7e63a16b8503cc54ac1f5dd.NginxMailingListEnglish@forum.nginx.org> References: <20130507103741.GX69760@mdounin.ru> <3651d78cd7e63a16b8503cc54ac1f5dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130507122447.GM69760@mdounin.ru> Hello! On Tue, May 07, 2013 at 06:48:11AM -0400, Krupa wrote: > There are no rewrite policies. > location /gw_proxy { > #internal; > #resolver 8.8.8.8; > proxy_http_version 1.1; > proxy_pass http://50.112.76.185:9001; > proxy_pass_request_body off; > proxy_set_header Content-Length 0; > } > > invoke nginx with this url : > http://54.245.39.250:8081/gw_proxy/test%2btest/ > uri sent by nginx : /gw_proxy/test%252btest/ Works fine here. Quote from debug log: ... 2013/05/07 16:12:53 [debug] 36830#0: *1 http request line: "GET /gw_proxy/test%2btest/ HTTP/1.1" ... 2013/05/07 16:12:53 [debug] 36830#0: *1 test location: "/gw_proxy" 2013/05/07 16:12:53 [debug] 36830#0: *1 using configuration "/gw_proxy" ... 2013/05/07 16:12:53 [debug] 36830#0: *1 http proxy header: "GET /gw_proxy/test%2btest/ HTTP/1.1 Content-Length: 0 ... That is, request line sent to a backend is identical to one got from a client. You may want to check if you have problems on client and/or backend instead. In any case, looking into nginx debug log might be helpful. -- Maxim Dounin http://nginx.org/en/donation.html From sb at waeme.net Tue May 7 12:25:14 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 7 May 2013 16:25:14 +0400 Subject: Mainline Ubuntu Packages In-Reply-To: References: <5188E079.806@citrin.ru> Message-ID: On 7 May2013, at 15:13 , Christian B?nning wrote: > Thanks. But I would just like to know that before I actually start to deploy something on our development/testing/staging/production environments ... 
;) % nginx -V nginx version: nginx/1.5.0 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 on ubuntu 12.04 and 12.10 --with-http_spdy_module is added. From christian.boenning at gmail.com Tue May 7 13:15:52 2013 From: christian.boenning at gmail.com (=?ISO-8859-1?Q?Christian_B=F6nning?=) Date: Tue, 7 May 2013 15:15:52 +0200 Subject: Mainline Ubuntu Packages In-Reply-To: References: <5188E079.806@citrin.ru> Message-ID: Thank you Sergey. That helps. Best regards, Christian 2013/5/7 Sergey Budnevitch > > On 7 May2013, at 15:13 , Christian B?nning > wrote: > > > Thanks. But I would just like to know that before I actually start to > deploy something on our development/testing/staging/production environments > ... ;) > > % nginx -V > nginx version: nginx/1.5.0 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module --with-mail > --with-mail_ssl_module --with-file-aio --with-ipv6 > > on ubuntu 12.04 and 12.10 --with-http_spdy_module is added. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue May 7 14:54:00 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 7 May 2013 10:54:00 -0400 Subject: nginx-1.5.0 In-Reply-To: <20130507112909.GZ69760@mdounin.ru> References: <20130507112909.GZ69760@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.0 for Windows http://goo.gl/h2e8w (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, May 7, 2013 at 7:29 AM, Maxim Dounin wrote: > Changes with nginx 1.5.0 07 May > 2013 > > *) Security: a stack-based buffer overflow might occur in a worker > process while handling a specially crafted request, potentially > resulting in arbitrary code execution (CVE-2013-2028); the bug had > appeared in 1.3.9. > Thanks to Greg MacManus, iSIGHT Partners Labs. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Tue May 7 16:23:55 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 7 May 2013 12:23:55 -0400 Subject: Mainline Ubuntu Packages In-Reply-To: References: <5188E079.806@citrin.ru> Message-ID: > 2013/5/7 Sergey Budnevitch > >> >> On 7 May2013, at 15:13 , Christian B?nning >> wrote: >> >> > Thanks. But I would just like to know that before I actually start to >> deploy something on our development/testing/staging/production environments >> ... ;) >> >> % nginx -V >> nginx version: nginx/1.5.0 >> TLS SNI support enabled >> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx >> --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log >> --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid >> --lock-path=/var/run/nginx.lock >> --http-client-body-temp-path=/var/cache/nginx/client_temp >> --http-proxy-temp-path=/var/cache/nginx/proxy_temp >> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx >> --with-http_ssl_module --with-http_realip_module >> --with-http_addition_module --with-http_sub_module --with-http_dav_module >> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module >> --with-http_gzip_static_module --with-http_random_index_module >> --with-http_secure_link_module --with-http_stub_status_module --with-mail >> --with-mail_ssl_module --with-file-aio --with-ipv6 >> >> on ubuntu 12.04 and 12.10 --with-http_spdy_module is added. >> >> >> Hello, Thanks for posting the configure script. Is there a reason why the geoip module is not compiled into the packages? Our site uses this module quite extensively and given how old the Debian packages have become, I was hoping we could switch to the official nginx.org packages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Tue May 7 16:46:56 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 7 May 2013 12:46:56 -0400 Subject: nginx-1.4.1 In-Reply-To: <20130507112946.GD69760@mdounin.ru> References: <20130507112946.GD69760@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.4.1 for Windows http://goo.gl/4kA8O (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ (516) 647-1992 http://twitter.com/kworthington On Tue, May 7, 2013 at 7:29 AM, Maxim Dounin wrote: > Changes with nginx 1.4.1 07 May > 2013 > > *) Security: a stack-based buffer overflow might occur in a worker > process while handling a specially crafted request, potentially > resulting in arbitrary code execution (CVE-2013-2028); the bug had > appeared in 1.3.9. > Thanks to Greg MacManus, iSIGHT Partners Labs. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at waeme.net Tue May 7 17:33:03 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 7 May 2013 21:33:03 +0400 Subject: Mainline Ubuntu Packages In-Reply-To: References: <5188E079.806@citrin.ru> Message-ID: <51A7539C-2E00-48AF-B973-E3103A5F6E2B@waeme.net> On 7 May2013, at 20:23 , Richard Stanway wrote: > Thanks for posting the configure script. You can download srpm and/or debian source package from nginx.org and look at configure options or change/rebuild it for your needs. > Is there a reason why the geoip module is not compiled into the packages? Our site uses this module quite extensively and given how old the Debian packages have become, I was hoping we could switch to the official nginx.org packages. geoip module requires geoip lib, so it will add dependence to nginx package. The policy is to enable all options except those need additional libs. From g.plumb at gmail.com Tue May 7 19:22:15 2013 From: g.plumb at gmail.com (Gee) Date: Tue, 7 May 2013 20:22:15 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: Hi everyone I have made progress (of sorts). After lots of faffing, I got xsp2/4 to work and have logged the following exception: Handling exception type TargetInvocationException Message is Exception has been thrown by the target of an invocation. IsTerminating is set to True System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. 
Server stack trace: at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in :0 at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in :0 at System.Runtime.Serialization.ObjectRecord.LoadData (System.Runtime.Serialization.ObjectManager manager, ISurrogateSelector selector, StreamingContext context) [0x00000] in :0 at System.Runtime.Serialization.ObjectManager.DoFixups () [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (BinaryElement elem, System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] in :0 at System.Runtime.Remoting.RemotingServices.DeserializeCallData (System.Byte[] array) [0x00000] in :0 at (wrapper xdomain-dispatch) System.AppDomain:DoCallBack (object,byte[]&,byte[]&) Exception rethrown at [0]: ---> System.ArgumentException: Couldn't bind to method 'SetHostingEnvironment'. at System.Delegate.GetCandidateMethod (System.Type type, System.Type target, System.String method, BindingFlags bflags, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in :0 at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in :0 at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method) [0x00000] in :0 at System.DelegateSerializationHolder+DelegateEntry.DeserializeDelegate (System.Runtime.Serialization.SerializationInfo info) [0x00000] in :0 at System.DelegateSerializationHolder..ctor (System.Runtime.Serialization.SerializationInfo info, StreamingContext ctx) [0x00000] in :0 at (wrapper managed-to-native) System.Reflection.MonoCMethod:InternalInvoke (System.Reflection.MonoCMethod,object,object[],System.Exception&) at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in :0 --- End of inner exception stack trace --- at (wrapper xdomain-invoke) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at (wrapper remoting-invoke-with-check) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] in :0 at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] in :0 at Mono.WebServer.XSP.Server.RealMain (System.String[] args, Boolean root, IApplicationHost ext_apphost, Boolean quiet) [0x00000] in :0 at Mono.WebServer.XSP.Server.Main (System.String[] args) [0x00000] in :0 Any ideas? Thanks! G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From g.plumb at gmail.com Tue May 7 19:26:23 2013 From: g.plumb at gmail.com (Gee) Date: Tue, 7 May 2013 20:26:23 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: OK, by removing all copies of \bin\System.Web.* from my app's path, I whittled the exception down to this: Handling exception type TypeInitializationException Message is An exception was thrown by the type initializer for System.Web.Configuration.WebConfigurationManager IsTerminating is set to True System.TypeInitializationException: An exception was thrown by the type initializer for System.Web.Configuration.WebConfigurationManager Server stack trace: at System.Web.Hosting.ApplicationHost.SetHostingEnvironment () [0x00000] in :0 at System.AppDomain.DoCallBack (System.CrossAppDomainDelegate callBackDelegate) [0x00000] in :0 at (wrapper remoting-invoke-with-check) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at (wrapper xdomain-dispatch) System.AppDomain:DoCallBack (object,byte[]&,byte[]&) Exception rethrown at [0]: ---> System.MissingMethodException: Method not found: 'System.Configuration.ConfigurationManager.get_ConfigurationFactory'. --- End of inner exception stack trace --- at (wrapper xdomain-invoke) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at (wrapper remoting-invoke-with-check) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] in :0 at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] in :0 at Mono.WebServer.XSP.Server.RealMain (System.String[] args, Boolean root, IApplicationHost ext_apphost, Boolean quiet) [0x00000] in :0 at Mono.WebServer.XSP.Server.Main (System.String[] args) [0x00000] in :0 Thoughts (as I am pretty much out!)? Thanks! G -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.plumb at gmail.com Tue May 7 19:39:10 2013 From: g.plumb at gmail.com (Gee) Date: Tue, 7 May 2013 20:39:10 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: OK, It appears I have been trigger-happy and wandered down a dead-end (my apologies to everyone). I removed System.Configuration.dll from my deployment and XSP ran (and was as happy as it can be). I am now back to the bad gateway problem (502). This tells me that the site's code is not the problem and it is most likely not a Mono issue - which puts the ball back in nginx's path. I tried binding to TCP sockets and kept getting the same problem - so this isn't the issue. Does anyone have a working mono config file(s) they could ping me to look at (as I am pretty much out of ideas now)? Thanks! G -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 7 20:36:17 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 7 May 2013 21:36:17 +0100 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: References: <20130505075806.GP27406@craic.sysops.org> Message-ID: <20130507203617.GT27406@craic.sysops.org> On Sun, May 05, 2013 at 07:04:21AM -0400, zakaria wrote: > Francis Daly Wrote: Hi there, > Thank you for confirm it. > Its nginx bug #321 http://trac.nginx.org/nginx/ticket/321 Ah, good find -- I hadn't spotted that it was a "known issue". 
> location ~ [^/]\.php(/|$) { Just as another alternative, it is probably possible to use named captures in the "location" regex and avoid using fastcgi_split_path_info at all -- with everything up to and including ".php" being used as the script name, and something like "(/.*)?$" being the path info. But what you have here already looks like it should be working, so can probably be left alone. Cheers, f -- Francis Daly francis at daoine.org From g.plumb at gmail.com Tue May 7 23:08:29 2013 From: g.plumb at gmail.com (Gee) Date: Wed, 8 May 2013 00:08:29 +0100 Subject: Mono + Nginx (OpenBSD 5.3) Message-ID: After some more faffing with XSP4 and Web.config, I was at least able to serve a 'Hello World' via TCP. So this proves that nginx and mono can work together on my setup. So in going back to the unix sockets, I still have the same problem. As already suggested, this does look like a permissions issue - which sounds solvable. Having said that, I can't seem to get past it :-( I have chmod '777' (for now) - to no avail. I ran chmod a+rwx as well - to no avail. I even tried a different path - sadly, to no avail! I am sure I am missing something obvious, although I have no idea what it could be... :-( Any advice/help would be hugely appreciated! Thanks G -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 8 02:35:15 2013 From: nginx-forum at nginx.us (jradd) Date: Tue, 07 May 2013 22:35:15 -0400 Subject: proxy pass encoding % problem In-Reply-To: References: Message-ID: <5cecb7067ef22824a57392f60e79357c.NginxMailingListEnglish@forum.nginx.org> Try issuing the directive; proxy_set_header >proxy_set_header Host $host:$proxy_port; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238933,238976#msg-238976 From nhadie at gmail.com Wed May 8 11:29:17 2013 From: nhadie at gmail.com (ron ramos) Date: Wed, 8 May 2013 19:29:17 +0800 Subject: 104: Connection reset by peer Message-ID: Hi All, I understand that this is a generic error, but it has been frustrating trying to solve this issue and i'm not able to find an answer anywhere. basically i have an application which is running fine using apache, but we wanted to try nginx/php5-fpm: some parts of my application has this connection reset issue, sometimes it works but inconsistent. if it does not work, i restart php5-fpm and it will work again, but after sometime it will have the same issue. i'm using: # nginx -v nginx version: nginx/1.4.0 # php5-fpm -v PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) Copyright (c) 1997-2013 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies when i enable debug this is the only thing i can see; [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line 72: received SIGCHLD [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 seconds from start [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), line 421: [pool legacy] child 359398 started [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line 411: event module triggered 1 events i have tried different config changes like using static instead of dynamic..increase max_request..increase child ...increase server..etc. one thing i really need is to identify what is causing that connection peer but logs is not really helping. i tried strace and it still did not show me anything. 
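(Sketch for the fastcgi_split_path_info thread above, spelling out the named-capture alternative Francis describes. This is untested; the capture names and the php-fpm socket path are assumptions, not taken from the original report.

    location ~ ^(?<php_script>.+?[^/]\.php)(?<php_path_info>/.*)?$ {
        try_files $php_script =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$php_script;
        fastcgi_param PATH_INFO       $php_path_info;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed socket path
    }

Everything up to and including ".php" lands in $php_script and is passed as the script name; whatever follows becomes PATH_INFO, so fastcgi_split_path_info is not needed at all.)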
any other way to debug or identify what is causing this issue? totally clueless right now. thank you in advanced. Regards, Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed May 8 11:42:38 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 May 2013 07:42:38 -0400 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: After a very long search on Google (almost 15s, including keyboard input), I found astonishing help, based on the information you provided. About the FPM children burying, I found a resource on StackOverflow linking back to the Nginx forum (ML archive): http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 It seems, at first glance, that the children bury and its respawn works as intended, if you reach the requests limit number. I dunno how to check that is the case though. Your log entries seem to be silent about that. ?M?y 2 cents, --- *B. R.* On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: > > Hi All, > > I understand that this is a generic error, but it has been frustrating > trying to solve this issue and i'm not able to find an answer anywhere. > basically i have an application which is running fine using apache, but we > wanted to try nginx/php5-fpm: > > some parts of my application has this connection reset issue, sometimes it > works but inconsistent. > if it does not work, i restart php5-fpm and it will work again, but after > sometime it will have the same issue. > > i'm using: > > # nginx -v > nginx version: nginx/1.4.0 > > # php5-fpm -v > PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) > Copyright (c) 1997-2013 The PHP Group > Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies > > when i enable debug this is the only thing i can see; > > [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line > 72: received SIGCHLD > [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), > line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 > seconds from start > [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), > line 421: [pool legacy] child 359398 started > [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line > 411: event module triggered 1 events > > > i have tried different config changes like using static instead of > dynamic..increase max_request..increase child ...increase server..etc. > > one thing i really need is to identify what is causing that connection > peer but logs is not really helping. i tried strace and it still did not > show me anything. > > any other way to debug or identify what is causing this issue? totally > clueless right now. > > thank you in advanced. > > Regards, > Ron > > > > > > > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Wed May 8 11:58:20 2013 From: nhadie at gmail.com (ron ramos) Date: Wed, 8 May 2013 19:58:20 +0800 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: hi, i've seen that info as well ( yes i tried searching for answers as mentioned ) and it did not help me unfortunately. i've increased from 500 to 1000 to 10000. increase children servers etc. regards, ron On Wed, May 8, 2013 at 7:42 PM, B.R. 
wrote: > After a very long search on Google (almost 15s, including keyboard input), > I found astonishing help, based on the information you provided. > > About the FPM children burying, I found a resource on StackOverflow > linking back to the Nginx forum (ML archive): > > http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 > > It seems, at first glance, that the children bury and its respawn works as > intended, if you reach the requests limit number. I dunno how to check that > is the case though. Your log entries seem to be silent about that. > > My 2 cents, > --- > *B. R.* > > > On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: > >> >> Hi All, >> >> I understand that this is a generic error, but it has been frustrating >> trying to solve this issue and i'm not able to find an answer anywhere. >> basically i have an application which is running fine using apache, but we >> wanted to try nginx/php5-fpm: >> >> some parts of my application has this connection reset issue, sometimes >> it works but inconsistent. >> if it does not work, i restart php5-fpm and it will work again, but after >> sometime it will have the same issue. >> >> i'm using: >> >> # nginx -v >> nginx version: nginx/1.4.0 >> >> # php5-fpm -v >> PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) >> Copyright (c) 1997-2013 The PHP Group >> Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies >> >> when i enable debug this is the only thing i can see; >> >> [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line >> 72: received SIGCHLD >> [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), >> line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 >> seconds from start >> [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), >> line 421: [pool legacy] child 359398 started >> [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line >> 411: event module triggered 1 events >> >> >> i have tried different config changes like using static instead of >> dynamic..increase max_request..increase child ...increase server..etc. >> >> one thing i really need is to identify what is causing that connection >> peer but logs is not really helping. i tried strace and it still did not >> show me anything. >> >> any other way to debug or identify what is causing this issue? totally >> clueless right now. >> >> thank you in advanced. >> >> Regards, >> Ron >> >> >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 8 12:06:40 2013 From: nginx-forum at nginx.us (Stylw) Date: Wed, 08 May 2013 08:06:40 -0400 Subject: 403 Forbidden error with Mediawiki Message-ID: I'm having a frustrating experience trying to get nginx to work properly with mediawiki's pretty urls. For some reason my configuration won?t detect that my mediawiki installation is installed in a folder above root (said folder is called /w/) and appears to be throwing out 403 forbidden errors when trying to access my site. 
The script itself works properly, and I have no problem with accessing the wiki installation my using mydomain.com/w/ - but if I don?t manually input that /w/ folder in the website address nginx will throw a 403 forbidden error at me. I should also mention that root is empty apart from that folder, which is why I suspect nginx is throwing that 403 message at me. I don?t want to flood people?s emails with my giant nginx configuration, so I?ll include links to a pastebin dump which has them. I?ve also removed any personal information which could hint at where my site is hosted and/or what the domain is. Nginx error log: http://pastebin.com/BZGWurYZ Niginx configuration: http://pastebin.com/mFdHbEzm Mediawiki settings: http://pastebin.com/nQuaZBdQ I hope somebody can figure out where the fault is - I've been ripping out my hair over it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239004,239004#msg-239004 From reallfqq-nginx at yahoo.fr Wed May 8 12:11:20 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 May 2013 08:11:20 -0400 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: Do you have some information on the number of concurrent connections? Since you already played with the 'max_requests' parameter too, it seems not to be the reason of the trouble. But better be safe than sorry. If your actual number of connections/second is greater than what the configuration is expecting then you'll have your answer. But you don't provide information allowing to decide on this, and it seems you tried blind changes only. Could you provide your input requests rate? --- *B. R.* On Wed, May 8, 2013 at 7:58 AM, ron ramos wrote: > hi, > > i've seen that info as well ( yes i tried searching for answers as > mentioned ) and it did not help me unfortunately. > i've increased from 500 to 1000 to 10000. increase children servers etc. > > regards, > ron > > > On Wed, May 8, 2013 at 7:42 PM, B.R. wrote: > >> After a very long search on Google (almost 15s, including keyboard >> input), I found astonishing help, based on the information you provided. >> >> About the FPM children burying, I found a resource on StackOverflow >> linking back to the Nginx forum (ML archive): >> >> http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 >> >> It seems, at first glance, that the children bury and its respawn works >> as intended, if you reach the requests limit number. I dunno how to check >> that is the case though. Your log entries seem to be silent about that. >> >> My 2 cents, >> --- >> *B. R.* >> >> >> On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: >> >>> >>> Hi All, >>> >>> I understand that this is a generic error, but it has been frustrating >>> trying to solve this issue and i'm not able to find an answer anywhere. >>> basically i have an application which is running fine using apache, but we >>> wanted to try nginx/php5-fpm: >>> >>> some parts of my application has this connection reset issue, sometimes >>> it works but inconsistent. >>> if it does not work, i restart php5-fpm and it will work again, but >>> after sometime it will have the same issue. 
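(Sketch for the Mediawiki 403 question above: with only the /w/ folder under the document root, a request for the bare site root has no index file to serve and directory listing is disabled by default, hence the 403. One minimal way to handle it, assuming the wiki should be the front page, is an exact-match location added to the existing server block:

    location = / {
        return 301 /w/;
    }

The /w/ path is the one mentioned in the post; everything else about the real configuration lives in the pastebin dumps and is not reproduced here.)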
>>> >>> i'm using: >>> >>> # nginx -v >>> nginx version: nginx/1.4.0 >>> >>> # php5-fpm -v >>> PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) >>> Copyright (c) 1997-2013 The PHP Group >>> Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies >>> >>> when i enable debug this is the only thing i can see; >>> >>> [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line >>> 72: received SIGCHLD >>> [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), >>> line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 >>> seconds from start >>> [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), >>> line 421: [pool legacy] child 359398 started >>> [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line >>> 411: event module triggered 1 events >>> >>> >>> i have tried different config changes like using static instead of >>> dynamic..increase max_request..increase child ...increase server..etc. >>> >>> one thing i really need is to identify what is causing that connection >>> peer but logs is not really helping. i tried strace and it still did not >>> show me anything. >>> >>> any other way to debug or identify what is causing this issue? totally >>> clueless right now. >>> >>> thank you in advanced. >>> >>> Regards, >>> Ron >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Wed May 8 12:29:19 2013 From: nhadie at gmail.com (ron ramos) Date: Wed, 8 May 2013 20:29:19 +0800 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: Hi All, my apologies, but i think the issue is that php is crashing. was not looking at the syslog: [2617248.127349] php5-fpm[504727] general protection ip:6787a9 sp:7fff22c5c6f0 error:0 in php5-fpm[400000+700000] gdb backtrace shows #0 0x00000000006787a9 in ?? () #1 0x0000000000678940 in ?? () #2 0x00000000006ad82e in zend_hash_destroy () #3 0x000000000069e4ab in _zval_dtor_func () #4 0x000000000069031a in _zval_ptr_dtor () #5 0x00000000006ad808 in zend_hash_destroy () #6 0x000000000069e4ab in _zval_dtor_func () #7 0x000000000069031a in _zval_ptr_dtor () #8 0x00007fe9a25c056f in apc_free_class_entry_after_execution (src=0x215e228) at /tmp/pear/temp/APC/apc_compile.c:1992 #9 0x00007fe9a25c3ad6 in apc_deactivate () at /tmp/pear/temp/APC/apc_main.c:948 #10 apc_request_shutdown () at /tmp/pear/temp/APC/apc_main.c:1042 #11 0x00007fe9a25b85b5 in zm_deactivate_apc (type=, module_number=) at /tmp/pear/temp/APC/php_apc.c:407 #12 0x00000000006a6d94 in ?? () #13 0x000000000063efd5 in php_request_shutdown () #14 0x000000000042d5b9 in ?? () #15 0x00007fe9a2dbe76d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6 #16 0x000000000042e231 in _start () apc seems to be dying. will try to check what's wrong. again my apologies on disturbing the list. regards, Ron On Wed, May 8, 2013 at 8:11 PM, B.R. wrote: > Do you have some information on the number of concurrent connections? 
> Since you already played with the 'max_requests' parameter too, it seems > not to be the reason of the trouble. But better be safe than sorry. > > If your actual number of connections/second is greater than what the > configuration is expecting then you'll have your answer. But you don't > provide information allowing to decide on this, and it seems you tried > blind changes only. > Could you provide your input requests rate? > --- > *B. R.* > > > On Wed, May 8, 2013 at 7:58 AM, ron ramos wrote: > >> hi, >> >> i've seen that info as well ( yes i tried searching for answers as >> mentioned ) and it did not help me unfortunately. >> i've increased from 500 to 1000 to 10000. increase children servers etc. >> >> regards, >> ron >> >> >> On Wed, May 8, 2013 at 7:42 PM, B.R. wrote: >> >>> After a very long search on Google (almost 15s, including keyboard >>> input), I found astonishing help, based on the information you provided. >>> >>> About the FPM children burying, I found a resource on StackOverflow >>> linking back to the Nginx forum (ML archive): >>> >>> http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 >>> >>> It seems, at first glance, that the children bury and its respawn works >>> as intended, if you reach the requests limit number. I dunno how to check >>> that is the case though. Your log entries seem to be silent about that. >>> >>> My 2 cents, >>> --- >>> *B. R.* >>> >>> >>> On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: >>> >>>> >>>> Hi All, >>>> >>>> I understand that this is a generic error, but it has been frustrating >>>> trying to solve this issue and i'm not able to find an answer anywhere. >>>> basically i have an application which is running fine using apache, but we >>>> wanted to try nginx/php5-fpm: >>>> >>>> some parts of my application has this connection reset issue, sometimes >>>> it works but inconsistent. >>>> if it does not work, i restart php5-fpm and it will work again, but >>>> after sometime it will have the same issue. >>>> >>>> i'm using: >>>> >>>> # nginx -v >>>> nginx version: nginx/1.4.0 >>>> >>>> # php5-fpm -v >>>> PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) >>>> Copyright (c) 1997-2013 The PHP Group >>>> Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies >>>> >>>> when i enable debug this is the only thing i can see; >>>> >>>> [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line >>>> 72: received SIGCHLD >>>> [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), >>>> line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 >>>> seconds from start >>>> [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), >>>> line 421: [pool legacy] child 359398 started >>>> [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line >>>> 411: event module triggered 1 events >>>> >>>> >>>> i have tried different config changes like using static instead of >>>> dynamic..increase max_request..increase child ...increase server..etc. >>>> >>>> one thing i really need is to identify what is causing that connection >>>> peer but logs is not really helping. i tried strace and it still did not >>>> show me anything. >>>> >>>> any other way to debug or identify what is causing this issue? totally >>>> clueless right now. >>>> >>>> thank you in advanced. 
>>>> >>>> Regards, >>>> Ron >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Wed May 8 12:31:13 2013 From: nhadie at gmail.com (ron ramos) Date: Wed, 8 May 2013 20:31:13 +0800 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: Hi B.R. To answer your question i'm only sending max 10 connection to NGINX on my load balancer. and even if i took it out of the load balancer and there is only me accessing the server, it still happens. thanks! Regards, Ron On Wed, May 8, 2013 at 8:11 PM, B.R. wrote: > Do you have some information on the number of concurrent connections? > Since you already played with the 'max_requests' parameter too, it seems > not to be the reason of the trouble. But better be safe than sorry. > > If your actual number of connections/second is greater than what the > configuration is expecting then you'll have your answer. But you don't > provide information allowing to decide on this, and it seems you tried > blind changes only. > Could you provide your input requests rate? > --- > *B. R.* > > > On Wed, May 8, 2013 at 7:58 AM, ron ramos wrote: > >> hi, >> >> i've seen that info as well ( yes i tried searching for answers as >> mentioned ) and it did not help me unfortunately. >> i've increased from 500 to 1000 to 10000. increase children servers etc. >> >> regards, >> ron >> >> >> On Wed, May 8, 2013 at 7:42 PM, B.R. wrote: >> >>> After a very long search on Google (almost 15s, including keyboard >>> input), I found astonishing help, based on the information you provided. >>> >>> About the FPM children burying, I found a resource on StackOverflow >>> linking back to the Nginx forum (ML archive): >>> >>> http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 >>> >>> It seems, at first glance, that the children bury and its respawn works >>> as intended, if you reach the requests limit number. I dunno how to check >>> that is the case though. Your log entries seem to be silent about that. >>> >>> My 2 cents, >>> --- >>> *B. R.* >>> >>> >>> On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: >>> >>>> >>>> Hi All, >>>> >>>> I understand that this is a generic error, but it has been frustrating >>>> trying to solve this issue and i'm not able to find an answer anywhere. >>>> basically i have an application which is running fine using apache, but we >>>> wanted to try nginx/php5-fpm: >>>> >>>> some parts of my application has this connection reset issue, sometimes >>>> it works but inconsistent. >>>> if it does not work, i restart php5-fpm and it will work again, but >>>> after sometime it will have the same issue. 
>>>> >>>> i'm using: >>>> >>>> # nginx -v >>>> nginx version: nginx/1.4.0 >>>> >>>> # php5-fpm -v >>>> PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) >>>> Copyright (c) 1997-2013 The PHP Group >>>> Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies >>>> >>>> when i enable debug this is the only thing i can see; >>>> >>>> [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), line >>>> 72: received SIGCHLD >>>> [08-May-2013 18:08:32.545884] WARNING: pid 358359, fpm_children_bury(), >>>> line 252: [pool legacy] child 358423 exited with code 3 after 342.958315 >>>> seconds from start >>>> [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), >>>> line 421: [pool legacy] child 359398 started >>>> [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), line >>>> 411: event module triggered 1 events >>>> >>>> >>>> i have tried different config changes like using static instead of >>>> dynamic..increase max_request..increase child ...increase server..etc. >>>> >>>> one thing i really need is to identify what is causing that connection >>>> peer but logs is not really helping. i tried strace and it still did not >>>> show me anything. >>>> >>>> any other way to debug or identify what is causing this issue? totally >>>> clueless right now. >>>> >>>> thank you in advanced. >>>> >>>> Regards, >>>> Ron >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed May 8 12:33:07 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 May 2013 08:33:07 -0400 Subject: 403 Forbidden error with Mediawiki In-Reply-To: References: Message-ID: Why would Nginx automatically redirect mydomain.com do mydomain.com/w/ if you don't tell it to do so? Your error log shows that you are trying to list the files of your root directory (so it means you don't have an index file to be served there), which is forbidden by default Nginx configuration to avoid unwanted directory listings (which is a great idea, as opposed to Apache's defaults). Try to include a redirection (see the rewrite directive configuration) so that all your requests to your domain root are forwarded to the /w/ subdirectory. --- *B. R.* On Wed, May 8, 2013 at 8:06 AM, Stylw wrote: > I'm having a frustrating experience trying to get nginx to work properly > with mediawiki's pretty urls. For some reason my configuration won?t detect > that my mediawiki installation is installed in a folder above root (said > folder is called /w/) and appears to be throwing out 403 forbidden errors > when trying to access my site. > > The script itself works properly, and I have no problem with accessing the > wiki installation my using mydomain.com/w/ - but if I don?t manually input > that /w/ folder in the website address nginx will throw a 403 forbidden > error at me. 
I should also mention that root is empty apart from that > folder, which is why I suspect nginx is throwing that 403 message at me. > > I don?t want to flood people?s emails with my giant nginx configuration, so > I?ll include links to a pastebin dump which has them. I?ve also removed any > personal information which could hint at where my site is hosted and/or > what > the domain is. > > Nginx error log: http://pastebin.com/BZGWurYZ > Niginx configuration: http://pastebin.com/mFdHbEzm > Mediawiki settings: http://pastebin.com/nQuaZBdQ > > I hope somebody can figure out where the fault is - I've been ripping out > my > hair over it. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239004,239004#msg-239004 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed May 8 12:35:03 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 May 2013 08:35:03 -0400 Subject: 104: Connection reset by peer In-Reply-To: References: Message-ID: You were right to seek for answers somewhere else than configuration, then... ;o) Glad you found you answer. I hope you'll find your way around that crash. --- *B. R.* On Wed, May 8, 2013 at 8:31 AM, ron ramos wrote: > Hi B.R. > > To answer your question i'm only sending max 10 connection to NGINX on my > load balancer. and even if i took it out of the load balancer > and there is only me accessing the server, it still happens. thanks! > > Regards, > Ron > > > > > On Wed, May 8, 2013 at 8:11 PM, B.R. wrote: > >> Do you have some information on the number of concurrent connections? >> Since you already played with the 'max_requests' parameter too, it seems >> not to be the reason of the trouble. But better be safe than sorry. >> >> If your actual number of connections/second is greater than what the >> configuration is expecting then you'll have your answer. But you don't >> provide information allowing to decide on this, and it seems you tried >> blind changes only. >> Could you provide your input requests rate? >> --- >> *B. R.* >> >> >> On Wed, May 8, 2013 at 7:58 AM, ron ramos wrote: >> >>> hi, >>> >>> i've seen that info as well ( yes i tried searching for answers as >>> mentioned ) and it did not help me unfortunately. >>> i've increased from 500 to 1000 to 10000. increase children servers etc. >>> >>> regards, >>> ron >>> >>> >>> On Wed, May 8, 2013 at 7:42 PM, B.R. wrote: >>> >>>> After a very long search on Google (almost 15s, including keyboard >>>> input), I found astonishing help, based on the information you provided. >>>> >>>> About the FPM children burying, I found a resource on StackOverflow >>>> linking back to the Nginx forum (ML archive): >>>> >>>> http://stackoverflow.com/questions/2551185/tons-of-fpm-children-bury-in-php-fpm-log-php-5-2-13php-fpm-0-5-13-nginx-0 >>>> >>>> It seems, at first glance, that the children bury and its respawn works >>>> as intended, if you reach the requests limit number. I dunno how to check >>>> that is the case though. Your log entries seem to be silent about that. >>>> >>>> My 2 cents, >>>> --- >>>> *B. R.* >>>> >>>> >>>> On Wed, May 8, 2013 at 7:29 AM, ron ramos wrote: >>>> >>>>> >>>>> Hi All, >>>>> >>>>> I understand that this is a generic error, but it has been frustrating >>>>> trying to solve this issue and i'm not able to find an answer anywhere. 
>>>>> basically i have an application which is running fine using apache, but we >>>>> wanted to try nginx/php5-fpm: >>>>> >>>>> some parts of my application has this connection reset issue, >>>>> sometimes it works but inconsistent. >>>>> if it does not work, i restart php5-fpm and it will work again, but >>>>> after sometime it will have the same issue. >>>>> >>>>> i'm using: >>>>> >>>>> # nginx -v >>>>> nginx version: nginx/1.4.0 >>>>> >>>>> # php5-fpm -v >>>>> PHP 5.4.14-1~precise+1 (fpm-fcgi) (built: Apr 11 2013 17:18:51) >>>>> Copyright (c) 1997-2013 The PHP Group >>>>> Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies >>>>> >>>>> when i enable debug this is the only thing i can see; >>>>> >>>>> [08-May-2013 18:08:32.545758] DEBUG: pid 358359, fpm_got_signal(), >>>>> line 72: received SIGCHLD >>>>> [08-May-2013 18:08:32.545884] WARNING: pid 358359, >>>>> fpm_children_bury(), line 252: [pool legacy] child 358423 exited with code >>>>> 3 after 342.958315 seconds from start >>>>> [08-May-2013 18:08:32.548943] NOTICE: pid 358359, fpm_children_make(), >>>>> line 421: [pool legacy] child 359398 started >>>>> [08-May-2013 18:08:32.549023] DEBUG: pid 358359, fpm_event_loop(), >>>>> line 411: event module triggered 1 events >>>>> >>>>> >>>>> i have tried different config changes like using static instead of >>>>> dynamic..increase max_request..increase child ...increase server..etc. >>>>> >>>>> one thing i really need is to identify what is causing that connection >>>>> peer but logs is not really helping. i tried strace and it still did not >>>>> show me anything. >>>>> >>>>> any other way to debug or identify what is causing this issue? totally >>>>> clueless right now. >>>>> >>>>> thank you in advanced. >>>>> >>>>> Regards, >>>>> Ron >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 8 12:48:00 2013 From: nginx-forum at nginx.us (Stylw) Date: Wed, 08 May 2013 08:48:00 -0400 Subject: 403 Forbidden error with Mediawiki In-Reply-To: References: Message-ID: Well, I feel pretty stupid now. I used an automatic script to generate that config and I just assumed that the rewrite would be included. Thanks & Oops. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239004,239011#msg-239011 From griscom at suitable.com Wed May 8 13:32:44 2013 From: griscom at suitable.com (Daniel Griscom) Date: Wed, 8 May 2013 09:32:44 -0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? Message-ID: I'm an nginx newbie, and need use use it as a front end for a website that also handles websocket connections. 
I have the configuration set up so that requests to a specific URI match a location section, which then proxies the request to the websocket back end server, and it all works. (Very cool.) However, I was wondering if, rather than detecting requests to a specific location, I could proxy all "ws://" or "wss:// requests, independent of the URI being requested. Is there a way to proxy all requests with a given protocol? Thanks, Dan -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From nginx-forum at nginx.us Wed May 8 14:35:41 2013 From: nginx-forum at nginx.us (jonas) Date: Wed, 08 May 2013 10:35:41 -0400 Subject: nginx security advisory (CVE-2013-2028) In-Reply-To: <20130507113021.GH69760@mdounin.ru> References: <20130507113021.GH69760@mdounin.ru> Message-ID: <5119a60a61f0373e0bcf527b8240fb98.NginxMailingListEnglish@forum.nginx.org> Hello, I use nginx 1.1.19, latest version from ubuntu repository. Anyone knows if Is it secure to use the latest verison from ubuntu repository? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238946,239015#msg-239015 From info at pkern.at Wed May 8 14:42:35 2013 From: info at pkern.at (Patrik Kernstock) Date: Wed, 8 May 2013 16:42:35 +0200 Subject: AW: nginx security advisory (CVE-2013-2028) In-Reply-To: <5119a60a61f0373e0bcf527b8240fb98.NginxMailingListEnglish@forum.nginx.org> References: <20130507113021.GH69760@mdounin.ru> <5119a60a61f0373e0bcf527b8240fb98.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0ed701ce4bfa$48836180$d98a2480$@pkern.at> Hello, the security leak is only affected in nginx 1.3.9 and 1.4.0. So just find out which version is currently in the ubuntu repository and decide if you can update or not. Kind regards, Patrik -----Urspr?ngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von jonas Gesendet: Mittwoch, 08. Mai 2013 16:36 An: nginx at nginx.org Betreff: Re: nginx security advisory (CVE-2013-2028) Hello, I use nginx 1.1.19, latest version from ubuntu repository. Anyone knows if Is it secure to use the latest verison from ubuntu repository? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238946,239015#msg-239015 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Wed May 8 14:50:25 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 May 2013 10:50:25 -0400 Subject: nginx security advisory (CVE-2013-2028) In-Reply-To: <0ed701ce4bfa$48836180$d98a2480$@pkern.at> References: <20130507113021.GH69760@mdounin.ru> <5119a60a61f0373e0bcf527b8240fb98.NginxMailingListEnglish@forum.nginx.org> <0ed701ce4bfa$48836180$d98a2480$@pkern.at> Message-ID: I would add to Patrick answer the following: - 1.1.19 is a development version. IMHO it is always better to prefer stable in production environments. 1.2.8 or 1.4.1 depending on your needs/requirements. - Check the changes from 1.2 or 1.4 to decide what is better for you (there are only few security alerts, most of entries are bugfixes) - Consider using nginx packages (available for Ubuntu), which will keep you nginx updates to the most recent version of your choice (stable or 'mainline' which I suppose is development? or maybe old-stable 1.2.8?) via aptitude Hope that'll help --- *B. 
R.* On Wed, May 8, 2013 at 10:42 AM, Patrik Kernstock wrote: > Hello, > > the security leak is only affected in nginx 1.3.9 and 1.4.0. So just find > out which version is currently in the ubuntu repository and decide if you > can update or not. > > Kind regards, > Patrik > > -----Urspr?ngliche Nachricht----- > Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag > von > jonas > Gesendet: Mittwoch, 08. Mai 2013 16:36 > An: nginx at nginx.org > Betreff: Re: nginx security advisory (CVE-2013-2028) > > Hello, > > I use nginx 1.1.19, latest version from ubuntu repository. > Anyone knows if Is it secure to use the latest verison from ubuntu > repository? > > thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,238946,239015#msg-239015 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kay.hayen at gmail.com Wed May 8 15:57:00 2013 From: kay.hayen at gmail.com (Kay Hayen) Date: Wed, 8 May 2013 17:57:00 +0200 Subject: Migrating existing Apache config Message-ID: Hello, due to low memory on my virtual machine, I was researching migrating away from Apache. What kind of surprises me, and pardon my ignorance, I didn't lookup much beyond the FAQ, and a few Wiki links, and web searches. I was hoping to find scripts that migrate my Apache configuration files into Nginx configuration files, so I could be up and running kind of immediately. There are a few virtual domains, and a bunch of existing configuration, that I would like to migrate, e.g. URL rewrites, caching policies, proxies, etc. that I would not want to dive into, just now, as it's not exactly trivial, to do all of that, check it all, and to not make a lot of mistakes. Don't get me wrong, I don't think I shouldn't learn the Nginx configuration format, it's just that I would as a starting point, like to use a working one that resembles what I had. So if it's possible, I would welcome pointers... thanks in advance! Yours, Kay From andrejaenisch at googlemail.com Wed May 8 16:05:47 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Wed, 8 May 2013 18:05:47 +0200 Subject: Migrating existing Apache config In-Reply-To: References: Message-ID: 2013/5/8 Kay Hayen : > Hello, > I was hoping to find scripts that migrate my Apache configuration files into Nginx configuration files, so I could be up and running kind of immediately. Some tools were reported on the mailing list earlier, but I haven't checked them. Like with all automatically tools you should take a look at it before putting the result in production. I just picked a .htaccess converter for you: http://winginx.com/htaccess But I would really recommend you to spend a weekend (as of tomorrow until Sunday? ^^ At least here in Germany we have a bank holiday tomorrow ?) to write a simple config, test it and expand it to your needs. It will help you debugging later, I guess. Regards, Andre From miguelmclara at gmail.com Wed May 8 16:14:16 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Wed, 8 May 2013 17:14:16 +0100 Subject: Migrating existing Apache config In-Reply-To: References: Message-ID: I was just about to suggest the same tool to convert .htaccess.... 
but has warned in the website it does not convert everything and also it does not check syntax errors, so It might be even worst. I would prefer to do it "from scratch"... You might find some useful help here: http://forum.nginx.org/list.php?9 Good luck! On Wed, May 8, 2013 at 5:05 PM, Andre Jaenisch wrote: > 2013/5/8 Kay Hayen : > > Hello, > > > I was hoping to find scripts that migrate my Apache configuration files > into Nginx configuration files, so I could be up and running kind of > immediately. > > Some tools were reported on the mailing list earlier, but I haven't > checked them. > Like with all automatically tools you should take a look at it before > putting the result in production. > I just picked a .htaccess converter for you: http://winginx.com/htaccess > > But I would really recommend you to spend a weekend (as of tomorrow > until Sunday? ^^ At least here in Germany we have a bank holiday > tomorrow ?) to write a simple config, test it and expand it to your > needs. It will help you debugging later, I guess. > > Regards, Andre > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldernetwork at gmail.com Wed May 8 17:17:23 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 8 May 2013 10:17:23 -0700 Subject: ngx_event_openssl_stapling.c vs. openssl version Message-ID: There seems to be a version dependency on opnessl in ngnix1.4. I can build it on one platform but not the other where there's slightly older version of openssl header files. Specifically, ngx_event_openssl_stapling.c references some constants which are only defined in newer version of tls1.h. It would be nice to make the code working across different versions of libraries. Is that a known issue? - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 8 17:35:50 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 May 2013 21:35:50 +0400 Subject: ngx_event_openssl_stapling.c vs. openssl version In-Reply-To: References: Message-ID: <20130508173550.GE69760@mdounin.ru> Hello! On Wed, May 08, 2013 at 10:17:23AM -0700, Alder Network wrote: > There seems to be a version dependency on opnessl in ngnix1.4. > I can build it on one platform but not the other where there's slightly > older version of openssl header files. Specifically, > ngx_event_openssl_stapling.c references some constants which are > only defined in newer version of tls1.h. It would be nice to make the > code working across different versions of libraries. Is that a known issue? Are you seeing build failures? If yes, with which version of OpenSSL? It is expected to always compile fine, degrading to "ssl_stapling ignored, not supported" warnings if OpenSSL is too old. -- Maxim Dounin http://nginx.org/en/donation.html From aldernetwork at gmail.com Wed May 8 17:51:00 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 8 May 2013 10:51:00 -0700 Subject: ngx_event_openssl_stapling.c vs. openssl version In-Reply-To: <20130508173550.GE69760@mdounin.ru> References: <20130508173550.GE69760@mdounin.ru> Message-ID: It's compilation error. I don't have the exact error message at hand, but it's referencing a macro that's not defined in newer versions of tls1.h. but not in slightly older versions of tls1.h. On Wed, May 8, 2013 at 10:35 AM, Maxim Dounin wrote: > Hello! 
> > On Wed, May 08, 2013 at 10:17:23AM -0700, Alder Network wrote: > > > There seems to be a version dependency on opnessl in ngnix1.4. > > I can build it on one platform but not the other where there's slightly > > older version of openssl header files. Specifically, > > ngx_event_openssl_stapling.c references some constants which are > > only defined in newer version of tls1.h. It would be nice to make the > > code working across different versions of libraries. Is that a known > issue? > > Are you seeing build failures? If yes, with which version of > OpenSSL? > > It is expected to always compile fine, degrading to "ssl_stapling > ignored, not supported" warnings if OpenSSL is too old. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 8 18:42:51 2013 From: nginx-forum at nginx.us (locojohn) Date: Wed, 08 May 2013 14:42:51 -0400 Subject: Migrating existing Apache config In-Reply-To: References: Message-ID: <6679c6cbf2df29bddca0712409a9fcb0.NginxMailingListEnglish@forum.nginx.org> I would suggest not to use any ready-to-use-conversion-tools but rather learn the differences and convert the hosts manually one by one. It took me a couple of days to convert all of our sites to nginx and two weeks more to realize the differences of Apache and Nginx handling very specific cases, such as PHP environment in particular scenario where old PHP scripts would rely on SCRIPT_URI/SCRIPT_URL that is not available in nginx, or scripts using PATH_INFO/PATH_TRANSLATED, and these have to be correctly set. If you do not want to spend valuable time for learning and adopting, you may always rely on reliable users with experience ;) Andrejs loco (at) andrews.lv Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239021,239031#msg-239031 From g.plumb at gmail.com Wed May 8 19:49:50 2013 From: g.plumb at gmail.com (Gee) Date: Wed, 8 May 2013 20:49:50 +0100 Subject: Mono + nginx (OpenBSD 5.3) Message-ID: Hi all So this is what I tried tonight: For sanity's sake: chown root /tmp/fastcgi-mono-socket chmod 777 /tmp/fastcgi-mono-socket To see permissions on the socket ls -la /tmp/fastcgi-mono-socket: Output: srwxrwxrwx 1 root wheel 0 May 8 16:01 fastcgi-mono-socket grep wheel /etc/group To see contents of group 'wheel': cat /etc/group | grep --regex "^xxx:.*" | awk -F: '{print $4}' Output: root I'm pretty much out of ideas now! Does anyone have any ideas? Thanks G -------------- next part -------------- An HTML attachment was scrubbed... URL: From kay.hayen at gmail.com Wed May 8 19:57:07 2013 From: kay.hayen at gmail.com (Kay Hayen) Date: Wed, 8 May 2013 21:57:07 +0200 Subject: Migrating existing Apache config In-Reply-To: <6679c6cbf2df29bddca0712409a9fcb0.NginxMailingListEnglish@forum.nginx.org> References: <6679c6cbf2df29bddca0712409a9fcb0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, > I would suggest not to use any ready-to-use-conversion-tools but rather > learn the differences and convert the hosts manually one by one. 
It took me > a couple of days to convert all of our sites to nginx and two weeks more to > realize the differences of Apache and Nginx handling very specific cases, > such as PHP environment in particular scenario where old PHP scripts would > rely on SCRIPT_URI/SCRIPT_URL that is not available in nginx, or scripts > using PATH_INFO/PATH_TRANSLATED, and these have to be correctly set. I am glad to say that I use only static pages with Nikola, and that no php is on my site at all. I need reverse proxies though, and virtual hosts for git, and so on. > If you do not want to spend valuable time for learning and adopting, you may > always rely on reliable users with experience ;) I am not so sure, if it's really worth it. But I think I am sold on trying it out anyway, and will learn how to do things then. Lets see, how that works out. Yours, Kay From aldernetwork at gmail.com Wed May 8 22:35:58 2013 From: aldernetwork at gmail.com (Alder Network) Date: Wed, 8 May 2013 15:35:58 -0700 Subject: Websocket proxy Message-ID: This must have been discussed before but I am new to nginx and this forum. I am upgrading to 1.4 to use its websocket proxy feature. Say I have a websocket server running at port 81, so I want to forward all websocket packets to port 81, and process the rest at port 80. Somehow the following conf doesn't work, anything missing? server { listen [::]:80 ipv6only=off; location / { regular_http_processing_directive; if ($http_upgrade = "websocket") { proxy_pass http://localhost:81; } proxy_http_version 1.1; proxy_set_header Upgrade websocket; proxy_set_header Connection upgrade; } } - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.black at openquery.com Thu May 9 00:38:54 2013 From: daniel.black at openquery.com (Daniel Black) Date: Thu, 9 May 2013 10:38:54 +1000 (EST) Subject: nginx-1.4 proxy requests being continious In-Reply-To: <327985727.5700.1368059263332.JavaMail.root@zimbra.lentz.com.au> Message-ID: <1211555988.5702.1368059934009.JavaMail.root@zimbra.lentz.com.au> A request for /img/file_doesnt_exist.jpg results in the backend server (192.168.129.90) getting continuous requests for the same file (which doesn't exist there either so 404 each time), while the original requester waits and nginx keeps asking the backend the same. I'm using the nginx-1.4.1 from the debian squeeze repository. Is there a better way do to this config? The aim for for all web servers to have the same config so a resource that aren't synced yet still get served a response if it exists somewhere but without the requests ending up in a circular loop. 
My current, hopefully not too cut down, config is: upstream imgweb_other { server 192.168.129.90; server 173.230.136.6 backup; } server { proxy_read_timeout 15; proxy_connect_timeout 3; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404; location ~ ^/img/(.*) { expires 2592000; add_header Cache-Control public; alias /var/www/live_site_resources/$1; error_page 404 = @imgweb_other; } location @imgweb_other { # we only want to fallback once so use user_agent as a flag if ( $http_user_agent = IMGWEB ) { return 404; } proxy_pass http://imgweb_other; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header User-Agent IMGWEB; } } From nginx-forum at nginx.us Thu May 9 00:50:57 2013 From: nginx-forum at nginx.us (Rancor) Date: Wed, 08 May 2013 20:50:57 -0400 Subject: Howto set geoip_country for IPv4 and IPv6 databases? In-Reply-To: References: <20130430211709.GM19561@lo0.su> Message-ID: Hi, > Seems that the libgeoip1 in debian squeeze in version > 1.4.7~beta6+dfsg-1 doesn't support this. Will test this again when > wheezy is released at the next weekend. just want to let you know that this is now working on debian wheezy with libgeoip1 version 1.4.8+dfsg-3 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,235108,239043#msg-239043 From edho at myconan.net Thu May 9 03:32:42 2013 From: edho at myconan.net (Edho Arief) Date: Thu, 9 May 2013 12:32:42 +0900 Subject: Location support for multiple URLs In-Reply-To: References: Message-ID: On Sun, Apr 7, 2013 at 12:34 AM, Typlo wrote: > > Hello, > I would like to use the FastCGI cache feature of nginx for my web application. But I need to use it only for a set of URL. > > I would like to use it for the following locations: > > http://domain.com/index.php?act=detail&ID=[ANY ID HERE] > > Example: > http://domain.com/index.php?act=detail&id=o2Zimg > > And so on. > What should I place in the location directive to cache only those URLs? > I can't figure it out on the nginx wiki. > > Also, I would like to replace the Cache Control and Pragma headers set by my PHP application, can I use add_headers directive? Or I would have to add a 3rd party module, like more_http_headers? I use nginx from PPA(Ubuntu), so for adding more_http_headers I would have to build it :/ > There's good reason people build urls like http://domain.com/detail/2Zimg instead of what you're using (namely, it's difficult to configure its caching). (this email probably doesn't help, yes) -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Thu May 9 09:01:17 2013 From: nginx-forum at nginx.us (sravanakk) Date: Thu, 09 May 2013 05:01:17 -0400 Subject: How to read all client chain certificates from fastcgi request. 
Message-ID: <2ed2c11800b84dee073b50abfc8e9f66.NginxMailingListEnglish@forum.nginx.org> Hi, I configured my nginx with openssl as below: server { listen 443; server_name localhost; ssl on; ssl_certificate ssl_certificate_key ssl_client_certificate ssl_verify_client on; ssl_verify_depth 3; ssl_session_cache shared:SSL:64k; ssl_session_timeout 10m; ssl_ciphers HIGH:!aNULL:!MD5; } >From client code I am sending client certificate as bellow RootCA -> IntermediateCA -> Client By using curl : curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM"); curl_easy_setopt(curl,CURLOPT_SSLCERT,"ClientCom.crt"); // This file having three certificates if (pPassphrase) curl_easy_setopt(curl, CURLOPT_KEYPASSWD, pPassphrase); curl_easy_setopt(curl, CURLOPT_SSLKEYTYPE, "PEM"); curl_easy_setopt(curl,CURLOPT_SSLKEY,"ClientKey.pem"); curl_easy_setopt(curl,CURLOPT_CAINFO,"RootCA.crt"); My Server Code: As soon as nginx server gets any request from client, below call would be triggered from my Server as below. ReadTLSSessionData (FCGX_Request *request) { FCGX_GetParam("SSL_CLIENT_RAW_CERT", request->envp); FCGX_GetParam("SSL_CLIENT_CERT", request->envp); } Here I am receiving only one certificate from these environmental variables. But, I want to read all three certificates which client sending in PEM format. Then I have to verify the extensions. What is the environmental variable which gives all certificates from client??? Below is my fastcgi.conf file: fastcgi_param SSL_CLIENT_CERT $ssl_client_cert; fastcgi_param SSL_CLIENT_RAW_CERT $ssl_client_raw_cert; fastcgi_param SSL_CLIENT_S_DN $ssl_client_s_dn; fastcgi_param SSL_CLIENT_I_DN $ssl_client_i_dn; fastcgi_param SSL_CLIENT_SERIAL $ssl_client_serial; Can anybody help me in this aspect!!! Regards, Sravana Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239048,239048#msg-239048 From nginx-forum at nginx.us Thu May 9 09:37:26 2013 From: nginx-forum at nginx.us (lupin) Date: Thu, 09 May 2013 05:37:26 -0400 Subject: Disable automatic urlencoding with proxy_pass Message-ID: Hi, Any way to disable urlencode on nginx with proxy_pass? Following is my config and as suggested on google I already removed the "/" URI part of of this "proxy_pass http://127.0.0.1:8090;" Any idea how to proceed with this? server { ### server port and name ### listen 10.0.0.1:443 ssl; server_name svn.server; ### SSL cert files ### ssl_certificate /etc/pki/tls/certs/svn.server.cert; ssl_certificate_key /etc/pki/tls/private/svn.server.key; ssl_session_cache shared:SSL:10m; location / { proxy_pass http://127.0.0.1:8090; proxy_redirect off; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } ### Set headers #### proxy_set_header Host $http_host:$proxy_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header Destination $fixed_destination; } } Thanks lupin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239049,239049#msg-239049 From appa at perusio.net Thu May 9 13:09:37 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 9 May 2013 15:09:37 +0200 Subject: Location support for multiple URLs In-Reply-To: References: Message-ID: You can do that. 
At the http level set a map directive like: map $arg_act$arg_ID $no_cache { default 1; ~*detail[[:alnum:]]+ 0; # assuming the ID is alphanumeric } Then add: fastcgi_cache_bypass $no_cache; fastcgi_no_cache $no_cache; to your FastCGI cache configuration. As per the headers I don't quite understand what you want to do. Do you want the FastCGI cache to ignore the Cache-Control headers set by the application? ----appa On Sat, Apr 6, 2013 at 5:34 PM, Typlo wrote: > Hello, > I would like to use the FastCGI cache feature of nginx for my web > application. But I need to use it only for a set of URL. > > I would like to use it for the following locations: > > *http://domain.com/index.php?act=detail&ID=[ANY ID HERE]* > > Example: > http://domain.com/index.php?act=detail&id=o2Zimg > > And so on. > What should I place in the location directive to cache only those URLs? > I can't figure it out on the nginx wiki. > > Also, I would like to replace the Cache Control and Pragma headers set by > my PHP application, can I use add_headers directive? Or I would have to add > a 3rd party module, like more_http_headers? I use nginx from PPA(Ubuntu), > so for adding more_http_headers I would have to build it :/ > > Greetings from Antarctica. > Thanks. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Thu May 9 18:19:32 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 9 May 2013 20:19:32 +0200 Subject: Websocket proxy In-Reply-To: References: Message-ID: At the http level: map $http_upgrade $connection_upgrade { default upgrade; '' close; } map $connection_upgrade $proxy_upstream_port { upgrade 81; close 80; } Then at the location do: location / { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_pass http://localhost:$proxy_upstream_port; } Partly taken from http://nginx.org/en/docs/http/websocket.html. ----appa On Thu, May 9, 2013 at 12:35 AM, Alder Network wrote: > This must have been discussed before but I am new to nginx > and this forum. > > I am upgrading to 1.4 to use its websocket proxy feature. > Say I have a websocket server running at port 81, so I want > to forward all websocket packets to port 81, and process > the rest at port 80. > > Somehow the following conf doesn't work, anything missing? > > server { > listen [::]:80 ipv6only=off; > > location / { > regular_http_processing_directive; > > if ($http_upgrade = "websocket") { > proxy_pass http://localhost:81; > } > > proxy_http_version 1.1; > proxy_set_header Upgrade websocket; > proxy_set_header Connection upgrade; > } > } > > - Alder > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From griscom at suitable.com Thu May 9 18:45:59 2013 From: griscom at suitable.com (Daniel Griscom) Date: Thu, 9 May 2013 14:45:59 -0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? In-Reply-To: References: Message-ID: ... bump? (thanks, Dan) At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: >I'm an nginx newbie, and need use use it as a front end for a >website that also handles websocket connections. 
I have the >configuration set up so that requests to a specific URI match a >location section, which then proxies the request to the websocket >back end server, and it all works. (Very cool.) > >However, I was wondering if, rather than detecting requests to a >specific location, I could proxy all "ws://" or "wss:// requests, >independent of the URI being requested. > >Is there a way to proxy all requests with a given protocol? > > >Thanks, >Dan > >-- >Daniel T. Griscom griscom at suitable.com >Suitable Systems http://www.suitable.com/ >1 Centre Street, Suite 204 (781) 665-0053 >Wakefield, MA 01880-2400 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From ussray_00 at yahoo.com Thu May 9 19:55:22 2013 From: ussray_00 at yahoo.com (Russ Lavoy) Date: Thu, 9 May 2013 12:55:22 -0700 (PDT) Subject: HTTP Basic Auth question In-Reply-To: <20130501214555.GG27406@craic.sysops.org> References: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> <20130501214555.GG27406@craic.sysops.org> Message-ID: <1368129322.39046.YahooMailNeo@web161002.mail.bf1.yahoo.com> Hello, Sorry for the long delay. ? I have tried the following configuration which does not seem to work at all. proxy_hide_header Authorization; proxy_set_header Authorization "$remote_user";| I can still sniff the traffic on lo and get the base64 user:pass. ?The interesting thing is I do not see the Authorization header being sent to the django app whatsoever. ?Is there a way I can totally remove the header even at the loop back level so it is not able to get intercepted? Thanks, Russ ----- Original Message ----- From: Francis Daly To: nginx at nginx.org Cc: Sent: Wednesday, May 1, 2013 4:45 PM Subject: Re: HTTP Basic Auth question On Wed, May 01, 2013 at 01:17:41PM -0400, B.R. wrote: Hi there, > To pass the nginx user to a fastcgi backend (PHP), I have to explicitly > specify it using the following directive: > fastcgi_param? MY_USER? ? ? $remote_user; > > I suppose you can do the same with proxy_pass? That's how I'd do it -- probably proxy_set_header if the python application is accessed using proxy_pass. > I dunno how to remove an automatically forwarded parameter though... Maybe > overwriting it with an empty string? The password is in the http header Authorization, so using proxy_hide_header to avoid sending that should be enough. > On Wed, May 1, 2013 at 10:26 AM, Russ Lavoy wrote: > > I am running nginx as a reverse proxy to a python application.? I am > > wondering how I would be able to pass ONLY the user account and not the > > password.? Can this be done? As above: how are the user and pass currently sent? It will be by "fastcgi_pass" or "proxy_pass" or something similar. Use the matching "_hide_header" directive on the correct header to avoid sending it. How do you want the user to be sent? Use the variable $remote_user and the matching "_set_header" or "_param" directive to send the provided username. ??? f -- Francis Daly? ? ? ? francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Thu May 9 21:14:04 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 10 May 2013 09:14:04 +1200 Subject: Proxying based on protocol (e.g. "ws"/"wss")? 
In-Reply-To: References: Message-ID: <1368134044.19159.100.camel@steve-new> The scheme is available as... $scheme On Thu, 2013-05-09 at 14:45 -0400, Daniel Griscom wrote: > ... bump? > > (thanks, > Dan) > > > At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: > >I'm an nginx newbie, and need use use it as a front end for a > >website that also handles websocket connections. I have the > >configuration set up so that requests to a specific URI match a > >location section, which then proxies the request to the websocket > >back end server, and it all works. (Very cool.) > > > >However, I was wondering if, rather than detecting requests to a > >specific location, I could proxy all "ws://" or "wss:// requests, > >independent of the URI being requested. > > > >Is there a way to proxy all requests with a given protocol? > > > > > >Thanks, > >Dan > > > >-- > >Daniel T. Griscom griscom at suitable.com > >Suitable Systems http://www.suitable.com/ > >1 Centre Street, Suite 204 (781) 665-0053 > >Wakefield, MA 01880-2400 > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz MSN: steve at greengecko.co.nz Skype: sholdowa From aldernetwork at gmail.com Thu May 9 21:51:21 2013 From: aldernetwork at gmail.com (Alder Network) Date: Thu, 9 May 2013 14:51:21 -0700 Subject: Websocket proxy In-Reply-To: References: Message-ID: Thanks for the recipe. Just tried and now I am not able to get regular HTTP working on port 80. Has anybody ever got nginx websocket proxy working? Thanks, - Alder On Thu, May 9, 2013 at 11:19 AM, Ant?nio P. P. Almeida wrote: > At the http level: > > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } > > map $connection_upgrade $proxy_upstream_port { > upgrade 81; > close 80; > } > > Then at the location do: > > location / { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > proxy_pass http://localhost:$proxy_upstream_port; > } > > Partly taken from http://nginx.org/en/docs/http/websocket.html. > > ----appa > > > > On Thu, May 9, 2013 at 12:35 AM, Alder Network wrote: > >> This must have been discussed before but I am new to nginx >> and this forum. >> >> I am upgrading to 1.4 to use its websocket proxy feature. >> Say I have a websocket server running at port 81, so I want >> to forward all websocket packets to port 81, and process >> the rest at port 80. >> >> Somehow the following conf doesn't work, anything missing? >> >> server { >> listen [::]:80 ipv6only=off; >> >> location / { >> regular_http_processing_directive; >> >> if ($http_upgrade = "websocket") { >> proxy_pass http://localhost:81; >> } >> >> proxy_http_version 1.1; >> proxy_set_header Upgrade websocket; >> proxy_set_header Connection upgrade; >> } >> } >> >> - Alder >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu May 9 22:24:46 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 9 May 2013 23:24:46 +0100 Subject: HTTP Basic Auth question In-Reply-To: <1368129322.39046.YahooMailNeo@web161002.mail.bf1.yahoo.com> References: <1367418360.71691.YahooMailNeo@web161003.mail.bf1.yahoo.com> <20130501214555.GG27406@craic.sysops.org> <1368129322.39046.YahooMailNeo@web161002.mail.bf1.yahoo.com> Message-ID: <20130509222446.GU27406@craic.sysops.org> On Thu, May 09, 2013 at 12:55:22PM -0700, Russ Lavoy wrote: Hi there, > I have tried the following configuration which does not seem to work at all. > > proxy_hide_header Authorization; > > proxy_set_header Authorization "$remote_user";| What did you do; what did you see; what did you expect to see? > I can still sniff the traffic on lo and get the base64 user:pass. ?The interesting thing is I do not see the Authorization header being sent to the django app whatsoever. ?Is there a way I can totally remove the header even at the loop back level so it is not able to get intercepted? > I don't understand what it is that you are trying to do, that you have not yet done. You seem to say that you do see the Authorization header and that you don't see the Authorization header, so I presume I'm misreading something. Can you provide a simple nginx configuration that I can use to replicate whatever the problem is? f -- Francis Daly francis at daoine.org From francis at daoine.org Thu May 9 22:43:15 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 9 May 2013 23:43:15 +0100 Subject: Disable automatic urlencoding with proxy_pass In-Reply-To: References: Message-ID: <20130509224315.GV27406@craic.sysops.org> On Thu, May 09, 2013 at 05:37:26AM -0400, lupin wrote: Hi there, > Any way to disable urlencode on nginx with proxy_pass? Following is my > config and as suggested on google I already removed the "/" URI part of > of this "proxy_pass http://127.0.0.1:8090;" Why do you think that urlencode is or is not disabled? When I test with a similar (but http-only) config, I see the normalised (= decoded) url passed when I use "proxy_pass http://127.0.0.1:10080/;", and the original (= not-decoded) url passed when I use "proxy_pass http://127.0.0.1:10080;", exactly as per http://nginx.org/r/proxy_pass. What precisely do you do to see something other than that? f -- Francis Daly francis at daoine.org From matthieu.tourne at gmail.com Thu May 9 23:02:43 2013 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Thu, 9 May 2013 16:02:43 -0700 Subject: Disable automatic urlencoding with proxy_pass In-Reply-To: References: Message-ID: On Thu, May 9, 2013 at 2:37 AM, lupin wrote: > Hi, > > Any way to disable urlencode on nginx with proxy_pass? Following is my > config and as suggested on google I already removed the "/" URI part of > of this "proxy_pass http://127.0.0.1:8090;" > > Any idea how to proceed with this? > > I think I might have ran into something similar before. Internally Nginx has 2 ways to deal with this directive : proxy_pass http://127.0.0.1:8090; Either you've never touched the internal uri (no internal redirects for instance) and your outgoing uri is going to be exactly the same as the incoming uri. Internally r->valid_unparsed_uri is set to 1. Or your internal has been modified, and the outgoing uri used by proxy_pass will be re-normalized. The re-normalization process might turn up differences between incoming and outgoing uri. 
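A minimal sketch of those two cases, reusing the backend address from the
original config (the location blocks themselves are only illustrative):

# Case 1: no URI part on proxy_pass and no rewrites in this location --
# the original escaped request URI is sent to the backend unchanged
# (internally r->valid_unparsed_uri stays set):
location / {
    proxy_pass http://127.0.0.1:8090;
}

# Case 2: a URI part on proxy_pass (or any change to the URI) makes nginx
# send the decoded, re-normalized URI, which may differ from what came in:
location / {
    proxy_pass http://127.0.0.1:8090/;
}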
Another way to force Nginx to send out exactly what you want is to do something like this : map '' $seed_uri { default $request_uri; } proxy_pass http://127.0.0.1:8090$seed_uri; This way you're forcing what comes in to be exactly what comes out, but this comes with a price and you won't be able to use internal redirection and will have to modify the content of $seed_uri manually, using if and regexps for instance. Hope that helps, Matthieu. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanxe at gmx.net Fri May 10 03:13:42 2013 From: stefanxe at gmx.net (Stefan Xenon) Date: Fri, 10 May 2013 11:13:42 +0800 Subject: proxy doesn't cache Message-ID: <518C65E6.8070400@gmx.net> Hi! I want to use nginx as a caching proxy in front of an OCSP responder. The OCSP requests are transmitted via HTTP POST. Hence, I configured nginx as follows: proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m; server { server_name localhost; location / { proxy_pass http://213.154.225.237:80; #ocsp.cacert.org proxy_cache my-cache; proxy_cache_methods POST; proxy_cache_key "$scheme$proxy_host$uri$request_body"; proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } ) I can access the OCSP responder through nginx and responses are received as expected - no issue. The problem is that nginx doesn't cache the responses. Note that OCSP nonces are *not* being sent as part of the request. Using Wireshark and nginx' debug log, I verified that all my requests are identical. How to configure nginx that it caches the responses? Note, I use the following command for testing: openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url http://localhost/ -serial Thanks a lot for your help! Stefan From mdounin at mdounin.ru Fri May 10 09:26:18 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 May 2013 13:26:18 +0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? In-Reply-To: <1368134044.19159.100.camel@steve-new> References: <1368134044.19159.100.camel@steve-new> Message-ID: <20130510092618.GJ69760@mdounin.ru> Hello! On Fri, May 10, 2013 at 09:14:04AM +1200, Steve Holdoway wrote: > The scheme is available as... $scheme Yes, but WebSocket protocol uses http for handshake. So the scheme will be either "http" or "https". WebSocket requests can be identified based on Upgrade header, i.e. $http_upgrade variable. > > On Thu, 2013-05-09 at 14:45 -0400, Daniel Griscom wrote: > > ... bump? > > > > (thanks, > > Dan) > > > > > > At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: > > >I'm an nginx newbie, and need use use it as a front end for a > > >website that also handles websocket connections. I have the > > >configuration set up so that requests to a specific URI match a > > >location section, which then proxies the request to the websocket > > >back end server, and it all works. (Very cool.) > > > > > >However, I was wondering if, rather than detecting requests to a > > >specific location, I could proxy all "ws://" or "wss:// requests, > > >independent of the URI being requested. > > > > > >Is there a way to proxy all requests with a given protocol? > > > > > > > > >Thanks, > > >Dan > > > > > >-- > > >Daniel T. 
Griscom griscom at suitable.com > > >Suitable Systems http://www.suitable.com/ > > >1 Centre Street, Suite 204 (781) 665-0053 > > >Wakefield, MA 01880-2400 > > > > > >_______________________________________________ > > >nginx mailing list > > >nginx at nginx.org > > >http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From griscom at suitable.com Fri May 10 18:35:35 2013 From: griscom at suitable.com (Daniel Griscom) Date: Fri, 10 May 2013 14:35:35 -0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? In-Reply-To: <20130510092618.GJ69760@mdounin.ru> References: <1368134044.19159.100.camel@steve-new> <20130510092618.GJ69760@mdounin.ru> Message-ID: That's great information, but now I need to figure out how to selectively proxy to my websocket backend when $http_upgrade is "websocket". I see the following choices: 1) Have nginx listen at port 80, and proxy all traffic to port XXXX if $http_upgrade is "websocket", or port YYYY if not. Then set up nginx to handle http traffic at port YYYY, and have my websocket backend handle websocket traffic at port XXXX. 2) Investigate the much-maligned "if" statement. ... any other choices? Thoughts? Thanks, Dan At 1:26 PM +0400 5/10/13, Maxim Dounin wrote: >Hello! > >On Fri, May 10, 2013 at 09:14:04AM +1200, Steve Holdoway wrote: > >> The scheme is available as... $scheme > >Yes, but WebSocket protocol uses http for handshake. So the >scheme will be either "http" or "https". WebSocket requests can >be identified based on Upgrade header, i.e. $http_upgrade >variable. > >> >> On Thu, 2013-05-09 at 14:45 -0400, Daniel Griscom wrote: >> > ... bump? >> > >> > (thanks, >> > Dan) >> > >> > >> > At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: >> > >I'm an nginx newbie, and need use use it as a front end for a >> > >website that also handles websocket connections. I have the >> > >configuration set up so that requests to a specific URI match a >> > >location section, which then proxies the request to the websocket >> > >back end server, and it all works. (Very cool.) >> > > >> > >However, I was wondering if, rather than detecting requests to a >> > >specific location, I could proxy all "ws://" or "wss:// requests, >> > >independent of the URI being requested. >> > > >> > >Is there a way to proxy all requests with a given protocol? >> > > >> > > >> > >Thanks, >> > >Dan >> > > >> > >-- >> > >Daniel T. Griscom griscom at suitable.com >> > >Suitable Systems http://www.suitable.com/ >> > >1 Centre Street, Suite 204 (781) 665-0053 >> > >Wakefield, MA 01880-2400 >> > > >> > >_______________________________________________ >> > >nginx mailing list >> > >nginx at nginx.org >> > >http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > >> >> -- >> Steve Holdoway BSc(Hons) MNZCS >> http://www.greengecko.co.nz >> MSN: steve at greengecko.co.nz >> Skype: sholdowa >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > >-- >Maxim Dounin >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Daniel T. 
Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From aldernetwork at gmail.com Sat May 11 01:37:31 2013 From: aldernetwork at gmail.com (Alder Network) Date: Fri, 10 May 2013 18:37:31 -0700 Subject: async notification in Nginx Message-ID: I am writing a Niginx module that from several of worker threads that I created by own, I need to notify Nginx's main thread so the main thread (single-threaded nginx) doesn't need to poll periodically in timer context. What would be the reliable way to do that in Nginx? Anybody advice would be appreciated. - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat May 11 03:52:38 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 10 May 2013 23:52:38 -0400 Subject: Debian package Message-ID: Hello, Is Wheezy supported for Nginx? Would it be possible to use role-based package selection for the Nginx APT source? That means using: deb http://nginx.org/packages/debian/ *stable* nginx deb-src http://nginx.org/packages/debian/ *stable* nginx Instead of: deb http://nginx.org/packages/debian/ *squeeze* nginx deb-src http://nginx.org/packages/debian/ *squeeze* nginx Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.black at openquery.com Sat May 11 06:13:38 2013 From: daniel.black at openquery.com (Daniel Black) Date: Sat, 11 May 2013 16:13:38 +1000 (EST) Subject: nginx-1.4 proxy requests being continious In-Reply-To: <1211555988.5702.1368059934009.JavaMail.root@zimbra.lentz.com.au> Message-ID: <1521922801.5736.1368252818089.JavaMail.root@zimbra.lentz.com.au> Just to prove I'm not making it up (even though I'm having a hard time replicating it). log_format extended '$remote_addr - $remote_user [$time_local] ' '"$request" $status $request_time $body_bytes_sent ' '$upstream_cache_status $upstream_addr $upstream_status $upstream_response_time' '"$http_referer" "$http_user_agent"'; length of log line 3412217 characters (is that a record?) 58.169.18.35 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.1" 499 100.820 0 - 192.168. 129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80, 192.168.129.90:80 (many many pages)... 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404, 404..........., - 0.014, 0.001, 0.000, 0.001, 0.001, 0.000, 0.001, 0.001, 0.000, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001.. 
, - "-" "Wget/1.13.4 (linux-gnu)" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB" 192.168.131.254 - - [08/May/2013:19:58:13 -0400] "GET //img/covers/medium/587/9781844454581.jpg HTTP/1.0" 404 0.000 169 "-" "IMGWEB ----- Original Message ----- > A request for /img/file_doesnt_exist.jpg results in the backend server > (192.168.129.90) getting continuous requests for the same file (which > doesn't exist there either so 404 each time), while the original > requester waits and nginx keeps asking the backend the same. > > I'm using the nginx-1.4.1 from the debian squeeze repository. > > Is there a better way do to this config? The aim for for all web > servers to have the same config so a resource that aren't synced yet > still get served a response if it exists somewhere but without the > requests ending up in a circular loop. > > My current, hopefully not too cut down, config is: > > upstream imgweb_other { > server 192.168.129.90; > server 173.230.136.6 backup; > } > > server { > > proxy_read_timeout 15; > proxy_connect_timeout 3; > proxy_next_upstream error timeout invalid_header http_500 http_502 > http_503 http_504 http_404; > > location ~ ^/img/(.*) > { > expires 2592000; > add_header Cache-Control public; > alias /var/www/live_site_resources/$1; > error_page 404 = @imgweb_other; > } > > location @imgweb_other { > # we only want to fallback once so use user_agent as a flag > if ( $http_user_agent = IMGWEB ) { > return 404; > } > proxy_pass http://imgweb_other; > proxy_set_header Host $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header User-Agent IMGWEB; > } > > } > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- -- Daniel Black, Engineer @ Open Query (http://openquery.com) Remote expertise & maintenance for MySQL/MariaDB server environments. 
From mdounin at mdounin.ru Sat May 11 14:55:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 11 May 2013 18:55:58 +0400 Subject: proxy doesn't cache In-Reply-To: <518C65E6.8070400@gmx.net> References: <518C65E6.8070400@gmx.net> Message-ID: <20130511145558.GO69760@mdounin.ru> Hello! On Fri, May 10, 2013 at 11:13:42AM +0800, Stefan Xenon wrote: > Hi! > I want to use nginx as a caching proxy in front of an OCSP responder. > The OCSP requests are transmitted via HTTP POST. > > Hence, I configured nginx as follows: > > proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m > max_size=1000m inactive=600m; > server { > server_name localhost; > location / { > proxy_pass http://213.154.225.237:80; #ocsp.cacert.org > proxy_cache my-cache; > proxy_cache_methods POST; > proxy_cache_key "$scheme$proxy_host$uri$request_body"; > proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > } > ) > > I can access the OCSP responder through nginx and responses are received > as expected - no issue. The problem is that nginx doesn't cache the > responses. Note that OCSP nonces are *not* being sent as part of the > request. Using Wireshark and nginx' debug log, I verified that all my > requests are identical. How to configure nginx that it caches the responses? > > Note, I use the following command for testing: > openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url > http://localhost/ -serial You configuration doesn't contain proxy_cache_valid (see http://nginx.org/r/proxy_cache_valid), and in the same time via proxy_ignore_headers it ignores all headers which may be used to set response validity based on response headers. That is, no responses will be cached with the configuration above. You probably want to add something like proxy_cache_valid 200 1d; to your configuration. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat May 11 15:00:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 11 May 2013 19:00:51 +0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? In-Reply-To: References: <1368134044.19159.100.camel@steve-new> <20130510092618.GJ69760@mdounin.ru> Message-ID: <20130511150051.GP69760@mdounin.ru> Hello! On Fri, May 10, 2013 at 02:35:35PM -0400, Daniel Griscom wrote: > That's great information, but now I need to figure out how to > selectively proxy to my websocket backend when $http_upgrade is > "websocket". I see the following choices: > > 1) Have nginx listen at port 80, and proxy all traffic to port XXXX > if $http_upgrade is "websocket", or port YYYY if not. Then set up > nginx to handle http traffic at port YYYY, and have my websocket > backend handle websocket traffic at port XXXX. > > 2) Investigate the much-maligned "if" statement. > > > ... any other choices? Thoughts? I would recommend using URI-based distinction instead (and location{} blocks as a result). This would be most natural solution from nginx point of view. > > > Thanks, > Dan > > > At 1:26 PM +0400 5/10/13, Maxim Dounin wrote: > >Hello! > > > >On Fri, May 10, 2013 at 09:14:04AM +1200, Steve Holdoway wrote: > > > >> The scheme is available as... $scheme > > > >Yes, but WebSocket protocol uses http for handshake. So the > >scheme will be either "http" or "https". WebSocket requests can > >be identified based on Upgrade header, i.e. $http_upgrade > >variable. > > > >> > >> On Thu, 2013-05-09 at 14:45 -0400, Daniel Griscom wrote: > >> > ... bump? 
> >> > > >> > (thanks, > >> > Dan) > >> > > >> > > >> > At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: > >> > >I'm an nginx newbie, and need use use it as a front end for a > >> > >website that also handles websocket connections. I have the > >> > >configuration set up so that requests to a specific URI match a > >> > >location section, which then proxies the request to the websocket > >> > >back end server, and it all works. (Very cool.) > >> > > > >> > >However, I was wondering if, rather than detecting requests to a > >> > >specific location, I could proxy all "ws://" or "wss:// requests, > >> > >independent of the URI being requested. > >> > > > >> > >Is there a way to proxy all requests with a given protocol? > >> > > > >> > > > >> > >Thanks, > >> > >Dan > >> > > > >> > >-- > >> > >Daniel T. Griscom griscom at suitable.com > >> > >Suitable Systems http://www.suitable.com/ > >> > >1 Centre Street, Suite 204 (781) 665-0053 > >> > >Wakefield, MA 01880-2400 > >> > > > >> > >_______________________________________________ > >> > >nginx mailing list > >> > >nginx at nginx.org > >> > >http://mailman.nginx.org/mailman/listinfo/nginx > >> > > >> > > >> > >> -- > >> Steve Holdoway BSc(Hons) MNZCS > >> http://www.greengecko.co.nz > >> MSN: steve at greengecko.co.nz > >> Skype: sholdowa > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > >-- > >Maxim Dounin > >http://nginx.org/en/donation.html > > > >_______________________________________________ > >nginx mailing list > >nginx at nginx.org > >http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Daniel T. Griscom griscom at suitable.com > Suitable Systems http://www.suitable.com/ > 1 Centre Street, Suite 204 (781) 665-0053 > Wakefield, MA 01880-2400 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/en/donation.html From griscom at suitable.com Sat May 11 15:05:18 2013 From: griscom at suitable.com (Daniel Griscom) Date: Sat, 11 May 2013 11:05:18 -0400 Subject: Proxying based on protocol (e.g. "ws"/"wss")? In-Reply-To: <20130511150051.GP69760@mdounin.ru> References: <1368134044.19159.100.camel@steve-new> <20130510092618.GJ69760@mdounin.ru> <20130511150051.GP69760@mdounin.ru> Message-ID: Thanks. I've been coming to that conclusion myself. Take care, Dan At 7:00 PM +0400 5/11/13, Maxim Dounin wrote: >Hello! > >On Fri, May 10, 2013 at 02:35:35PM -0400, Daniel Griscom wrote: > >> That's great information, but now I need to figure out how to >> selectively proxy to my websocket backend when $http_upgrade is >> "websocket". I see the following choices: >> >> 1) Have nginx listen at port 80, and proxy all traffic to port XXXX >> if $http_upgrade is "websocket", or port YYYY if not. Then set up >> nginx to handle http traffic at port YYYY, and have my websocket >> backend handle websocket traffic at port XXXX. >> >> 2) Investigate the much-maligned "if" statement. >> >> >> ... any other choices? Thoughts? > >I would recommend using URI-based distinction instead (and >location{} blocks as a result). This would be most natural >solution from nginx point of view. > >> >> >> Thanks, >> Dan >> >> >> At 1:26 PM +0400 5/10/13, Maxim Dounin wrote: >> >Hello! >> > >> >On Fri, May 10, 2013 at 09:14:04AM +1200, Steve Holdoway wrote: >> > >> >> The scheme is available as... 
$scheme >> > >> >Yes, but WebSocket protocol uses http for handshake. So the >> >scheme will be either "http" or "https". WebSocket requests can >> >be identified based on Upgrade header, i.e. $http_upgrade >> >variable. >> > >> >> >> >> On Thu, 2013-05-09 at 14:45 -0400, Daniel Griscom wrote: >> >> > ... bump? >> >> > >> >> > (thanks, >> >> > Dan) >> >> > >> >> > >> >> > At 9:32 AM -0400 5/8/13, Daniel Griscom wrote: >> >> > >I'm an nginx newbie, and need use use it as a front end for a >> >> > >website that also handles websocket connections. I have the >> >> > >configuration set up so that requests to a specific URI match a >> >> > >location section, which then proxies the request to the websocket >> >> > >back end server, and it all works. (Very cool.) >> >> > > >> >> > >However, I was wondering if, rather than detecting requests to a >> >> > >specific location, I could proxy all "ws://" or "wss:// requests, >> >> > >independent of the URI being requested. >> >> > > >> >> > >Is there a way to proxy all requests with a given protocol? >> >> > > >> >> > > >> >> > >Thanks, >> >> > >Dan >> >> > > >> >> > >-- >> >> > >Daniel T. Griscom griscom at suitable.com >> >> > >Suitable Systems http://www.suitable.com/ >> >> > >1 Centre Street, Suite 204 (781) 665-0053 >> >> > >Wakefield, MA 01880-2400 >> >> > > >> >> > >_______________________________________________ >> >> > >nginx mailing list >> >> > >nginx at nginx.org >> >> > >http://mailman.nginx.org/mailman/listinfo/nginx >> >> > >> >> > >> >> >> >> -- >> >> Steve Holdoway BSc(Hons) MNZCS >> >> http://www.greengecko.co.nz >> >> MSN: steve at greengecko.co.nz >> >> Skype: sholdowa >> >> >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> > >> >-- >> >Maxim Dounin >> >http://nginx.org/en/donation.html >> > >> >_______________________________________________ >> >nginx mailing list >> >nginx at nginx.org >> >http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> -- >> Daniel T. Griscom griscom at suitable.com >> Suitable Systems http://www.suitable.com/ >> 1 Centre Street, Suite 204 (781) 665-0053 >> Wakefield, MA 01880-2400 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > >-- >Maxim Dounin >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From cnst++ at FreeBSD.org Sat May 11 18:53:48 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Sat, 11 May 2013 11:53:48 -0700 Subject: is static access_log within if-in-location better than dynamic global? Message-ID: <518E93BC.6050004@FreeBSD.org> Hi, According to http://nginx.org/r/access_log, the access_log directive cannot be used within an `if` in `server`, only within an `if` in `location`. Indeed, it doesn't actually work within an `if` directly within `server`, but variables can be used to seemingly achieve identical result. Or is it? Use of variables is documented to result in repeated open/close of the log-files, potentially alleviated by a cache that is turned off by default. 
As such, if I only want to have three `access_log` files in total, and most of the requests in question are served from a single `location`, would I be much better off in using `access_log` within an "if in location" a couple of times, or using a single global `access_log` with a single variable that could have only a few possible values, through global `if`s, potentially with `open_log_file_cache`? I guess using `access_log` without variables within "if in location" would be much better, but I just want to confirm that it's the case. Best regards, Constantine. From sb at waeme.net Sat May 11 22:19:53 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Sun, 12 May 2013 02:19:53 +0400 Subject: Debian package In-Reply-To: References: Message-ID: On 11 May2013, at 07:52 , B.R. wrote: > Hello, > > Is Wheezy supported for Nginx? Not yet. I plan to build packages for wheezy and ubuntu 13.04 in a week or two. > > Would it be possible to use role-based package selection for the Nginx APT source? No, I see potential problems here: nginx binary depends on certain libraries versions, for example nginx for wheezy will be built with spdy module and thus will require openssl >= 1.0.1. But what will be happened when testing becames stable and stable - oldstable? As far as I understand it may result in someone will install wheezy package on squeeze and it will not start at all. > That means using: > deb http://nginx.org/packages/debian/ stable > nginx > deb-src > http://nginx.org/packages/debian/ stable nginx > Instead of: > deb http://nginx.org/packages/debian/ squeeze > nginx > deb-src > http://nginx.org/packages/debian/ squeeze nginx > Thanks, > --- > B. R. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From g.plumb at gmail.com Sat May 11 22:55:22 2013 From: g.plumb at gmail.com (Gee) Date: Sat, 11 May 2013 23:55:22 +0100 Subject: Nginx + Mono (OpenBSD 5.3) Message-ID: It's looking like he permissions issue may be related to chroot (OpenBSD appears to run nginx in 'jail'). This all seems sane - but unfortunately, my *nix-foo isn't strong enough to work out which is the best path for my socket (file) to live. Can anyone help me with this? Thanks! G -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat May 11 23:43:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 12 May 2013 03:43:00 +0400 Subject: nginx-1.4 proxy requests being continious In-Reply-To: <1521922801.5736.1368252818089.JavaMail.root@zimbra.lentz.com.au> References: <1211555988.5702.1368059934009.JavaMail.root@zimbra.lentz.com.au> <1521922801.5736.1368252818089.JavaMail.root@zimbra.lentz.com.au> Message-ID: <20130511234300.GT69760@mdounin.ru> Hello! On Sat, May 11, 2013 at 04:13:38PM +1000, Daniel Black wrote: [...] > > A request for /img/file_doesnt_exist.jpg results in the backend server > > (192.168.129.90) getting continuous requests for the same file (which > > doesn't exist there either so 404 each time), while the original > > requester waits and nginx keeps asking the backend the same. > > > > I'm using the nginx-1.4.1 from the debian squeeze repository. [...] > > server 173.230.136.6 backup; [...] > > proxy_next_upstream error timeout invalid_header http_500 http_502 > > http_503 http_504 http_404; What you describe looks very familiar - there was such a bug which manifested itself with backup servers and proxy_next_upstream http_404. 
It was fixed in 1.3.0/1.2.1 though: *) Bugfix: nginx might loop infinitely over backends if the "proxy_next_upstream" directive with the "http_404" parameter was used and there were backup servers specified in an upstream block. Are you sure you are using 1.4.1 on your frontend (note: it's usually not enough to check version of nginx binary on disk, as running nginx binary may be different)? Could you please provide frontend's debug log? -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Sun May 12 01:53:47 2013 From: nginx-forum at nginx.us (dktp1) Date: Sat, 11 May 2013 21:53:47 -0400 Subject: Reading request_body before passing to backend server Message-ID: <0a9d6c7ca77cdb69c37c35162b45d63f.NginxMailingListEnglish@forum.nginx.org> I am trying to read if a specific value is set (passed) as a POST using request_body. What I am trying to do is: server { .. if ($request_body ~ "API-value-Iwanttocheck") { set $my_api "TRUE"; } And later on pass to the backend server: location / { proxy_pass http://backend:80; } However, that never seems to match. I even enabled logging and modified my log_format to have: " "POSTREQUEST:$request_body" " And I can see in the log when I don't have the if statement. When I add the if statement, it doesn't show up in the logs any more. Anyone have any idea? Or a simple way to check if a value set via POST is actually there? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239091,239091#msg-239091 From daniel.black at openquery.com Sun May 12 02:46:41 2013 From: daniel.black at openquery.com (Daniel Black) Date: Sun, 12 May 2013 12:46:41 +1000 (EST) Subject: nginx-1.4 proxy requests being continious In-Reply-To: <20130511234300.GT69760@mdounin.ru> Message-ID: <1444571869.5750.1368326801911.JavaMail.root@zimbra.lentz.com.au> Hi! > > > proxy_next_upstream error timeout invalid_header http_500 http_502 > > > http_503 http_504 http_404; > > What you describe looks very familiar - there was such a bug which > manifested itself with backup servers and proxy_next_upstream > http_404. It was fixed in 1.3.0/1.2.1 though: > > > *) Bugfix: nginx might loop infinitely over backends if the > "proxy_next_upstream" directive with the "http_404" parameter was > used and there were backup servers specified in an upstream block. > > Are you sure you are using 1.4.1 on your frontend (note: it's > usually not enough to check version of nginx binary on disk, as > running nginx binary may be different)? Could you please provide > frontend's debug log? Quite right. I did update to 1.4.1 just afterwards. 2013-05-08 20:16:29 upgrade nginx 0.7.67-3+squeeze3 1.4.1-1~squeeze I definitely restarted the nginx-1.4.1 with no remnants of 0.7.67 around and haven't had the troubles when I re-tested. Thanks for the fix Maxim and digging up this changelog entry. Looking forward to putting it into production in the next few hours. Any troubles and I will grab a debug log for you. -- Daniel Black From lists-nginx at swsystem.co.uk Sun May 12 14:25:56 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 12 May 2013 15:25:56 +0100 Subject: location matching. Message-ID: <518FA674.1000903@swsystem.co.uk> I've just had to move subversion onto a server that's already serving network wordpress via nginx. Most things work via /svn in a subversion client but I can't for the life of me figure out how to stop /svn.*\.php hitting the fastcgi_pass. I'm sure it's simple and I'm just not seeing the wood for the trees. 
Here's the nginx access_log entry: 1.1.1.1 - - [12/May/2013:14:16:01 +0000] "OPTIONS /svn/live/s.php HTTP/1.1" 404 47 "-" "SVN/1.7.8/TortoiseSVN-1.7.11.23600 neon/0.29.6" server block: server { listen 80 default; ## listen for ipv4 listen 443 default ssl; ## listen for ipv4 listen [::]:80 default; ## listen for ipv6 listen [::]:443 default ssl; ## listen for ipv6 server_name abc.xyz.com; server_name def.xyz.com; root /srv/sites/docroot; ssl_certificate /etc/ssl/STAR.xyz.com.pem; ssl_certificate_key /etc/ssl/STAR.xyz.com.pem; index index.php index.php5 index.html index.htm; client_max_body_size 300M; location ~ /svn { proxy_pass http://apache; proxy_set_header X-Real-IP $remote_addr; } location ~ \.(php|php5) { fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php/wp-sites; } # multi site rule location / { try_files $uri $uri/ /index.php?$args; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } rewrite /files/$ /index.php last; set $cachetest "$document_root/wp-content/cache/ms-filemap/${host}${uri}"; if ($uri ~ /$) { set $cachetest ""; } if (-f $cachetest) { rewrite ^ /wp-content/cache/ms-filemap/${host}${uri} break; } if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } } nginx -v: nginx version: nginx/1.4.1 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-cache-purge --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/ngx_http_pinba_module --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/src/nginx/source/nginx-1.4.1/debian/modules/nginx-x-rid-header --with-ld-opt=-lossp-uuid Regards Steve. From contact at jpluscplusm.com Sun May 12 14:55:38 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 12 May 2013 15:55:38 +0100 Subject: location matching. 
In-Reply-To: <518FA674.1000903@swsystem.co.uk> References: <518FA674.1000903@swsystem.co.uk> Message-ID: Have you looked at the ^~ prefix mentioned in http://wiki.nginx.org/HttpCoreModule#location ? It looks like what you need ... -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From lists-nginx at swsystem.co.uk Sun May 12 15:21:14 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 12 May 2013 16:21:14 +0100 Subject: location matching. In-Reply-To: References: <518FA674.1000903@swsystem.co.uk> Message-ID: <518FB36A.3050209@swsystem.co.uk> On 12/05/2013 15:55, Jonathan Matthews wrote: > Have you looked at the ^~ prefix mentioned in > http://wiki.nginx.org/HttpCoreModule#location ? > > It looks like what you need ... I thought I'd tried that, and even with the change in config it's still giving me the 404 errors. changed config: location ^~ /svn { proxy_pass http://apache; proxy_set_header X-Real-IP $remote_addr; } location ~* \.(php|php5) { ... } Steve. From contact at jpluscplusm.com Sun May 12 15:34:07 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 12 May 2013 16:34:07 +0100 Subject: location matching. In-Reply-To: <518FB36A.3050209@swsystem.co.uk> References: <518FA674.1000903@swsystem.co.uk> <518FB36A.3050209@swsystem.co.uk> Message-ID: On 12 May 2013 16:21, Steve Wilson wrote: > On 12/05/2013 15:55, Jonathan Matthews wrote: >> Have you looked at the ^~ prefix mentioned in >> http://wiki.nginx.org/HttpCoreModule#location ? >> >> It looks like what you need ... > > I thought I'd tried that, and even with the change in config it's still > giving me the 404 errors. It might just be some unintended rewrite you're doing at the server level. Those don't look very nice, IMHO. May I strongly suggest that you separate SVN access out by a different host header? Then you won't have to deal with an increasingly complex single server{} where this sort of thing might happen when you make the slightest change in the future ... Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From lists-nginx at swsystem.co.uk Sun May 12 15:54:40 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 12 May 2013 16:54:40 +0100 Subject: location matching. In-Reply-To: References: <518FA674.1000903@swsystem.co.uk> <518FB36A.3050209@swsystem.co.uk> Message-ID: <518FBB40.3080707@swsystem.co.uk> On 12/05/2013 16:34, Jonathan Matthews wrote: > On 12 May 2013 16:21, Steve Wilson wrote: >> On 12/05/2013 15:55, Jonathan Matthews wrote: >>> Have you looked at the ^~ prefix mentioned in >>> http://wiki.nginx.org/HttpCoreModule#location ? >>> >>> It looks like what you need ... >> >> I thought I'd tried that, and even with the change in config it's still >> giving me the 404 errors. > > It might just be some unintended rewrite you're doing at the server > level. Those don't look very nice, IMHO. > > May I strongly suggest that you separate SVN access out by a different > host header? Then you won't have to deal with an increasingly complex > single server{} where this sort of thing might happen when you make > the slightest change in the future ... That's my last option. Unfortunately I've had to move svn from a dedicated server, I'm not sure if svn clients support SNI but I've only 1 IP address available on the current machine and it's already doing ssl which is why it's mixed into the wordpress junk. Steve. 
From lists-nginx at swsystem.co.uk Sun May 12 16:28:00 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 12 May 2013 17:28:00 +0100 Subject: location matching. In-Reply-To: <518FBB40.3080707@swsystem.co.uk> References: <518FA674.1000903@swsystem.co.uk> <518FB36A.3050209@swsystem.co.uk> <518FBB40.3080707@swsystem.co.uk> Message-ID: <518FC310.3090005@swsystem.co.uk> The good news is it looks like subversion clients support SNI so I've got a new server block which is now working great. server { listen 80; ## listen for ipv4 listen 443; ## listen for ipv4 listen [::]:80; ## listen for ipv6 listen [::]:443; ## listen for ipv6 server_name svn.xyz.com; ssl_certificate /etc/ssl/STAR.xyz.com.pem; ssl_certificate_key /etc/ssl/STAR.xyz.com.pem; client_max_body_size 300M; access_log /var/log/nginx/svn_access.log; location ^~ /svn { proxy_pass http://apache; proxy_set_header X-Real-IP $remote_addr; } location / { return 200 "There's nothing at $scheme://$host$uri"; } } Thanks for the help. Steve. From nginx-forum at nginx.us Mon May 13 05:56:53 2013 From: nginx-forum at nginx.us (dcreatorx) Date: Mon, 13 May 2013 01:56:53 -0400 Subject: JavaScript and CSS minifiers questions Message-ID: Hi, I'm trying to set up JavasCript and CSS minify via the CPAN modules. I'm using debian 6 64 bit squeeze. I have compiled Nginx with perl module, installed all build dependencies before compiling, downloaded CPAN JS minify and CSS minify, compiled and installed them. When I run Nginx in debug mode, there seems to be no noticeable error on the perl module side in the logs. But there is a problem : When I run Nginx with those two modules enabled, the CSS and JavasCript on the site disappear. Looks like if the perl scripts worked but the server doesn't serve the compressed & cached version back. The only problem I'm being able to see is that when I do execute perl /usr/local/nginx/perl/Minify.pm The following output comes out : Can't load '/usr/local/lib/perl/5.10.1/auto/nginx/nginx.so' for module nginx: /usr/local/lib/perl/5.10.1/auto/nginx/nginx.so: undefined symbol: ngx_http_core_module at /usr/lib/perl/5.10/XSLoader.pm line 70. at /usr/local/lib/perl/5.10.1/nginx.pm line 56 Compilation failed in require at Minify.pm line 2. BEGIN failed--compilation aborted at Minify.pm line 2. I am not sure anymore what could be wrong, any help with this will be much appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239105,239105#msg-239105 From nginx-forum at nginx.us Mon May 13 06:22:14 2013 From: nginx-forum at nginx.us (Larry) Date: Mon, 13 May 2013 02:22:14 -0400 Subject: Nginx -> strategy ? Message-ID: <7ae6a3d0561ec6756eef2afdb72b91de.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to know how nginx could deal with this situation the most comfortable way to serve static files : 1) Is tree traversing fast on xfs/ext4 filesystems ? aaa/bbb/ccc number of files inside the last subfolder is approx 2000. 2) Will nginx prefer another strategy ? It is a bit a out-of-any-other-factors-involved question. Many thanks ! Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239106,239106#msg-239106 From nginx-forum at nginx.us Mon May 13 06:37:03 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 13 May 2013 02:37:03 -0400 Subject: Nginx -> strategy ? 
In-Reply-To: <7ae6a3d0561ec6756eef2afdb72b91de.NginxMailingListEnglish@forum.nginx.org> References: <7ae6a3d0561ec6756eef2afdb72b91de.NginxMailingListEnglish@forum.nginx.org> Message-ID: hi, i faced a similar question for a client with a lot of files and found out, after a lot of testing and benchmarking, it nearly doesnt matter if we serve those files from a ram-tmpfs or use the systems os-cache (given, we have plenty of it). so the os (linux) seems to do a good job. the servers were designed to serve static files only. YMMV regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239106,239107#msg-239107 From mdounin at mdounin.ru Mon May 13 07:09:16 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 May 2013 11:09:16 +0400 Subject: Reading request_body before passing to backend server In-Reply-To: <0a9d6c7ca77cdb69c37c35162b45d63f.NginxMailingListEnglish@forum.nginx.org> References: <0a9d6c7ca77cdb69c37c35162b45d63f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130513070916.GX69760@mdounin.ru> Hello! On Sat, May 11, 2013 at 09:53:47PM -0400, dktp1 wrote: > I am trying to read if a specific value is set (passed) as a POST using > request_body. > > What I am trying to do is: > > server { > .. > if ($request_body ~ "API-value-Iwanttocheck") { > set $my_api "TRUE"; > } > > And later on pass to the backend server: > > location / { > proxy_pass http://backend:80; > } > > > However, that never seems to match. I even enabled logging and modified my > log_format to have: > > " > "POSTREQUEST:$request_body" > " > > And I can see in the log when I don't have the if statement. When I add the > if statement, it doesn't show up in the logs any more. > > Anyone have any idea? Or a simple way to check if a value set via POST is > actually there? The $request_body variable isn't available in rewrites as request body isn't yet read at this phase. Request body can be read and examined using embedded perl module, see here: http://nginx.org/en/docs/http/ngx_http_perl_module.html -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 13 10:56:11 2013 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 13 May 2013 06:56:11 -0400 Subject: [1.4.1] Finding docroot directory? Message-ID: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> Hello I'm running Nginx 1.4.1 on an appliance running Debian 6, and can't find from which directory Nginx is serving files. According to /etc/nginx.conf, the location for "/" is "root /var/www;", but even renaming the default index.html still displays the familiar "Welcome to nginx!". After using "apt-get upgrade" to upgrade from 1.2.? to 1.4.1, I ran "/etc/init.d/nginx restart" just in case, but it made no difference. Is there a way to have Nginx tell from where it's serving files? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239114,239114#msg-239114 From nginx-forum at nginx.us Mon May 13 11:02:14 2013 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 13 May 2013 07:02:14 -0400 Subject: [1.4.1] Finding docroot directory? 
In-Reply-To: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> References: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6b62994161ba829b6d78a948a92855a4.NginxMailingListEnglish@forum.nginx.org> I found a work-around: Reading "/var/log/nginx/error.log" includes a warning that Nginx can't find "/usr/share/nginx/html/favicon.ico", so for some reason, Nginx uses /usr/share/nginx/html/ instead of /var/www. I'm confused about the multiple configuration files used by Nginx: /etc/nginx.conf /etc/conf.d/ /etc/sites-available/ /etc/sites-enabled/ Why are there more than one? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239114,239115#msg-239115 From paulnpace at gmail.com Mon May 13 11:06:35 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Mon, 13 May 2013 04:06:35 -0700 Subject: [1.4.1] Finding docroot directory? In-Reply-To: <6b62994161ba829b6d78a948a92855a4.NginxMailingListEnglish@forum.nginx.org> References: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> <6b62994161ba829b6d78a948a92855a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: This Ars Technica article is where I learned how to use nginx. Lee Hutchinson does a good job explaining all that. http://arstechnica.com/gadgets/2012/11/how-to-set-up-a-safe-and-secure-web-server/ On Mon, May 13, 2013 at 4:02 AM, Shohreh wrote: > I found a work-around: Reading "/var/log/nginx/error.log" includes a warning > that Nginx can't find "/usr/share/nginx/html/favicon.ico", so for some > reason, Nginx uses /usr/share/nginx/html/ instead of /var/www. > > I'm confused about the multiple configuration files used by Nginx: > > /etc/nginx.conf > /etc/conf.d/ > /etc/sites-available/ > /etc/sites-enabled/ > > Why are there more than one? > > Thank you. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239114,239115#msg-239115 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon May 13 11:07:31 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2013 12:07:31 +0100 Subject: location matching. In-Reply-To: <518FA674.1000903@swsystem.co.uk> References: <518FA674.1000903@swsystem.co.uk> Message-ID: <20130513110731.GW27406@craic.sysops.org> On Sun, May 12, 2013 at 03:25:56PM +0100, Steve Wilson wrote: Hi there, I see from the end of the thread that you have a working system now by using separate server{} blocks, so that's all good. It might be interesting to see why the original did not do what you wanted, and to see whether there is a reasonable way of not using two server{} blocks. Feel free to ignore this if it's not interesting to you :-) > I've just had to move subversion onto a server that's already serving > network wordpress via nginx. Most things work via /svn in a subversion > client but I can't for the life of me figure out how to stop /svn.*\.php > hitting the fastcgi_pass. The incoming request is for "/svn/live/s.php". The only location{} blocks are > location ~ /svn { > location ~ \.(php|php5) { > location / { > location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { and, of those, this request should be handled in the first one. But you report that it is actually handled in the second. As well as the location{} blocks, there are server-level rewrites and if() blocks, which will take effect before the location-matching. 
> rewrite /wp-admin$ $scheme://$host$uri/ permanent; That one doesn't apply here. > rewrite /files/$ /index.php last; Neither does that one. > set $cachetest > "$document_root/wp-content/cache/ms-filemap/${host}${uri}"; > if (-f $cachetest) { That most likely doesn't, but it can't be guaranteed. > if ($uri !~ wp-content/plugins) { That "if" does apply, but the following rewrite doesn't. > if (!-e $request_filename) { That "if" most likely does apply, so the next rewrites are tried... > rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; > rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; > rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; and the third one there matches, so there's an internal rewrite to "/live/s.php", and the location match actually starts with that uri. Which matches the second location{} block above, and is consistent with what you report. It looks like your "wordpress" nginx config (which is similar to what is currently on both http://wiki.nginx.org/WordPress and http://codex.wordpress.org/Nginx) assumes that wordpress is the only purpose of this nginx server. It would probably take quite a bit of testing to safely get rid of the assumptions made in the server-level "if" and "rewrite" directives used. It looks like some could probably be replaced with location{} blocks, but "some" isn't "all". So, unless someone has the time and inclination to re-do and test that to allow one server{} block share wordpress and anything else, it looks like the simple change is to use a separate server{} block for wordpress. Which is what you did. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 13 11:18:44 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2013 12:18:44 +0100 Subject: [1.4.1] Finding docroot directory? In-Reply-To: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> References: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130513111844.GX27406@craic.sysops.org> On Mon, May 13, 2013 at 06:56:11AM -0400, Shohreh wrote: Hi there, > I'm running Nginx 1.4.1 on an appliance running Debian 6, and can't find > from which directory Nginx is serving files. If "nginx -V" doesn't tell you, and the log files don't tell you, then you can do something like location = /docroot-test { return 200 "docroot is $document_root\n"; } followed by curl -i http://localhost/docroot-test You'll need to make sure that the new location{} is in the server{} that will handle this request, of course, but it's hard to argue with asking the server itself what directory it is looking in. Any *other* location{} block might have a different "root" or "alias" defined, but that should be clear from the config file. f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 13 11:24:45 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2013 12:24:45 +0100 Subject: [1.4.1] Finding docroot directory? In-Reply-To: <6b62994161ba829b6d78a948a92855a4.NginxMailingListEnglish@forum.nginx.org> References: <56b462008775d41d5864c945349de9c8.NginxMailingListEnglish@forum.nginx.org> <6b62994161ba829b6d78a948a92855a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130513112445.GY27406@craic.sysops.org> On Mon, May 13, 2013 at 07:02:14AM -0400, Shohreh wrote: Hi there, > I'm confused about the multiple configuration files used by Nginx: > > /etc/nginx.conf > /etc/conf.d/ > /etc/sites-available/ > /etc/sites-enabled/ > > Why are there more than one? 
nginx uses exactly one -- the one named in the "-c" argument when run, or else the compiled-in value. That one file may use the "include" directive to read in other files, if the administrator thinks that this is more useful to them. And if you're not running the equivalent of "/usr/local/nginx/sbin/nginx", then whatever startup sequence you are running might introduce its own "-c" argument. That's up to the administrator, and not nginx. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Mon May 13 11:32:46 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 May 2013 15:32:46 +0400 Subject: nginx-1.2.9 Message-ID: <20130513113245.GI69760@mdounin.ru> Changes with nginx 1.2.9 13 May 2013 *) Security: contents of worker process memory might be sent to a client if HTTP backend returned specially crafted response (CVE-2013-2070); the bug had appeared in 1.1.4. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon May 13 11:33:15 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 May 2013 15:33:15 +0400 Subject: nginx security advisory (CVE-2013-2070) Message-ID: <20130513113315.GM69760@mdounin.ru> Hello! A security problem related to CVE-2013-2028 was identified, affecting some previous nginx versions if proxy_pass to untrusted upstream HTTP servers is used. The problem may lead to a denial of service or a disclosure of a worker process memory on a specially crafted response from an upstream proxied server. The problem affects nginx 1.1.4 - 1.2.8, 1.3.0 - 1.4.0. The problem is already fixed in nginx 1.5.0, 1.4.1. Version 1.2.9 was released to address the issue in the 1.2.x legacy branch. Patch for nginx 1.3.9 - 1.4.0 is the same as for CVE-2013-2028: http://nginx.org/download/patch.2013.chunked.txt Patch for older nginx versions (1.1.4 - 1.2.8, 1.3.0 - 1.3.8) can be found here: http://nginx.org/download/patch.2013.proxy.txt -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 13 11:41:54 2013 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 13 May 2013 07:41:54 -0400 Subject: [1.4.1] Finding docroot directory? In-Reply-To: <20130513112445.GY27406@craic.sysops.org> References: <20130513112445.GY27406@craic.sysops.org> Message-ID: Thanks everyone for the infos. "nginx -V" doesn't say where the docroot is, but I noticed that /etc/nginx/nginx.conf does use "/etc/nginx/conf.d/*.conf", where default.conf says: [code] location / { root /usr/share/nginx/html [/code] Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239114,239131#msg-239131 From nginx-forum at nginx.us Mon May 13 13:06:44 2013 From: nginx-forum at nginx.us (krukow) Date: Mon, 13 May 2013 09:06:44 -0400 Subject: Dynamic upstream configuration In-Reply-To: References: Message-ID: mamoos1 Wrote: ------------------------------------------------------- [snip...] > I know that I can configure keep-alive with upstream - but that > requires me to know upfront which servers will be used (which I don't, > since I dynamically fetch content according to headers). > Is there a way to configure that the proxy will keep-alive connections > with backend servers dynamically, or for some time? I'm facing a similar issue right now where I need to enable persistent/keepalive connections to a dynamic set of upstreams. This is not running SSL, but for various reasons upstream servers are slow at establishing connections and persistent connections are a real performance boost. We dynamically find the upstream using lua scripting. 
What I'd like is to dynamically define an upstream including keepalive (ideally in lua). Has anyone found a way to achieve this, or is it simple not possible at the moment? - Karl Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238424,239142#msg-239142 From nginx-forum at nginx.us Mon May 13 13:13:02 2013 From: nginx-forum at nginx.us (mamoos1) Date: Mon, 13 May 2013 09:13:02 -0400 Subject: Dynamic upstream configuration In-Reply-To: References: Message-ID: <52f31ee2fa8593afff287b86c9f0f86c.NginxMailingListEnglish@forum.nginx.org> Hi! Unfortunately, I didn't find a very good solution for this, but, in nginx 1.4+ the issue with SSL backend keepalive was fixed - so if you have a list of all your possible servers, your configuration file will look horrible, but you can achieve this by having all the backend servers defined as upstreams with different "server" clauses for each of the proxies. Cheers, - Roy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238424,239143#msg-239143 From nginx-forum at nginx.us Mon May 13 16:26:32 2013 From: nginx-forum at nginx.us (guillaumeserton) Date: Mon, 13 May 2013 12:26:32 -0400 Subject: Multiple site configuration Message-ID: <3c23c3c57509c7ca73faefec33acf2a8.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to set nginx to use several website, and that they are reachable for some of them with a domain name or in localhost. Example: Site1: only accessible on localhost with 192.168.x.x/site1 (root: /var/www/site1) Site2: accessible accessible on site2.eu domain and also on localhost with 192.168.x.x/site2 (root: /var/www/site2) Site3: accessible on the subdomain sub.site2.eu and also on localhost with 192.168.x.x/subsite2 (root: /var/www/subsite2) I have set two sites (site2 and site3) on my configuration and reachable with domain and subdomain. But i can't reach them locallly with the ipaddress/nameofsite. I have only one site accessible directly under my local ip address. So, currently i have: Site2 (var/www/site2: reachable with his domain site2.eu or www.site2.eu and also on the local ip 192.168.x.x SIte3 (var/www/site3: reachable with the domain sub.site2.eu. Otherwise i can't reach him locally. I would like to reach him with 192.168.x.x/site3 Thanks for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239147,239147#msg-239147 From kworthington at gmail.com Mon May 13 16:56:57 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 13 May 2013 12:56:57 -0400 Subject: [nginx-announce] nginx-1.2.9 In-Reply-To: <20130513113251.GJ69760@mdounin.ru> References: <20130513113251.GJ69760@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.X.X for Windows http://goo.gl/rNjpl (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Mon, May 13, 2013 at 7:32 AM, Maxim Dounin wrote: > Changes with nginx 1.2.9 13 May > 2013 > > *) Security: contents of worker process memory might be sent to a > client > if HTTP backend returned specially crafted response (CVE-2013-2070); > the bug had appeared in 1.1.4. 
> > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon May 13 16:58:48 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 13 May 2013 12:58:48 -0400 Subject: Multiple site configuration In-Reply-To: <3c23c3c57509c7ca73faefec33acf2a8.NginxMailingListEnglish@forum.nginx.org> References: <3c23c3c57509c7ca73faefec33acf2a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: The problem I see is you try to address the process of requests differently whether from inside your network or from the outside. You may already solve the problem of your outside interface easily with server names (knowing how nginx processes a request ), listening for 'localhost', 'site2.eu' and 'sub.site.eu'. What's uncommon is your way of addressing local websites, all locations (/site2, /subsite2) inside a unique server listening on a local IP address. That breaks the paradigm and thus becomes incompatible with the previous work. The 'Mixed name-based and IP-based server names' section of the How Nginx processes a request document provides you with intelligence on how things may be done: server { listen 192.168.x.x; # Default port is 80 when not specified # Not sure whether or not you can use CIDR notation here, such as 192.168/16; # No 'server_name' directive here, default is empty, which means all request for this IP on port 80 without another server matching the server_name will end up here # Content of site 1 here } server { listen 80; # Default listens on all addresses/interfaces when only a port is specified server_name site2.eu; # Use location redirection to remove the www in host, see http://forum.nginx.org/read.php?2,238910,238911#msg-238911 # Content of site 2 here } server { listen 80; server_name sub.site2.eu; # Content of site 3 here } site2.eu and sub.site2.eu will be accessible from inside/outside your network using the same domain name. site 1 will only be accessible from the 192.168.x.x address which shouldn't be routed from outside your network, thus making it locally accessible only. You may add whatever server name you wish to it, it will stick to that network. Another addendum: document on how Nginx choose its default server for a particular request in How nginx processes a requestto learn how nginx will address special cases. Hope I helped. --- *B. R.* On Mon, May 13, 2013 at 12:26 PM, guillaumeserton wrote: > Hello, > > I would like to set nginx to use several website, and that they are > reachable for some of them with a domain name or in localhost. > > Example: > Site1: only accessible on localhost with 192.168.x.x/site1 (root: > /var/www/site1) > Site2: accessible accessible on site2.eu domain and also on localhost with > 192.168.x.x/site2 (root: /var/www/site2) > Site3: accessible on the subdomain sub.site2.eu and also on localhost with > 192.168.x.x/subsite2 (root: /var/www/subsite2) > > I have set two sites (site2 and site3) on my configuration and reachable > with domain and subdomain. But i can't reach them locallly with the > ipaddress/nameofsite. I have only one site accessible directly under my > local ip address. 
> > So, currently i have: > > Site2 (var/www/site2: reachable with his domain site2.eu or www.site2.euand > also on the local ip 192.168.x.x > SIte3 (var/www/site3: reachable with the domain sub.site2.eu. Otherwise i > can't reach him locally. I would like to reach him with 192.168.x.x/site3 > > > Thanks for your help. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239147,239147#msg-239147 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon May 13 19:46:27 2013 From: agentzh at gmail.com (agentzh) Date: Mon, 13 May 2013 12:46:27 -0700 Subject: [ANN] ngx_openresty stable version 1.2.7.8 and devel version 1.2.8.3 Message-ID: Hello! I've just released ngx_openresty stable version 1.2.7.8 and devel version 1.2.8.3 which both just contain the official patch for the nginx security advisory CVE-2013-2070: http://openresty.org/#Download Best regards, -agentzh From aldernetwork at gmail.com Mon May 13 21:35:00 2013 From: aldernetwork at gmail.com (Alder Network) Date: Mon, 13 May 2013 14:35:00 -0700 Subject: async notification in Nginx In-Reply-To: References: Message-ID: By browsing the source code, it seems ngx_event_t can be used for this purpose, but looks like ngx_event_t has to be associated with ngx_connection_t, does ngx_connection_t have to be a socket, or a regular file descriptor will do? Is there any forum for wrting nginx modules? On Fri, May 10, 2013 at 6:37 PM, Alder Network wrote: > I am writing a Niginx module that from several of worker threads > that I created by own, I need to notify Nginx's main thread so > the main thread (single-threaded nginx) doesn't need to poll > periodically in timer context. What would be the reliable way to > do that in Nginx? Anybody advice would be appreciated. > > - Alder > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 13 21:50:14 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2013 22:50:14 +0100 Subject: Multiple site configuration In-Reply-To: <3c23c3c57509c7ca73faefec33acf2a8.NginxMailingListEnglish@forum.nginx.org> References: <3c23c3c57509c7ca73faefec33acf2a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130513215014.GZ27406@craic.sysops.org> On Mon, May 13, 2013 at 12:26:32PM -0400, guillaumeserton wrote: Hi there, > I would like to set nginx to use several website, and that they are > reachable for some of them with a domain name or in localhost. Based on the connected ip:port and the provided Host: header, nginx will choose one server{} block to handle one request. You can nominate the "server_name" to be handled by each server{}, and you can nominate one default server{} per ip:port. I'm going to assume that when you say "on localhost with 192.168.x.x/site1", that that is equivalent to "using any not-otherwise-specified domain name". If you really mean "only when using the IP address", that can be added separately. What you want can be done; but you will need to make sure that every relative link within the content makes sense both when it is relative to "/site1/" and to "/". Essentially, this means that no relative link starts with "/", but instead uses the correct number of "../" segments if appropriate. Without that, you will find that lots of linked things do not work. 
You will configure: one server{} for each of the individual domain names; one (default) server{} with multiple location{}s, for each of the individual domain names. > Example: > Site1: only accessible on localhost with 192.168.x.x/site1 (root: > /var/www/site1) server { # this is the default server location = /site1 { return 301 /site1/; } location ^~ /site1/ { alias /var/www/site1/ # or # root /var/www; } } > Site2: accessible accessible on site2.eu domain and also on localhost with > 192.168.x.x/site2 (root: /var/www/site2) server { server_name site2.eu; root /var/www/site2; } and also, add to the earlier default server block: location = /site2 { return 301 /site2/; } location ^~ /site2/ { alias /var/www/site2/ } > Site3: accessible on the subdomain sub.site2.eu and also on localhost with > 192.168.x.x/subsite2 (root: /var/www/subsite2) server { server_name sub.site2.eu; root /var/www/subsite2; } and two extra location{}s in the default server block. f -- Francis Daly francis at daoine.org From matt.starcrest at yahoo.com Tue May 14 08:50:07 2013 From: matt.starcrest at yahoo.com (Matt Starcrest) Date: Tue, 14 May 2013 01:50:07 -0700 (PDT) Subject: hey Message-ID: <1368521407.19494.YahooMailNeo@web163506.mail.gq1.yahoo.com> http://texturepanel.com/news_xml.php?xskqducca792zezkvdg matt.starcrest Matt Starcrest ======================= Hold on - wait, maybe the answer's looking for you. % -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 14 09:38:06 2013 From: nginx-forum at nginx.us (peku33) Date: Tue, 14 May 2013 05:38:06 -0400 Subject: Fancyindex module hangs connection Message-ID: Hi everybody. I'm facing a problem with my nginx 1.4.0 server bulit from portage on gentoo (fresh instllation, amd64). After configuring one location to use fancyindex on; directory listing loads to footer and then connection freezes for about 1 minute. In this time no data is received, but browser shows that the page is still being loaded. After one minute, server finally sends fancyindex footer and connection closes. Here is my nginx.conf: http://pastebin.com/cXFWrQug Here is my security.conf: http://pastebin.com/4JEz1f35 And here is how the transfer goes: http://pastebin.com/FqrA1sZB Here are the flags I used for building: http://pastebin.com/dEm4j6VJ While using autoindex instead of fancyindex everything works correct. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239168,239168#msg-239168 From yaoweibin at gmail.com Tue May 14 09:40:36 2013 From: yaoweibin at gmail.com (Weibin Yao) Date: Tue, 14 May 2013 17:40:36 +0800 Subject: [ANNOUNCE] Tengine-1.4.6 (fixed CVE-2013-2070) Message-ID: Hi folks, Tengine-1.4.6 (development version) has been released. You can either checkout the source code from github: https://github.com/alibaba/tengine or download the tar ball directly: http://tengine.taobao.org/download/tengine-1.4.6.tar.gz We have merged the changes from nginx-1.2.9 which fixed the security problem CVE-2013-2070. Contents of worker process memory might be disclosed if HTTP backend server returned specially crafted response. This could cause denial of service or a disclosure of memory. If you are using Tengine-1.4.x and proxy_pass to untrusted upstream HTTP servers, please upgrade to this version as soon as possible! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david at geistert.info Tue May 14 11:27:47 2013 From: david at geistert.info (David Geistert) Date: Tue, 14 May 2013 13:27:47 +0200 Subject: Debian 7 Message-ID: <51921FB3.3080906@geistert.info> Hey, I only want to ask, when the Debian Wheezy package will be released in http://nginx.org/packages/debian/ Best Regards David From black.fledermaus at arcor.de Tue May 14 11:57:37 2013 From: black.fledermaus at arcor.de (basti) Date: Tue, 14 May 2013 13:57:37 +0200 Subject: Debian 7 In-Reply-To: <51921FB3.3080906@geistert.info> References: <51921FB3.3080906@geistert.info> Message-ID: <519226B1.2080105@arcor.de> Have a look at http://nginx.org/packages/debian/pool/nginx/n/nginx/ Try squeeze files or build your own. All needed packages for build are in the link above. best regards Basti Am 14.05.2013 13:27, schrieb David Geistert: > Hey, > I only want to ask, when the Debian Wheezy package will be released in > http://nginx.org/packages/debian/ > > Best Regards > David > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From sb at waeme.net Tue May 14 12:29:12 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Tue, 14 May 2013 16:29:12 +0400 Subject: Debian 7 In-Reply-To: <51921FB3.3080906@geistert.info> References: <51921FB3.3080906@geistert.info> Message-ID: <6D9E8FEE-3017-4AD7-9EE8-C750E4BB5E99@waeme.net> On 14 May2013, at 15:27 , David Geistert wrote: > Hey, > I only want to ask, when the Debian Wheezy package will be released in http://nginx.org/packages/debian/ http://mailman.nginx.org/pipermail/nginx/2013-May/038908.html From stefanxe at gmx.net Tue May 14 12:51:48 2013 From: stefanxe at gmx.net (Stefan Xenon) Date: Tue, 14 May 2013 20:51:48 +0800 Subject: proxy doesn't cache In-Reply-To: <20130511145558.GO69760@mdounin.ru> References: <518C65E6.8070400@gmx.net> <20130511145558.GO69760@mdounin.ru> Message-ID: <51923364.5030800@gmx.net> Thanks a lot Maxim. This really solved my problem. :-) Stefan Am 11.05.2013 22:55, schrieb Maxim Dounin: > Hello! > > On Fri, May 10, 2013 at 11:13:42AM +0800, Stefan Xenon wrote: > >> Hi! >> I want to use nginx as a caching proxy in front of an OCSP responder. >> The OCSP requests are transmitted via HTTP POST. >> >> Hence, I configured nginx as follows: >> >> proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m >> max_size=1000m inactive=600m; >> server { >> server_name localhost; >> location / { >> proxy_pass http://213.154.225.237:80; #ocsp.cacert.org >> proxy_cache my-cache; >> proxy_cache_methods POST; >> proxy_cache_key "$scheme$proxy_host$uri$request_body"; >> proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> } >> ) >> >> I can access the OCSP responder through nginx and responses are received >> as expected - no issue. The problem is that nginx doesn't cache the >> responses. Note that OCSP nonces are *not* being sent as part of the >> request. Using Wireshark and nginx' debug log, I verified that all my >> requests are identical. How to configure nginx that it caches the responses? 
>> >> Note, I use the following command for testing: >> openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url >> http://localhost/ -serial > > You configuration doesn't contain proxy_cache_valid (see > http://nginx.org/r/proxy_cache_valid), and in the same time via > proxy_ignore_headers it ignores all headers which may be used to > set response validity based on response headers. That is, no > responses will be cached with the configuration above. > > You probably want to add something like > > proxy_cache_valid 200 1d; > > to your configuration. > From nginx-forum at nginx.us Tue May 14 13:34:01 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Tue, 14 May 2013 09:34:01 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals Message-ID: I am getting a 502 Bad Gateway on my server after any random intervals. If i run the following code the server starts working again: /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php5-cgi -P /var/run/fastcgi-php.pid I have tried every possible way available on the internet. Can anyone help I am kinda newbie with server configs and stuff..!! Thanx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239181#msg-239181 From nginx-forum at nginx.us Tue May 14 14:17:13 2013 From: nginx-forum at nginx.us (vilsack) Date: Tue, 14 May 2013 10:17:13 -0400 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <20130507203617.GT27406@craic.sysops.org> References: <20130507203617.GT27406@craic.sysops.org> Message-ID: <10792ced3c3fdb14cfee9c44dd8cc584.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Sun, May 05, 2013 at 07:04:21AM -0400, zakaria wrote: > > Francis Daly Wrote: > > Hi there, > > > Thank you for confirm it. > > Its nginx bug #321 http://trac.nginx.org/nginx/ticket/321 > > Ah, good find -- I hadn't spotted that it was a "known issue". > > > location ~ [^/]\.php(/|$) { > > Just as another alternative, it is probably possible to use named > captures > in the "location" regex and avoid using fastcgi_split_path_info at all > -- > with everything up to and including ".php" being used as the script > name, > and something like "(/.*)?$" being the path info. > > But what you have here already looks like it should be working, so can > probably be left alone. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Francis, could you please show me an example? I'm encountering this issue on Ubuntu 12.04.2 LTS while trying to set up Phalcon; a c lib/framework for PHP. So far the other fixes in this thread haven't worked. Phalcon's Doc: http://docs.phalconphp.com/en/latest/reference/nginx.html I think Phalcon is dependent on how fastcgi_split_info works, so I'm trying to "replicate" how it handles the rewrite using regex. Am I going about this the wrong way? 
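For reference, a minimal sketch of the named-captures alternative that Francis mentions in the quoted text above: the location regex itself captures everything up to ".php" as the script name and the rest as the path info, so fastcgi_split_path_info (and its interaction with try_files) is avoided. The capture names, the =404 fallback and the php-fpm socket path below are illustrative assumptions, not Phalcon's documented setup:

    location ~ ^(?<script_name>.+\.php)(?<path_info>/.*)?$ {
        # the captured variables come from the location match itself,
        # so try_files does not disturb them
        try_files $script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$script_name;
        fastcgi_param PATH_INFO       $path_info;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }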
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238825,239155#msg-239155 From nginx-forum at nginx.us Tue May 14 14:24:02 2013 From: nginx-forum at nginx.us (d3f3kt) Date: Tue, 14 May 2013 10:24:02 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: References: Message-ID: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> Increase the pm.max_children, pm.start_servers, pm.min_spare_servers and pm.max_spare_servers in the www.conf this should help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239182#msg-239182 From reallfqq-nginx at yahoo.fr Tue May 14 17:05:06 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 14 May 2013 13:05:06 -0400 Subject: Debian package In-Reply-To: References: Message-ID: Thanks for your answer. I don't really see the problem. If the user also uses role-based sources (which is not the default setting in Squeeze, didn't check in Wheezy), if he tries to get the 'stable' Nginx package, then his system has already the 'stable' packages from the distribution, hence he satisfies all required dependencies. When 'stable' goes 'old-stable' and 'testing' goes 'stable', on his next upgrade, he'll pull all system packages from the new stable, at the same time he pulls the new Nginx package. In short: Someone can't pull the 'stable' Nginx package without having his/her system pulling the 'stable' packages from Debian. Of course, to support users which will need time to make the upgrade to the new stable, you will need to maintain an 'old-stable' package for the 'old-stable' ditribution, hence a version of Nginx that people can install on Squeeze (without spdy or openssl >= 1.0.1). I suppose that you'll freeze the 'old-stable' at some point (which you'll need to decide since 1.4 got out before Wheezy) and only rebuild the 'old-stable' package in cases of security issues. IMHO, there is no system incoherence scenario in which someone would try to install the Wheezy-built Nginx on Squeeze using the 'stable' role-based sources. --- *B. R.* On Sat, May 11, 2013 at 6:19 PM, Sergey Budnevitch wrote: > > On 11 May2013, at 07:52 , B.R. wrote: > > > Hello, > > > > Is Wheezy supported for Nginx? > > Not yet. I plan to build packages for wheezy and ubuntu 13.04 in a > week or two. > > > > > Would it be possible to use role-based package selection for the Nginx > APT source? > > No, I see potential problems here: nginx binary depends on certain > libraries versions, > for example nginx for wheezy will be built with spdy module and thus will > require > openssl >= 1.0.1. But what will be happened when testing becames stable > and stable - > oldstable? As far as I understand it may result in someone will install > wheezy package > on squeeze and it will not start at all. > > > > That means using: > > deb http://nginx.org/packages/debian/ stable > > nginx > > deb-src > > http://nginx.org/packages/debian/ stable nginx > > Instead of: > > deb http://nginx.org/packages/debian/ squeeze > > nginx > > deb-src > > http://nginx.org/packages/debian/ squeeze nginx > > Thanks, > > --- > > B. R. > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue May 14 18:07:04 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Tue, 14 May 2013 14:07:04 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for the help. It seems I don't have a www.conf in my nginx folder. This might sound naive but I am not able to find www.conf file. Upon search, I got to know that it should be in opt folder. But there is nothing in it. I have a nginx running on fcgi and Debian Squeeze. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239192#msg-239192 From nginx-forum at nginx.us Tue May 14 18:35:14 2013 From: nginx-forum at nginx.us (d3f3kt) Date: Tue, 14 May 2013 14:35:14 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> Oh sorry, that was my fault. I thought you are using php-fpm. If you are using fcgi than you could use /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php5-cgi -P /var/run/fastcgi-php.pid -C 6 The number after -C is the amount of children. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239193#msg-239193 From francis at daoine.org Tue May 14 21:37:52 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 14 May 2013 22:37:52 +0100 Subject: Problem with fastcgi_split_path_info on ubuntu precise In-Reply-To: <10792ced3c3fdb14cfee9c44dd8cc584.NginxMailingListEnglish@forum.nginx.org> References: <20130507203617.GT27406@craic.sysops.org> <10792ced3c3fdb14cfee9c44dd8cc584.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130514213752.GA27406@craic.sysops.org> On Tue, May 14, 2013 at 10:17:13AM -0400, vilsack wrote: > Francis Daly Wrote: > > On Sun, May 05, 2013 at 07:04:21AM -0400, zakaria wrote: Hi there, > > > location ~ [^/]\.php(/|$) { > > > > Just as another alternative, it is probably possible to use named captures > > in the "location" regex and avoid using fastcgi_split_path_info at all > > -- with everything up to and including ".php" being used as the script > > name, and something like "(/.*)?$" being the path info. > Francis, could you please show me an example? location ~ ^(?.+\.php)(?/.*)?$ { is one possible way. Then use $script_name and $path_info as you see fit. > I'm encountering this issue on Ubuntu 12.04.2 LTS while trying to set up > Phalcon; a c lib/framework for PHP. So far the other fixes in this thread > haven't worked. "this issue" is "everything works fine, until you use try_files in the same location as fastcgi_split_path_info", unless I misunderstand things. It should be easy enough to test -- just comment out the try_files and see that your request gives the expected response. > Phalcon's Doc: http://docs.phalconphp.com/en/latest/reference/nginx.html > > I think Phalcon is dependent on how fastcgi_split_info works, so I'm trying > to "replicate" how it handles the rewrite using regex. Am I going about > this the wrong way? 
That Phalcon doc seems to suggest four distinct and different ways of configuring nginx, so that it works with four different ways of configuring Phalcon. None of them seem to use try_files in the same (useful) location as fastcgi_split_path_info. What is the one way that you have configured Phalcon? What is the matching one way that you have configured nginx? What request do you make, what response do you get, and what response do you expect? If your testing suggests that this is a different issue, it's probably worth creating a new thread for further responses. Cheers, f -- Francis Daly francis at daoine.org From svoop at delirium.ch Tue May 14 21:47:27 2013 From: svoop at delirium.ch (Svoop) Date: Tue, 14 May 2013 21:47:27 +0000 (UTC) Subject: error_page without content type Message-ID: Hi I'm setting a custom 404 page with: server { ... error_page 404 /errors/not-found.html; } And the page looks as follows:
Wmkt-logo

Seite nicht gefunden

Die angeforderte Seite existiert nicht.


Page non trouvée

Cette page n'existe pas.


Page Not Found

This page does not exist.


This is what I get with "curl -i http://wemakeit.ch/cache/doesnotexist": HTTP/1.1 404 Not Found Transfer-Encoding: chunked Connection: keep-alive Keep-Alive: timeout=20 Status: 404 Not Found X-Request-Id: 6fc3d91efbddf9a5f9dbf32f6d4aff32 X-Runtime: 0.002343 Date: Tue, 14 May 2013 21:46:48 GMT X-Rack-Cache: miss X-Content-Type-Options: nosniff X-Powered-By: Phusion Passenger 4.0.2 Server: nginx/1.2.6 + Phusion Passenger 4.0.2 There's not Content-Type header, which is why Chrome shows the page source instead of rendering the HTML. Any idea what I'm doing wrong here? Thanks! From r1ch+nginx at teamliquid.net Tue May 14 22:09:11 2013 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 14 May 2013 18:09:11 -0400 Subject: error_page without content type In-Reply-To: References: Message-ID: On Tue, May 14, 2013 at 5:47 PM, Svoop wrote: > Hi > > I'm setting a custom 404 page with: > > server { > ... > error_page 404 /errors/not-found.html; > } ... > > Any idea what I'm doing wrong here? > > Thanks! > Looks like your 404s are being generated by a backend, not by nginx. You may want to use fastcgi_intercept_errors / proxy_intercept_errors. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 14 22:41:46 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Tue, 14 May 2013 18:41:46 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5412d3e43747b858447b12c594929595.NginxMailingListEnglish@forum.nginx.org> this is something that i do whenever this happens. and error goes away. but it happens again. about 3-4 times a day. is there a way to solve this permanently? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239194#msg-239194 From nginx-forum at nginx.us Tue May 14 22:46:02 2013 From: nginx-forum at nginx.us (d3f3kt) Date: Tue, 14 May 2013 18:46:02 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Use this command with the "-C 6" and than you should be happy Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239204#msg-239204 From miguelmclara at gmail.com Tue May 14 23:05:14 2013 From: miguelmclara at gmail.com (Miguel Clara) Date: Wed, 15 May 2013 00:05:14 +0100 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I would suggest using php-fpm, its very easy to setup, in fact its integrated to recent releases of PHP (5.4++) On Tue, May 14, 2013 at 11:46 PM, d3f3kt wrote: > Use this command with the "-C 6" and than you should be happy > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239204#msg-239204 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed May 15 01:23:08 2013 From: nginx-forum at nginx.us (patng323) Date: Tue, 14 May 2013 21:23:08 -0400 Subject: Can I cache a page conditionally based on cookie value? Message-ID: Hi, Using NginX as a reverse proxy, I want to cache the response ONLY if the user has NOT logged in, which can be checked by testing the existence of a cookie. First I tried to use "if", but NginX complained that I cannot put a proxy_cache_valid inside an "if": location / { proxy_pass http://www.xyz.com; if ($cookie_login = '') { proxy_cache my-cache; proxy_cache_valid 200 301 302 60m; } } Then I tried to set the cache-duration to 0s by using a variable, but then NginX complained the time value is incorrect. set $cacheDuration 60m; if ($cookie_login != '') { set $cacheDuration 0s; } proxy_cache my-cache; proxy_cache_valid 200 301 302 $cacheDuration So is there any way to cache conditionally based on the existence of a cookie? Thanks. - Patrick Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239207,239207#msg-239207 From nginx-forum at nginx.us Wed May 15 07:12:23 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Wed, 15 May 2013 03:12:23 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: References: Message-ID: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> I am using php 5.3.3 but still not able to find php-fpm in it!! http://trainingjunction.in/phpinfo.php where am i going wrong? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239210#msg-239210 From nginx-forum at nginx.us Wed May 15 07:13:36 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Wed, 15 May 2013 03:13:36 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: References: <7a95598f1b0f11645091661565b2e7cf.NginxMailingListEnglish@forum.nginx.org> <37c875f70b89e6bd0f43bffa1a27d249.NginxMailingListEnglish@forum.nginx.org> <9ba275bb59511fc004640e03daa0ee5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <67ef64b1a487a25a5b9a662e82337bdc.NginxMailingListEnglish@forum.nginx.org> I am using php 5.3.3 and trying to install php-fpm and on research i came to know that php-fpm come bundled wirh 5.3.3 but still not able to find it. http://trainingjunction.in/phpinfo.php where am i going wrong? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239211#msg-239211 From nginx-forum at nginx.us Wed May 15 07:16:46 2013 From: nginx-forum at nginx.us (d3f3kt) Date: Wed, 15 May 2013 03:16:46 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> References: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30c9e4f890a70cb0c4d17aed79682a5c.NginxMailingListEnglish@forum.nginx.org> Add those lines to yout /etc/apt/sources.list #php deb http://packages.dotdeb.org squeeze all deb-src http://packages.dotdeb.org squeeze all than run apt-get update apt-get upgrade apt-get install php5-fpm Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239212#msg-239212 From nginx-forum at nginx.us Wed May 15 07:41:08 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Wed, 15 May 2013 03:41:08 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <30c9e4f890a70cb0c4d17aed79682a5c.NginxMailingListEnglish@forum.nginx.org> References: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> <30c9e4f890a70cb0c4d17aed79682a5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Did everything you mentioned above but still it is showing after I run/etc/init.d/php-fpm start -bash: /etc/init.d/php-fpm: No such file or directory Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239213#msg-239213 From mdounin at mdounin.ru Wed May 15 09:18:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 May 2013 13:18:21 +0400 Subject: Can I cache a page conditionally based on cookie value? In-Reply-To: References: Message-ID: <20130515091821.GH69760@mdounin.ru> Hello! On Tue, May 14, 2013 at 09:23:08PM -0400, patng323 wrote: > Hi, > > Using NginX as a reverse proxy, I want to cache the response ONLY if the > user has NOT logged in, which can be checked by testing the existence of a > cookie. > > First I tried to use "if", but NginX complained that I cannot put a > proxy_cache_valid inside an "if": > > location / { > proxy_pass http://www.xyz.com; > if ($cookie_login = '') { > proxy_cache my-cache; > proxy_cache_valid 200 301 302 60m; > } > } > > > Then I tried to set the cache-duration to 0s by using a variable, but then > NginX complained the time value is incorrect. > set $cacheDuration 60m; > if ($cookie_login != '') { > set $cacheDuration 0s; > } > > proxy_cache my-cache; > proxy_cache_valid 200 301 302 $cacheDuration > > > So is there any way to cache conditionally based on the existence of a > cookie? Something like proxy_no_cache $cookie_login; should do the trick. 
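For example, a minimal sketch applied to the location block from the original question (upstream, zone name and times are copied from above; proxy_cache_bypass is an extra assumption, so that logged-in users are not served already-cached copies either):

    location / {
        proxy_pass         http://www.xyz.com;
        proxy_cache        my-cache;
        proxy_cache_valid  200 301 302 60m;

        # do not store responses in the cache when the login cookie is set
        proxy_no_cache     $cookie_login;
        # assumption: also skip serving cached copies to logged-in users
        proxy_cache_bypass $cookie_login;
    }
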
See http://nginx.org/r/proxy_no_cache for details. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed May 15 09:43:44 2013 From: nginx-forum at nginx.us (darshan.choudhary) Date: Wed, 15 May 2013 05:43:44 -0400 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: References: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> <30c9e4f890a70cb0c4d17aed79682a5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9230c7a4a4da74ca09e0b84e0c7664af.NginxMailingListEnglish@forum.nginx.org> darshan.choudhary Wrote: ------------------------------------------------------- > Did everything you mentioned above but still it is showing after I > run/etc/init.d/php-fpm start > > -bash: /etc/init.d/php-fpm: No such file or directory I somehow managed to install fpm and now it is not starting. service php5-fpm restart gives me this error: Restarting PHP5 FastCGI Process Manager: php5-fpm failed! When i see the logs this is latest error that shows up: unable to bind listening socket for address '127.0.0.1:9000': Address already in use (98) Can you please point me anything that is going wrong..!! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239181,239217#msg-239217 From miguelmclara at gmail.com Wed May 15 09:51:27 2013 From: miguelmclara at gmail.com (miguelmclara at gmail.com) Date: Wed, 15 May 2013 10:51:27 +0100 Subject: Nginx giving 502 Bad Gateway in random intervals In-Reply-To: <9230c7a4a4da74ca09e0b84e0c7664af.NginxMailingListEnglish@forum.nginx.org> References: <0df9f5f153e1638d69a586a7baace062.NginxMailingListEnglish@forum.nginx.org> <30c9e4f890a70cb0c4d17aed79682a5c.NginxMailingListEnglish@forum.nginx.org> <9230c7a4a4da74ca09e0b84e0c7664af.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130515095127.4063355.33904.738@gmail.com> An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 15 10:15:31 2013 From: nginx-forum at nginx.us (patng323) Date: Wed, 15 May 2013 06:15:31 -0400 Subject: Can I cache a page conditionally based on cookie value? In-Reply-To: <20130515091821.GH69760@mdounin.ru> References: <20130515091821.GH69760@mdounin.ru> Message-ID: <9face58367206a44de73c8f358720467.NginxMailingListEnglish@forum.nginx.org> Maxim, that's what I need. Thanks a lot! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239207,239220#msg-239220 From svoop at delirium.ch Wed May 15 11:20:26 2013 From: svoop at delirium.ch (Svoop) Date: Wed, 15 May 2013 11:20:26 +0000 (UTC) Subject: error_page without content type References: Message-ID: Richard Stanway writes: > Looks like your 404s are being generated by a backend, not by nginx. You may want to use?fastcgi_intercept_errors /?proxy_intercept_errors. You're right, but since the pages are served via Passenger, these directives won't work. However, I found a way to pass the correct content type in the routing table. Thanks! From andrejaenisch at googlemail.com Wed May 15 13:08:39 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Wed, 15 May 2013 15:08:39 +0200 Subject: error_page without content type In-Reply-To: References: Message-ID: 2013/5/15 Svoop : > However, I found a way to pass the correct content type in the routing table. ? which you may want to share, so that later people can look for a working solution? 
Regards, Andre From g.plumb at gmail.com Wed May 15 15:47:30 2013 From: g.plumb at gmail.com (Gee) Date: Wed, 15 May 2013 16:47:30 +0100 Subject: Nginx + Mono (OpenBSD 5.3) Message-ID: OK - I can confirm that nginx is in fact chroot(8) - After much playing around, I noticed a comment in the ports documentation that mentions this (I know, I know...) So to solve this particular problem, I just moved my socket to /var/www/ and ensured that my user had the appropriate permissions. I am now able to serve a simple aspx - although mvc still remains a challenge. Thanks to everyone who posted! Thanks G -------------- next part -------------- An HTML attachment was scrubbed... URL: From casey at scottmail.org Wed May 15 17:04:43 2013 From: casey at scottmail.org (Casey Scott) Date: Wed, 15 May 2013 10:04:43 -0700 (PDT) Subject: access log time format In-Reply-To: <433869205.390.1368636705646.JavaMail.root@phantombsd.org> References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> Message-ID: <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> Is it possible to format the time Nginx uses in access.log to match this? "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 My goal is to have Nginx access logs match the time format of the rest of our environment so that monitoring tools/dashboards/etc. can adopt Nginx's access logs. From what I've read, it doesn't seem to be possible, but the posts I've seen are pretty old. Casey From cnst++ at FreeBSD.org Wed May 15 16:10:04 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Wed, 15 May 2013 09:10:04 -0700 Subject: access log time format In-Reply-To: <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> Message-ID: Yes, it is possible. See http://nginx.org/r/log_format, or, more specifically, $time_iso8601 C. On 15 May 2013 10:04, Casey Scott wrote: > Is it possible to format the time Nginx uses in access.log to match this? > > "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 > > My goal is to have Nginx access logs match the time format of the rest of our > environment so that monitoring tools/dashboards/etc. can adopt Nginx's > access logs. From what I've read, it doesn't seem to be possible, but the > posts I've seen are pretty old. > > > Casey From casey at scottmail.org Wed May 15 17:13:29 2013 From: casey at scottmail.org (Casey Scott) Date: Wed, 15 May 2013 10:13:29 -0700 (PDT) Subject: access log time format In-Reply-To: References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> Message-ID: <1617072190.412.1368638009520.JavaMail.root@phantombsd.org> >From what I can tell, iso_8601 is a specific format. Do you mean that I can manipulate it? Thanks, Casey ----- Original Message ----- > Yes, it is possible. > > See http://nginx.org/r/log_format, or, more specifically, $time_iso8601 > > C. > > On 15 May 2013 10:04, Casey Scott wrote: > > Is it possible to format the time Nginx uses in access.log to match this? > > > > "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 > > > > My goal is to have Nginx access logs match the time format of the rest of > > our > > environment so that monitoring tools/dashboards/etc. can adopt Nginx's > > access logs. 
From what I've read, it doesn't seem to be possible, but the > > posts I've seen are pretty old. > > > > > > Casey > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From cnst++ at FreeBSD.org Wed May 15 16:18:44 2013 From: cnst++ at FreeBSD.org (Constantine A. Murenin) Date: Wed, 15 May 2013 09:18:44 -0700 Subject: access log time format In-Reply-To: <1617072190.412.1368638009520.JavaMail.root@phantombsd.org> References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> <1617072190.412.1368638009520.JavaMail.root@phantombsd.org> Message-ID: On 15 May 2013 10:13, Casey Scott wrote: > From what I can tell, iso_8601 is a specific format. Do you mean that I can manipulate it? > > Thanks, > Casey Yes, it's a specific format. You seem to like ISO8601, without knowing about it being named so. :p C. > ----- Original Message ----- >> Yes, it is possible. >> >> See http://nginx.org/r/log_format, or, more specifically, $time_iso8601 >> >> C. >> >> On 15 May 2013 10:04, Casey Scott wrote: >> > Is it possible to format the time Nginx uses in access.log to match this? >> > >> > "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 >> > >> > My goal is to have Nginx access logs match the time format of the rest of >> > our >> > environment so that monitoring tools/dashboards/etc. can adopt Nginx's >> > access logs. From what I've read, it doesn't seem to be possible, but the >> > posts I've seen are pretty old. >> > >> > >> > Casey From casey at scottmail.org Wed May 15 17:23:41 2013 From: casey at scottmail.org (Casey Scott) Date: Wed, 15 May 2013 10:23:41 -0700 (PDT) Subject: access log time format In-Reply-To: References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> <1617072190.412.1368638009520.JavaMail.root@phantombsd.org> Message-ID: <1333434002.426.1368638621850.JavaMail.root@phantombsd.org> That format doesn't fulfill my need though. It's close.. but not quite a match. I need this format: ----- Original Message ----- > On 15 May 2013 10:13, Casey Scott wrote: > > From what I can tell, iso_8601 is a specific format. Do you mean that I can > > manipulate it? > > > > Thanks, > > Casey > > Yes, it's a specific format. > > You seem to like ISO8601, without knowing about it being named so. :p > > C. > > > ----- Original Message ----- > >> Yes, it is possible. > >> > >> See http://nginx.org/r/log_format, or, more specifically, $time_iso8601 > >> > >> C. > >> > >> On 15 May 2013 10:04, Casey Scott wrote: > >> > Is it possible to format the time Nginx uses in access.log to match > >> > this? > >> > > >> > "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 > >> > > >> > My goal is to have Nginx access logs match the time format of the rest > >> > of > >> > our > >> > environment so that monitoring tools/dashboards/etc. can adopt Nginx's > >> > access logs. From what I've read, it doesn't seem to be possible, but > >> > the > >> > posts I've seen are pretty old. 
> >> > > >> > > >> > Casey > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From casey at scottmail.org Wed May 15 17:23:57 2013 From: casey at scottmail.org (Casey Scott) Date: Wed, 15 May 2013 10:23:57 -0700 (PDT) Subject: access log time format In-Reply-To: <1333434002.426.1368638621850.JavaMail.root@phantombsd.org> References: <1251138148.361.1368634895847.JavaMail.root@phantombsd.org> <433869205.390.1368636705646.JavaMail.root@phantombsd.org> <1138394829.407.1368637483645.JavaMail.root@phantombsd.org> <1617072190.412.1368638009520.JavaMail.root@phantombsd.org> <1333434002.426.1368638621850.JavaMail.root@phantombsd.org> Message-ID: <86098803.427.1368638637630.JavaMail.root@phantombsd.org> "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 ----- Original Message ----- > That format doesn't fulfill my need though. It's close.. but not quite a > match. I need this format: > > > ----- Original Message ----- > > On 15 May 2013 10:13, Casey Scott wrote: > > > From what I can tell, iso_8601 is a specific format. Do you mean that I > > > can > > > manipulate it? > > > > > > Thanks, > > > Casey > > > > Yes, it's a specific format. > > > > You seem to like ISO8601, without knowing about it being named so. :p > > > > C. > > > > > ----- Original Message ----- > > >> Yes, it is possible. > > >> > > >> See http://nginx.org/r/log_format, or, more specifically, $time_iso8601 > > >> > > >> C. > > >> > > >> On 15 May 2013 10:04, Casey Scott wrote: > > >> > Is it possible to format the time Nginx uses in access.log to match > > >> > this? > > >> > > > >> > "%Y-%m-%d %H:%M:%S" or 2013-05-14 15:40:21 > > >> > > > >> > My goal is to have Nginx access logs match the time format of the rest > > >> > of > > >> > our > > >> > environment so that monitoring tools/dashboards/etc. can adopt Nginx's > > >> > access logs. From what I've read, it doesn't seem to be possible, but > > >> > the > > >> > posts I've seen are pretty old. > > >> > > > >> > > > >> > Casey > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > From nginx-forum at nginx.us Wed May 15 17:11:56 2013 From: nginx-forum at nginx.us (0liver) Date: Wed, 15 May 2013 13:11:56 -0400 Subject: Handling nginx's too many open files even I have the correct ulimit In-Reply-To: References: Message-ID: <82d03f5b8c004ecd5438671cb7e2360c.NginxMailingListEnglish@forum.nginx.org> I'm facing the same problem here, but I found much lower settings on our machine (a VPS running Ubuntu 12.04): # the hard limit of open files www-data at 215247:~$ ulimit -Hn 4096 # the soft limit of open files www-data at 215247:~$ ulimit -Sn 1024 # maximum number of file descriptors enforced on a kernel level # for more info see: http://serverfault.com/q/122679/88043 root at 215247:~# cat /proc/sys/fs/file-max 262144 So I would hope that setting the ulimit values for max open files per user to 262144 (H) and 131072 (S) will help. But then again, Howard Chen mentioned, that he's facing this problem despite such high values on his system. Any suggestions? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,234191,239248#msg-239248 From nginx-forum at nginx.us Thu May 16 07:32:40 2013 From: nginx-forum at nginx.us (apeman) Date: Thu, 16 May 2013 03:32:40 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: References: Message-ID: <93448067999aa47e8062088d40b07611.NginxMailingListEnglish@forum.nginx.org> I met the same problem. Client got http 200 response, but the Content-Length: 0. I never define this. I don't why? Is there anybody know how to fix it? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239252#msg-239252 From stageline at gmail.com Thu May 16 07:32:43 2013 From: stageline at gmail.com (suttya) Date: Thu, 16 May 2013 09:32:43 +0200 Subject: bug? Message-ID: Hy all! I have this error and i don't know how to slove it. https://gist.github.com/Stageline/9a643fa13d040d49f1d1 Short details: I try request url with curl seems everithing OK, getting http 200 code, but if i try this url with external program (look program link down) getting internal server error 500. https://raw.github.com/arut/nginx-rtmp-module/master/ngx_rtmp_netcall_module.c Missing some required http request header or content length wrong? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 16 08:42:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 May 2013 12:42:40 +0400 Subject: bug? In-Reply-To: References: Message-ID: <20130516084240.GM69760@mdounin.ru> Hello! On Thu, May 16, 2013 at 09:32:43AM +0200, suttya wrote: > Hy all! > > I have this error and i don't know how to slove it. > > https://gist.github.com/Stageline/9a643fa13d040d49f1d1 > > Short details: I try request url with curl seems everithing OK, getting > http 200 code, but if i try this url with external program (look program > link down) getting internal server error 500. > > https://raw.github.com/arut/nginx-rtmp-module/master/ngx_rtmp_netcall_module.c > > Missing some required http request header or content length wrong? The response appears to be returned by a fastcgi backend, from php code. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu May 16 08:48:33 2013 From: nginx-forum at nginx.us (apeman) Date: Thu, 16 May 2013 04:48:33 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <93448067999aa47e8062088d40b07611.NginxMailingListEnglish@forum.nginx.org> References: <93448067999aa47e8062088d40b07611.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2a37286e4d6e9767636e65dfc32449d3.NginxMailingListEnglish@forum.nginx.org> It's not always happen, the possibility of this phenomenon is 0.081%. I havn't any idea about this... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239255#msg-239255 From stageline at gmail.com Thu May 16 11:12:52 2013 From: stageline at gmail.com (suttya) Date: Thu, 16 May 2013 13:12:52 +0200 Subject: bug? In-Reply-To: <20130516084240.GM69760@mdounin.ru> References: <20130516084240.GM69760@mdounin.ru> Message-ID: It possible, but nothing error found in php5-fpm.log or phperr.log. If this is php error, why can't reproduce it by load url in my browser or curl? thanks 2013/5/16 Maxim Dounin > Hello! > > On Thu, May 16, 2013 at 09:32:43AM +0200, suttya wrote: > > > Hy all! > > > > I have this error and i don't know how to slove it. 
> > > > https://gist.github.com/Stageline/9a643fa13d040d49f1d1 > > > > Short details: I try request url with curl seems everithing OK, getting > > http 200 code, but if i try this url with external program (look program > > link down) getting internal server error 500. > > > > > https://raw.github.com/arut/nginx-rtmp-module/master/ngx_rtmp_netcall_module.c > > > > Missing some required http request header or content length wrong? > > The response appears to be returned by a fastcgi backend, from php > code. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stageline at gmail.com Thu May 16 11:34:04 2013 From: stageline at gmail.com (suttya) Date: Thu, 16 May 2013 13:34:04 +0200 Subject: bug? In-Reply-To: References: <20130516084240.GM69760@mdounin.ru> Message-ID: CURL test log: https://gist.github.com/Stageline/59f79c82d2c3e84820f5 https://gist.github.com/Stageline/774a8082bc5238896eb9 2013/5/16 suttya > It possible, but nothing error found in php5-fpm.log or phperr.log. > If this is php error, why can't reproduce it by load url in my browser or > curl? > > thanks > > > 2013/5/16 Maxim Dounin > >> Hello! >> >> On Thu, May 16, 2013 at 09:32:43AM +0200, suttya wrote: >> >> > Hy all! >> > >> > I have this error and i don't know how to slove it. >> > >> > https://gist.github.com/Stageline/9a643fa13d040d49f1d1 >> > >> > Short details: I try request url with curl seems everithing OK, getting >> > http 200 code, but if i try this url with external program (look program >> > link down) getting internal server error 500. >> > >> > >> https://raw.github.com/arut/nginx-rtmp-module/master/ngx_rtmp_netcall_module.c >> > >> > Missing some required http request header or content length wrong? >> >> The response appears to be returned by a fastcgi backend, from php >> code. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From griscom at suitable.com Thu May 16 11:41:28 2013 From: griscom at suitable.com (Daniel Griscom) Date: Thu, 16 May 2013 07:41:28 -0400 Subject: Add [nginx] to subject lines on this mailing list? Message-ID: This mailing list is run by Gnu's Mailman application. The default configuration for Mailman adds a "[NameOfMailList]" prefix to the subject of every sent email, e.g. >Subject: [Congregation] Tuesday's Notes This makes it very easy to sort out my inbox, and gives my spam filter something to key on. On the nginx mailing list this has been turned off, so I find it hard to figure out why someone unknown is writing me about "bug?", and often find the list emails in my junk mail folder. I'd like to have the setting turned back on; would that be OK? Thanks, Dan -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From mdounin at mdounin.ru Thu May 16 11:43:02 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 May 2013 15:43:02 +0400 Subject: bug? In-Reply-To: References: <20130516084240.GM69760@mdounin.ru> Message-ID: <20130516114301.GO69760@mdounin.ru> Hello! 
On Thu, May 16, 2013 at 01:12:52PM +0200, suttya wrote: > It possible, but nothing error found in php5-fpm.log or phperr.log. > If this is php error, why can't reproduce it by load url in my browser or > curl? Try checking what goes on in your php code. -- Maxim Dounin http://nginx.org/en/donation.html From stageline at gmail.com Thu May 16 11:57:25 2013 From: stageline at gmail.com (suttya) Date: Thu, 16 May 2013 13:57:25 +0200 Subject: bug? In-Reply-To: <20130516114301.GO69760@mdounin.ru> References: <20130516084240.GM69760@mdounin.ru> <20130516114301.GO69760@mdounin.ru> Message-ID: I don't have any error in my php code. Please explain it? 2013/5/16 Maxim Dounin > Hello! > > On Thu, May 16, 2013 at 01:12:52PM +0200, suttya wrote: > > > It possible, but nothing error found in php5-fpm.log or phperr.log. > > If this is php error, why can't reproduce it by load url in my browser or > > curl? > > Try checking what goes on in your php code. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu May 16 11:59:59 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 16 May 2013 15:59:59 +0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: Message-ID: <5194CA3F.8020601@nginx.com> On 5/16/13 3:41 PM, Daniel Griscom wrote: > This mailing list is run by Gnu's Mailman application. The default > configuration for Mailman adds a "[NameOfMailList]" prefix to the > subject of every sent email, e.g. > >> Subject: [Congregation] Tuesday's Notes > > This makes it very easy to sort out my inbox, and gives my spam > filter something to key on. > > On the nginx mailing list this has been turned off, so I find it > hard to figure out why someone unknown is writing me about "bug?", > and often find the list emails in my junk mail folder. > > I'd like to have the setting turned back on; would that be OK? > Doesn't List-Id header suit your needs? -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From griscom at suitable.com Thu May 16 12:10:35 2013 From: griscom at suitable.com (Daniel Griscom) Date: Thu, 16 May 2013 08:10:35 -0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: <5194CA3F.8020601@nginx.com> References: <5194CA3F.8020601@nginx.com> Message-ID: List-Id isn't shown in my inbox listing, so that doesn't help me when I'm scanning my inbox. Even when I open the email I have to scan the headers to figure out just what the specific email is about. All (almost?) of my other mailing lists follow this convention, which makes sense since every email from the "nginx" mailing list has to do with nginx, but few people bother to put "nginx" in the subject lines of their posts. Without this there's an assumed context for the message that isn't clear from the message subject. So, personally I'd like to have it turned on, but if there's a reason to keep it off then that's fine by me. Dan At 3:59 PM +0400 5/16/13, Maxim Konovalov wrote: >On 5/16/13 3:41 PM, Daniel Griscom wrote: >> This mailing list is run by Gnu's Mailman application. The default >> configuration for Mailman adds a "[NameOfMailList]" prefix to the >> subject of every sent email, e.g. 
>> >>> Subject: [Congregation] Tuesday's Notes >> >> This makes it very easy to sort out my inbox, and gives my spam >> filter something to key on. >> >> On the nginx mailing list this has been turned off, so I find it >> hard to figure out why someone unknown is writing me about "bug?", >> and often find the list emails in my junk mail folder. >> >> I'd like to have the setting turned back on; would that be OK? > > >Doesn't List-Id header suit your needs? > >-- >Maxim Konovalov >+7 (910) 4293178 >http://nginx.com/services.html -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From mdounin at mdounin.ru Thu May 16 12:59:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 May 2013 16:59:10 +0400 Subject: bug? In-Reply-To: References: <20130516084240.GM69760@mdounin.ru> <20130516114301.GO69760@mdounin.ru> Message-ID: <20130516125910.GP69760@mdounin.ru> Hello! On Thu, May 16, 2013 at 01:57:25PM +0200, suttya wrote: > I don't have any error in my php code. Please explain it? Again: the error is returned by a php backend. I.e., the error is returned by either php itself or your code. Adding some debug logging to your php code might help to track the problem. In any case it's doesn't looks like an nginx problem. -- Maxim Dounin http://nginx.org/en/donation.html From jim at ohlste.in Thu May 16 13:18:01 2013 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 16 May 2013 09:18:01 -0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: <5194CA3F.8020601@nginx.com> Message-ID: <5194DC89.1050905@ohlste.in> On 05/16/13 08:10, Daniel Griscom wrote: > List-Id isn't shown in my inbox listing, so that doesn't help me when > I'm scanning my inbox. Even when I open the email I have to scan the > headers to figure out just what the specific email is about. > > All (almost?) of my other mailing lists follow this convention, which > makes sense since every email from the "nginx" mailing list has to do > with nginx, but few people bother to put "nginx" in the subject lines of > their posts. Without this there's an assumed context for the message > that isn't clear from the message subject. I think what Maxim was alluding to is that any decent email client will sort messages for you based on headers if you set it do do so. This way you don't need to scan your entire inbox for messages from a particular list and the "assumed context" can be a somewhat safe assumption. Since you mention the conventions followed in other mailing lists, and you read this one, perhaps you should note that top posting is discouraged on this list, and messages are answered inline by the developers (as your original one was). Just a thought. > > > So, personally I'd like to have it turned on, but if there's a reason to > keep it off then that's fine by me. > > > Dan > > > At 3:59 PM +0400 5/16/13, Maxim Konovalov wrote: >> On 5/16/13 3:41 PM, Daniel Griscom wrote: >>> This mailing list is run by Gnu's Mailman application. The default >>> configuration for Mailman adds a "[NameOfMailList]" prefix to the >>> subject of every sent email, e.g. >>> >>>> Subject: [Congregation] Tuesday's Notes >>> >>> This makes it very easy to sort out my inbox, and gives my spam >>> filter something to key on. >>> >>> On the nginx mailing list this has been turned off, so I find it >>> hard to figure out why someone unknown is writing me about "bug?", >>> and often find the list emails in my junk mail folder. 
>>> >>> I'd like to have the setting turned back on; would that be OK? >> > >> Doesn't List-Id header suit your needs? >> >> -- >> Maxim Konovalov >> +7 (910) 4293178 >> http://nginx.com/services.html > > -- Jim Ohlstein From griscom at suitable.com Thu May 16 13:26:24 2013 From: griscom at suitable.com (Daniel Griscom) Date: Thu, 16 May 2013 09:26:24 -0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: <5194DC89.1050905@ohlste.in> References: <5194CA3F.8020601@nginx.com> <5194DC89.1050905@ohlste.in> Message-ID: At 9:18 AM -0400 5/16/13, Jim Ohlstein wrote: >On 05/16/13 08:10, Daniel Griscom wrote: >>List-Id isn't shown in my inbox listing, so that doesn't help me when >>I'm scanning my inbox. Even when I open the email I have to scan the >>headers to figure out just what the specific email is about. >> >>All (almost?) of my other mailing lists follow this convention, which >>makes sense since every email from the "nginx" mailing list has to do >>with nginx, but few people bother to put "nginx" in the subject lines of >>their posts. Without this there's an assumed context for the message >>that isn't clear from the message subject. > >I think what Maxim was alluding to is that any decent email client >will sort messages for you based on headers if you set it do do so. >This way you don't need to scan your entire inbox for messages from >a particular list and the "assumed context" can be a somewhat safe >assumption. OK; I haven't seen an email client like that, but if that's most people's experience then that's fine by me. >Since you mention the conventions followed in other mailing lists, >and you read this one, perhaps you should note that top posting is >discouraged on this list, and messages are answered inline by the >developers (as your original one was). Just a thought. OK, will do. Dan >> >> >>So, personally I'd like to have it turned on, but if there's a reason to >>keep it off then that's fine by me. >> >> >>Dan >> >> >>At 3:59 PM +0400 5/16/13, Maxim Konovalov wrote: >>>On 5/16/13 3:41 PM, Daniel Griscom wrote: >>>> This mailing list is run by Gnu's Mailman application. The default >>>> configuration for Mailman adds a "[NameOfMailList]" prefix to the >>>> subject of every sent email, e.g. >>>> >>>>> Subject: [Congregation] Tuesday's Notes >>>> >>>> This makes it very easy to sort out my inbox, and gives my spam >>>> filter something to key on. >>>> >>>> On the nginx mailing list this has been turned off, so I find it >>>> hard to figure out why someone unknown is writing me about "bug?", >>>> and often find the list emails in my junk mail folder. >>>> >>>> I'd like to have the setting turned back on; would that be OK? >>> > >>>Doesn't List-Id header suit your needs? >>> >>>-- >>>Maxim Konovalov >>>+7 (910) 4293178 >>>http://nginx.com/services.html >> >> > > >-- >Jim Ohlstein > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From lists at necoro.eu Thu May 16 13:34:14 2013 From: lists at necoro.eu (=?ISO-8859-1?Q?Ren=E9_Neumann?=) Date: Thu, 16 May 2013 15:34:14 +0200 Subject: Add [nginx] to subject lines on this mailing list? 
In-Reply-To: <5194DC89.1050905@ohlste.in> References: <5194CA3F.8020601@nginx.com> <5194DC89.1050905@ohlste.in> Message-ID: <5194E056.4020107@necoro.eu> Am 16.05.2013 15:18, schrieb Jim Ohlstein: > I think what Maxim was alluding to is that any decent email client will > sort messages for you based on headers if you set it do do so. This way > you don't need to scan your entire inbox for messages from a particular > list and the "assumed context" can be a somewhat safe assumption. As an alternative, use a mail-server which supports server-side sorting. For example using Sieve. - Ren? From griscom at suitable.com Thu May 16 15:07:28 2013 From: griscom at suitable.com (Daniel Griscom) Date: Thu, 16 May 2013 11:07:28 -0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: <5194E056.4020107@necoro.eu> References: <5194CA3F.8020601@nginx.com> <5194DC89.1050905@ohlste.in> <5194E056.4020107@necoro.eu> Message-ID: At 3:34 PM +0200 5/16/13, Ren? Neumann wrote: >Am 16.05.2013 15:18, schrieb Jim Ohlstein: >> I think what Maxim was alluding to is that any decent email client will >> sort messages for you based on headers if you set it do do so. This way >> you don't need to scan your entire inbox for messages from a particular >> list and the "assumed context" can be a somewhat safe assumption. > >As an alternative, use a mail-server which supports server-side sorting. >For example using Sieve. Sorry; I didn't think my suggestion would be all that controversial. As a data point, I checked through my email archive for Mailman-based mailing list messages which had or didn't have a [listName] subject prefix: - 2288 messages with a [listName] subject prefix - 20 messages without a [listName] subject prefix, of which 15 were nginx postings So, omitting the prefix is an unusual choice, but if it's necessary then that's fine. Thanks for responding, Dan -- Daniel T. Griscom griscom at suitable.com Suitable Systems http://www.suitable.com/ 1 Centre Street, Suite 204 (781) 665-0053 Wakefield, MA 01880-2400 From lists-nginx at swsystem.co.uk Thu May 16 15:35:58 2013 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 16 May 2013 16:35:58 +0100 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: <5194CA3F.8020601@nginx.com> <5194DC89.1050905@ohlste.in> <5194E056.4020107@necoro.eu> Message-ID: On 2013-05-16 16:07, Daniel Griscom wrote: > At 3:34 PM +0200 5/16/13, Ren? Neumann wrote: > Am 16.05.2013 15:18, schrieb Jim Ohlstein: > I think what Maxim was alluding to is that any decent email client will > sort messages for you based on headers if you set it do do so. This way > you don't need to scan your entire inbox for messages from a particular > list and the "assumed context" can be a somewhat safe assumption. > > As an alternative, use a mail-server which supports server-side > sorting. > For example using Sieve. > > Sorry; I didn't think my suggestion would be all that controversial. > As a data point, I checked through my email archive for Mailman-based > mailing list messages which had or didn't have a [listName] subject > prefix: > > - 2288 messages with a [listName] subject prefix > > - 20 messages without a [listName] subject prefix, of which 15 were > nginx postings I can't believe that you've got what looks like 2000+ emails hitting your inbox each day and you're not using anything to filter them into folders. 
I've got to the point now where I perhaps have 3-4 emails a week into my inbox but 1000's scattered into various folders for mailing lists and even down to certain people having their own filters. Personally I don't mind if the subject has the mailman [listname] or not, as long as there's some way for me to filter it. Filtering into folders also means I can choose when I want to look at certain emails. I may want to read one from my accountant, about my tax return, before scanning through the kernel mailing list or nginx for example. > > > So, omitting the prefix is an unusual choice, but if it's necessary > then that's fine. > > > Thanks for responding, > Dan From nginx-forum at nginx.us Thu May 16 16:02:25 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 16 May 2013 12:02:25 -0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: Message-ID: +1 i use a normal email-client in the office that sorts nginx-ml-mails (and any other ml i'm subscribed to) into folders, but when i'm abroad or in a client's office i'm usually use a webmail-client, and skimming over my mails is much easier with an [nginx]; a mail-subject like "Debian Package" could also be from a customer,, while "[nginx] Debian Package" would kinda be ignored. or something like " bug?" inb4 server-based-filters: not always an option in a company-environment or with hosted mails. what's comfortable for alice is not always fine for bob. my 2 cent regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239264,239278#msg-239278 From andrejaenisch at googlemail.com Thu May 16 18:56:00 2013 From: andrejaenisch at googlemail.com (Andre Jaenisch) Date: Thu, 16 May 2013 20:56:00 +0200 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: Message-ID: Why you don't simply filter by adress? nginx at nginx.org seems pretty unique to me ? From lists at necoro.eu Thu May 16 23:53:33 2013 From: lists at necoro.eu (=?ISO-8859-1?Q?Ren=E9_Neumann?=) Date: Fri, 17 May 2013 01:53:33 +0200 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: Message-ID: <5195717D.2070503@necoro.eu> Am 16.05.2013 18:02, schrieb mex: > inb4 server-based-filters: not always an option in a company-environment or > with hosted mails. > > what's comfortable for alice is not always fine for bob. That's true, of course. My thoughts were just along the line of "people caring about webservers have a higher probability of running (or being able to run) their own mailserver". And also I just wanted to point out an alternative, when you happen to have a client without filters (like webmail). - Ren? From jaderhs5 at gmail.com Fri May 17 12:45:56 2013 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Fri, 17 May 2013 09:45:56 -0300 Subject: Handling nginx's too many open files even I have the correct ulimit In-Reply-To: <82d03f5b8c004ecd5438671cb7e2360c.NginxMailingListEnglish@forum.nginx.org> References: <82d03f5b8c004ecd5438671cb7e2360c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Try setting worker_rlimit_nofile Jader H. 
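For example, a minimal sketch of the relevant nginx.conf settings (the numbers are only illustrative and are not taken from this thread; size them to your own traffic):

    # main context: raise the per-worker open-file limit (RLIMIT_NOFILE)
    # without relying on the shell's ulimit values
    worker_rlimit_nofile  65536;

    events {
        # each proxied connection can use two descriptors (client + upstream),
        # so keep this comfortably below worker_rlimit_nofile
        worker_connections  16384;
    }
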
2013/5/15 0liver > I'm facing the same problem here, but I found much lower settings on our > machine (a VPS running Ubuntu 12.04): > > # the hard limit of open files > www-data at 215247:~$ ulimit -Hn > 4096 > # the soft limit of open files > www-data at 215247:~$ ulimit -Sn > 1024 > # maximum number of file descriptors enforced on a kernel level > # for more info see: http://serverfault.com/q/122679/88043 > root at 215247:~# cat /proc/sys/fs/file-max > 262144 > > So I would hope that setting the ulimit values for max open files per user > to 262144 (H) and 131072 (S) will help. > > But then again, Howard Chen mentioned, that he's facing this problem > despite > such high values on his system. > > Any suggestions? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,234191,239248#msg-239248 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruno.premont at restena.lu Fri May 17 12:57:00 2013 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Fri, 17 May 2013 14:57:00 +0200 Subject: Choosing source-address for upstream connections Message-ID: <20130517145700.11b00506@pluto.restena.lu> Is there a way to tell nginx to use a specific address when talking to an upstream? I would like to do something like: upstream bla { server [D0C::1234]:8080 src [D0C::beaf]; server 1.2.3.4:8080 src 1.2.3.5; server upstream.example.tld src site.example.tld; } My system has multiple local addresses and just one of them should be used to contact the given upstream. If host-name is provided the source address should be chosen of the same address family as the upstream address (so it's not necessary to explicitly state IP addresses and duplicate server entries for IPv4 and IPv6). It's possible to do it via "src" attribute of routes though it would be more clean to do it on the application side. e.g. ip route add D0C::1234/128 src D0C::beaf ethX ip route add 1.2.3.4/32 src 1.2.3.5 Bruno From mdounin at mdounin.ru Fri May 17 13:04:11 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 May 2013 17:04:11 +0400 Subject: Add [nginx] to subject lines on this mailing list? In-Reply-To: References: <5194CA3F.8020601@nginx.com> <5194DC89.1050905@ohlste.in> <5194E056.4020107@necoro.eu> Message-ID: <20130517130411.GV69760@mdounin.ru> Hello! On Thu, May 16, 2013 at 11:07:28AM -0400, Daniel Griscom wrote: > At 3:34 PM +0200 5/16/13, Ren? Neumann wrote: > >Am 16.05.2013 15:18, schrieb Jim Ohlstein: > >> I think what Maxim was alluding to is that any decent email client will > >> sort messages for you based on headers if you set it do do so. This way > >> you don't need to scan your entire inbox for messages from a particular > >> list and the "assumed context" can be a somewhat safe assumption. > > > >As an alternative, use a mail-server which supports server-side sorting. > >For example using Sieve. > > Sorry; I didn't think my suggestion would be all that controversial. > As a data point, I checked through my email archive for > Mailman-based mailing list messages which had or didn't have a > [listName] subject prefix: > > - 2288 messages with a [listName] subject prefix > > - 20 messages without a [listName] subject prefix, of which 15 were > nginx postings > > > So, omitting the prefix is an unusual choice, but if it's necessary > then that's fine. 
>From about 10 mailing lists I'm subscribed to (nginx, memcached, mercurial, various freebsd lists, ...) only nginx-announce@ and nginx-ru-announce@ has prefix added. So from my point of view prefix isn't something common, and mostly used for low-traffic lists. Overral I don't think that adding a prefix for nginx@ (nginx-ru@, nginx-devel@) will make me happy. -- Maxim Dounin http://nginx.org/en/donation.html From ru at nginx.com Fri May 17 13:17:20 2013 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 17 May 2013 17:17:20 +0400 Subject: Choosing source-address for upstream connections In-Reply-To: <20130517145700.11b00506@pluto.restena.lu> References: <20130517145700.11b00506@pluto.restena.lu> Message-ID: <20130517131720.GE25959@lo0.su> On Fri, May 17, 2013 at 02:57:00PM +0200, Bruno Pr?mont wrote: > Is there a way to tell nginx to use a specific address when talking to > an upstream? > > I would like to do something like: > > upstream bla { > server [D0C::1234]:8080 src [D0C::beaf]; > server 1.2.3.4:8080 src 1.2.3.5; > server upstream.example.tld src site.example.tld; > } > > > My system has multiple local addresses and just one of them should be > used to contact the given upstream. > > > If host-name is provided the source address should be chosen of the > same address family as the upstream address (so it's not necessary to > explicitly state IP addresses and duplicate server entries for IPv4 > and IPv6). > > > It's possible to do it via "src" attribute of routes though it would be > more clean to do it on the application side. > e.g. > ip route add D0C::1234/128 src D0C::beaf ethX > ip route add 1.2.3.4/32 src 1.2.3.5 http://nginx.org/r/proxy_bind This is the only option so far. Per-server per-upstream source address selection isn't currently possible. From bruno.premont at restena.lu Fri May 17 13:27:42 2013 From: bruno.premont at restena.lu (Bruno =?UTF-8?B?UHLDqW1vbnQ=?=) Date: Fri, 17 May 2013 15:27:42 +0200 Subject: Choosing source-address for upstream connections In-Reply-To: <20130517131720.GE25959@lo0.su> References: <20130517145700.11b00506@pluto.restena.lu> <20130517131720.GE25959@lo0.su> Message-ID: <20130517152742.7755c2ac@pluto.restena.lu> On Fri, 17 May 2013 17:17:20 +0400 Ruslan Ermilov wrote: > On Fri, May 17, 2013 at 02:57:00PM +0200, Bruno Pr?mont wrote: > > Is there a way to tell nginx to use a specific address when talking to > > an upstream? > > > > I would like to do something like: > > > > upstream bla { > > server [D0C::1234]:8080 src [D0C::beaf]; > > server 1.2.3.4:8080 src 1.2.3.5; > > server upstream.example.tld src site.example.tld; > > } > > > > > > My system has multiple local addresses and just one of them should be > > used to contact the given upstream. > > > > > > If host-name is provided the source address should be chosen of the > > same address family as the upstream address (so it's not necessary to > > explicitly state IP addresses and duplicate server entries for IPv4 > > and IPv6). > > > > > > It's possible to do it via "src" attribute of routes though it would be > > more clean to do it on the application side. > > e.g. > > ip route add D0C::1234/128 src D0C::beaf ethX > > ip route add 1.2.3.4/32 src 1.2.3.5 > > http://nginx.org/r/proxy_bind > > This is the only option so far. Per-server per-upstream source address > selection isn't currently possible. How does that one behave with regard to IPv4 versus IPv6? 
The documentation says "address" which I guess can be either IPv4 or IPv6 but not both so the right one gets choosen depending on the destination address format? Or can I write it twice, once with IPv4 addr, once with IPv6 addr (ideally once with a hostname that gets resolved as appropriate). As it can be set on per-server or per-location level it would be sufficient for me. Bruno From emailgrant at gmail.com Sat May 18 00:16:29 2013 From: emailgrant at gmail.com (Grant) Date: Fri, 17 May 2013 17:16:29 -0700 Subject: Permissions check Message-ID: I just updated nginx and was warned about permissions. Are these appropriate: /var/log/nginx: drwxr-x--- root root /var/lib/nginx/tmp and /var/lib/nginx/tmp/*: drwx------ nginx nginx - Grant From emailgrant at gmail.com Sat May 18 01:34:43 2013 From: emailgrant at gmail.com (Grant) Date: Fri, 17 May 2013 18:34:43 -0700 Subject: Permissions check In-Reply-To: References: Message-ID: > I just updated nginx and was warned about permissions. Are these appropriate: > > /var/log/nginx: > drwxr-x--- root root > > /var/lib/nginx/tmp and /var/lib/nginx/tmp/*: > drwx------ nginx nginx > > - Grant Whoops, please make that: /var/lib/nginx/tmp and /var/lib/nginx/tmp/*: drwx------ apache nginx With nginx running as user "apache". - Grant From nginx-forum at nginx.us Sat May 18 01:41:09 2013 From: nginx-forum at nginx.us (akurilin) Date: Fri, 17 May 2013 21:41:09 -0400 Subject: 404s logged in error.log? Message-ID: I was wondering if someone could confirm that requests resulting in a 404 response are by default logged to error.log at error level "error". Is that normal, or is there some piece of configuration I am missing that will stop them from being logged to error.log? I figured 404s would be an un-exceptional event that doesn't require error logging, but perhaps I'm simply not handling that situation correctly in my configuration. Worth double-checking. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239307,239307#msg-239307 From daniel.black at openquery.com Sat May 18 02:06:01 2013 From: daniel.black at openquery.com (Daniel Black) Date: Sat, 18 May 2013 12:06:01 +1000 (EST) Subject: 404s logged in error.log? In-Reply-To: Message-ID: <1380409889.6053.1368842761792.JavaMail.root@zimbra.lentz.com.au> ----- Original Message ----- > I was wondering if someone could confirm that requests resulting in a > 404 > response are by default logged to error.log at error level "error". > > Is that normal, or is there some piece of configuration I am missing > that > will stop them from being logged to error.log? I figured 404s would be > an > un-exceptional event that doesn't require error logging, Correct. > but perhaps > I'm > simply not handling that situation correctly in my configuration. > Worth > double-checking. > 4xx responses are a client errors and don't go in the error.log 5xx are server errors which do go in the error log. From piotr at cloudflare.com Sat May 18 02:10:10 2013 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 17 May 2013 19:10:10 -0700 Subject: 404s logged in error.log? In-Reply-To: References: Message-ID: Hey, > Is that normal, or is there some piece of configuration I am missing that > will stop them from being logged to error.log? I figured 404s would be an > un-exceptional event that doesn't require error logging, but perhaps I'm > simply not handling that situation correctly in my configuration. Worth > double-checking. The default behavior is to log such events. 
You can change it with the "log_not_found" directive: http://nginx.org/r/log_not_found Best regards, Piotr Sikora From nginx-forum at nginx.us Sat May 18 02:47:24 2013 From: nginx-forum at nginx.us (akurilin) Date: Fri, 17 May 2013 22:47:24 -0400 Subject: 404s logged in error.log? In-Reply-To: References: Message-ID: I might have misread the two answers here, but I get the impression that they're saying the exact opposite of each other. Here's a sample (redacted) error message I'm seeing in error.log when doing a GET on a file that doesn't exist: 2013/05/18 02:21:27 [error] 11619#0: *417 open() "/var/www/mysite/foo.html" failed (2: No such file or directory), client: 123.123.123.123, server: my.server.com, request: "GET /foo.html HTTP/1.1", host: "my.server.com" Just to confirm, should I be seeing the error message above in error.log, or did I misconfigure something? I can see a corresponding 404 being logged in access.log. Thanks again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239307,239310#msg-239310 From daniel.black at openquery.com Sat May 18 03:23:21 2013 From: daniel.black at openquery.com (Daniel Black) Date: Sat, 18 May 2013 13:23:21 +1000 (EST) Subject: 404s logged in error.log? In-Reply-To: Message-ID: <544987218.6057.1368847401478.JavaMail.root@zimbra.lentz.com.au> ----- Original Message ----- > I might have misread the two answers here, but I get the impression > that > they're saying the exact opposite of each other. If you analysed the responses and what you have I suspect you would of realised that I was in error and the information you have and the directive highlighted by Piotr gives you the ability to manipulate the logging to what you want. > Here's a sample > (redacted) > error message I'm seeing in error.log when doing a GET on a file that > doesn't exist: > > 2013/05/18 02:21:27 [error] 11619#0: *417 open() > "/var/www/mysite/foo.html" > failed (2: No such file or directory), client: 123.123.123.123, > server: > my.server.com, request: "GET /foo.html HTTP/1.1", host: > "my.server.com" > > Just to confirm, should I be seeing the error message above in > error.log, or did I misconfigure something? I can see a corresponding 404 being > logged in access.log. A misconfigure assessment depends on what you want. The logs highlight that 404 do appear in the error log though this doesn't need to be the case ( http://nginx.org/en/docs/http/ngx_http_core_module.html#log_not_found ) From nginx-forum at nginx.us Sat May 18 03:51:02 2013 From: nginx-forum at nginx.us (akurilin) Date: Fri, 17 May 2013 23:51:02 -0400 Subject: 404s logged in error.log? In-Reply-To: <544987218.6057.1368847401478.JavaMail.root@zimbra.lentz.com.au> References: <544987218.6057.1368847401478.JavaMail.root@zimbra.lentz.com.au> Message-ID: <510d7e06e61bca6fade3adbc87df447e.NginxMailingListEnglish@forum.nginx.org> Perfect, that clarified it, thank you. I will turn off log_not_found and stick to having 404s just in my access logs, to be on the safe side. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239307,239312#msg-239312 From nginx-forum at nginx.us Sat May 18 12:50:09 2013 From: nginx-forum at nginx.us (spdyg) Date: Sat, 18 May 2013 08:50:09 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: References: Message-ID: <3c089c96f08759844fbe49bd979e804d.NginxMailingListEnglish@forum.nginx.org> Do you have SPDY enabled? I have experienced issues with SPDY, upstream keepalive and proxy_cache together. Ended up having to turn off SPDY. 
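For reference, a minimal sketch of what turning SPDY off looks like in the 1.3.15+/1.4.x line, where SPDY is just a parameter of the listen directive (the server name and certificate paths below are placeholders):

    server {
        # before: listen 443 ssl spdy;
        listen 443 ssl;                          # SPDY disabled, plain HTTPS only
        server_name example.com;                 # placeholder
        ssl_certificate     /path/to/cert.pem;   # placeholder
        ssl_certificate_key /path/to/key.pem;    # placeholder
    }
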
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239314#msg-239314 From nginx-forum at nginx.us Sat May 18 12:52:11 2013 From: nginx-forum at nginx.us (roysmith649) Date: Sat, 18 May 2013 08:52:11 -0400 Subject: Merging equivalent cache keys? Message-ID: <8b77ee39e1d8a8df393b411a43e29f9e.NginxMailingListEnglish@forum.nginx.org> We've got a route which is used to retrieve multiple objects in parallel. The client does a GET on /api/1/station/multi?id=123&id=456&id=789. We cache these in our nginx config: location ~ /api/[^/]+/station/multi { proxy_pass http://localhost:8000; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_cache api; proxy_cache_use_stale updating; } The problem is, clients present the ids in random order. For example, one client might ask for id=1&id=2, and another ask for id=2&id=1. Both should return exactly the same response, but map to different cache keys. For two ids, it's not that bad, but many of the calls are for large numbers of ids and the combinatorics quickly spin out of control. Is there any way to rewrite the keys in nginx to canonicalize them? Sorting all the ids in numerical order would do it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239315,239315#msg-239315 From nginx-forum at nginx.us Sat May 18 13:02:14 2013 From: nginx-forum at nginx.us (spdyg) Date: Sat, 18 May 2013 09:02:14 -0400 Subject: The truth about gzip_buffers? Message-ID: <14e6cb554daaa2cfd0ca54cb26c1e9a2.NginxMailingListEnglish@forum.nginx.org> Reading the docs on nginx.org and searching around, it seems there's no consensus on how we should configure gzip_buffers. Some guides say that the total buffer needs to be greater than any file you want to gzip, or it will either not gzip the file or truncate it (I'm sure this is not true though!). Other guides suggest arbitrary values. I guess I have a couple of questions to clear up this confusion: 1. What happens if the response is greater than the total gzip buffer. Will it simply keep the upstream connection open longer while it fills, gzips and transmits the buffers multiple times? 2. If your page size is 4k, does that mean for best efficiency shoudl you keep the size of the buffers to 4k, but just increase the total number of buffers? What is the consideration here? 3. Would having larger buffer sizes potentially allow greater compression because each buffer is compressed individually? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239316,239316#msg-239316 From appa at perusio.net Sat May 18 13:45:12 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Sat, 18 May 2013 15:45:12 +0200 Subject: Merging equivalent cache keys? In-Reply-To: <8b77ee39e1d8a8df393b411a43e29f9e.NginxMailingListEnglish@forum.nginx.org> References: <8b77ee39e1d8a8df393b411a43e29f9e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Using the embedded Lua module you can add the three IDs as integers and reduce it to a single number. That way you'll get a single number by virtue if the commutativity of real number addition. AFAIK there are no arithmetic operators available on the Nginx config language. ----appa On Sat, May 18, 2013 at 2:52 PM, roysmith649 wrote: > We've got a route which is used to retrieve multiple objects in parallel. > The client does a GET on /api/1/station/multi?id=123&id=456&id=789. 
We > cache these in our nginx config: > > location ~ /api/[^/]+/station/multi { > proxy_pass http://localhost:8000; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header Host $host; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_cache api; > proxy_cache_use_stale updating; > } > > The problem is, clients present the ids in random order. For example, one > client might ask for id=1&id=2, and another ask for id=2&id=1. Both should > return exactly the same response, but map to different cache keys. For two > ids, it's not that bad, but many of the calls are for large numbers of ids > and the combinatorics quickly spin out of control. > > Is there any way to rewrite the keys in nginx to canonicalize them? > Sorting > all the ids in numerical order would do it. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239315,239315#msg-239315 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat May 18 17:31:50 2013 From: nginx-forum at nginx.us (vlad031@binkmail.com) Date: Sat, 18 May 2013 13:31:50 -0400 Subject: valid_referers dynamic hostname Message-ID: <6da1f35e6bd01276e1d2e20c67283168.NginxMailingListEnglish@forum.nginx.org> Sorry for posting here - don't know for sure if it's the right place. I have an issue: 1) I use nginx as reverse proxy, but I don't always know the domain name for which I'm serving, so my setup looks like this: server_name _ $host 0.0.0.0; 2) I try to block invalid referers but when I try to add $host to valid_referers - it doesn't seem to work: valid_referers none blocked server_names $host ~\.google\. ~\.yahoo\. ~\.bing\. ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; How can I make this work? Also please note that I don't know regexp. Kind regards, Vlad Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239318,239318#msg-239318 From rva at onvaoo.com Sun May 19 15:51:39 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Sun, 19 May 2013 17:51:39 +0200 Subject: nginx cache opendir failed Message-ID: Hi On a Freebsd 9.1 machine Nginx 1.4.1 opendir() "/var/cache/www/nginx2" failed (13: Permission denied) microcache dir : drwxr-x--x 3 root wheel 512 May 19 17:05 /var/cache drwxr-xr-x 3 www www 512 May 19 17:06 /var/cache/www drwx------ 2 www www 512 May 19 17:06 nginx2 How to correct this ? Thanks. From nginx-forum at nginx.us Sun May 19 16:43:37 2013 From: nginx-forum at nginx.us (epsilon2930) Date: Sun, 19 May 2013 12:43:37 -0400 Subject: [nginx] 1.4.1 + spdy + centos 6 + openssl-1.0.1e (static), firefox 21 ajax requests ssl spdy = segfault Message-ID: <9d67265efaaf2c8cf15e906548ebb2aa.NginxMailingListEnglish@forum.nginx.org> Hello, on one of my servers, nginx suddenly started crashing on some AJAX-heavy pages when accessed via SSL+SPDY. It seems to happen only when Firefox is the client (tested with Firefox 21), latest version of chrome uses SPDY without crashing. 
uname -a: Linux myserver.com 2.6.32-358.6.2.el6.x86_64 #1 SMP Thu May 16 20:59:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux nginx compile flags: CFLAGS="-g -O0" ./configure --with-pcre=/usr/local/src/nginx-1.4.1/pcre-8.32 --sbin-path=/usr/local/sbin --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-http_realip_module --with-http_ssl_module --with-openssl=/usr/local/src/nginx-1.4.1/openssl-1.0.1e --with-http_spdy_module --http-client-body-temp-path=/tmp/nginx_client --http-proxy-temp-path=/tmp/nginx_proxy --http-fastcgi-temp-path=/tmp/nginx_fastcgi --with-http_stub_status_module --with-debug nginx log when crash happens: 2013/05/19 18:05:58 [notice] 26737#0: start worker process 26899 2013/05/19 18:05:58 [notice] 26737#0: signal 29 (SIGIO) received 2013/05/19 18:05:59 [notice] 26737#0: signal 17 (SIGCHLD) received 2013/05/19 18:05:59 [alert] 26737#0: worker process 26897 exited on signal 11 (core dumped) 2013/05/19 18:05:59 [notice] 26737#0: start worker process 26907 2013/05/19 18:05:59 [notice] 26737#0: signal 29 (SIGIO) received 2013/05/19 18:06:00 [notice] 26737#0: signal 17 (SIGCHLD) received 2013/05/19 18:06:00 [alert] 26737#0: worker process 26899 exited on signal 11 (core dumped) 2013/05/19 18:06:00 [notice] 26737#0: start worker process 26909 2013/05/19 18:06:00 [notice] 26737#0: signal 29 (SIGIO) received nginx.conf http://pastebin.com/G9wAgyeh gdb backtrace: # gdb /usr/local/sbin/nginx core.26899 ... snip gpl stuff ... Reading symbols from /usr/local/sbin/nginx...done. [New Thread 26899] Missing separate debuginfo for Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/50/fc20fea18a6f375789f0f86e28f463d50714fd Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libcrypt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libcrypt.so.1 Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done. Loaded symbols for /lib64/libz.so.1 Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libfreebl3.so...(no debugging symbols found)...done. Loaded symbols for /lib64/libfreebl3.so Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/libnss_files.so.2 Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. 
#0 0x0000003455283c56 in __memset_sse2 () from /lib64/libc.so.6 Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64 zlib-1.2.3-29.el6.x86_64 (gdb) bt #0 0x0000003455283c56 in __memset_sse2 () from /lib64/libc.so.6 #1 0x0000000000493a67 in ngx_http_spdy_state_data (sc=0x3035ba0, pos=0x37c78f8 "", end=0x37c78f8 "") at src/http/ngx_http_spdy.c:1193 #2 0x0000000000492673 in ngx_http_spdy_state_head (sc=0x3035ba0, pos=0x37c78f8 "", end=0x37c78f8 "") at src/http/ngx_http_spdy.c:699 #3 0x00000000004919e2 in ngx_http_spdy_read_handler (rev=0x7f0318ffe3b8) at src/http/ngx_http_spdy.c:364 #4 0x000000000042ac31 in ngx_event_process_posted (cycle=0x2893a30, posted=0x8d1b68) at src/event/ngx_event_posted.c:40 #5 0x000000000042887c in ngx_process_events_and_timers (cycle=0x2893a30) at src/event/ngx_event.c:276 #6 0x0000000000435ebd in ngx_worker_process_cycle (cycle=0x2893a30, data=0x1) at src/os/unix/ngx_process_cycle.c:807 #7 0x00000000004327ca in ngx_spawn_process (cycle=0x2893a30, proc=0x435cf7 , data=0x1, name=0x609c9b "worker process", respawn=1) at src/os/unix/ngx_process.c:198 #8 0x0000000000435906 in ngx_reap_children (cycle=0x2893a30) at src/os/unix/ngx_process_cycle.c:619 #9 0x00000000004345ed in ngx_master_process_cycle (cycle=0x2893a30) at src/os/unix/ngx_process_cycle.c:180 #10 0x00000000004041b6 in main (argc=3, argv=0x7fffb6c2dbd8) at src/core/nginx.c:412 Server has a Core i3 540 with HT, OS is 64-bit CentOS 6 fully patched (as of date of this message). - kernel log when error occurred: May 19 18:06:00 saruman kernel: nginx[26899]: segfault at 0 ip 0000003455283c56 sp 00007fffb6c2d498 error 6 in libc-2.12.so[3455200000+18a000] The crash is highly reproducible and when it crashes the ip and sp parameters and offsets are always the same. I hope I've posted enough info for devs to fix this, sorry for the long message. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239327,239327#msg-239327 From phq at silentorbit.com Sun May 19 19:50:02 2013 From: phq at silentorbit.com (Peter Hultqvist) Date: Sun, 19 May 2013 21:50:02 +0200 Subject: Unbuffered requests in a reverse proxy? Message-ID: <51992CEA.9040400@silentorbit.com> Hi Can nginx be configured to pass through a reverse proxy request before it has been transmitted completely? My set-up is a public facing nginx server accepting file uploads passed to another server via a reverse proxy configuration. I appears as nginx is reading the entire request before passing it on to the reverse proxy. The back end is written to abort early if the file size is not acceptable depending which account the client is logged in to or abort immediately if not logged in. Is there any buffer setting that can be set to 0 so that the request won't be buffered? From nginx-forum at nginx.us Mon May 20 01:20:57 2013 From: nginx-forum at nginx.us (epsilon2930) Date: Sun, 19 May 2013 21:20:57 -0400 Subject: [nginx] 1.4.1 + spdy + centos 6 + openssl-1.0.1e (static), firefox 21 ajax requests ssl spdy = segfault In-Reply-To: <9d67265efaaf2c8cf15e906548ebb2aa.NginxMailingListEnglish@forum.nginx.org> References: <9d67265efaaf2c8cf15e906548ebb2aa.NginxMailingListEnglish@forum.nginx.org> Message-ID: For closure: I took this to the bug tracker and a patch was posted that resolves this issue: http://trac.nginx.org/nginx/ticket/357 Apologies for newsletter spam. 
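One partial answer to Peter's upload question above, for the case where only the size matters: client_max_body_size is checked against the Content-Length header before the body is read, so oversized uploads are rejected with 413 up front. The limit below is an invented example value; the per-account and login checks Peter describes still need either the application itself or the no_buffer patch mentioned in Pasi's reply below.

    server {
        client_max_body_size 25m;   # requests declaring a larger body get 413 immediately

        location /upload {
            proxy_pass http://127.0.0.1:8080;   # backend address is a placeholder
        }
    }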
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239327,239331#msg-239331 From nginx-forum at nginx.us Mon May 20 07:29:25 2013 From: nginx-forum at nginx.us (apeman) Date: Mon, 20 May 2013 03:29:25 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <3c089c96f08759844fbe49bd979e804d.NginxMailingListEnglish@forum.nginx.org> References: <3c089c96f08759844fbe49bd979e804d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9188624209ff2037ea13d61cd5680ae5.NginxMailingListEnglish@forum.nginx.org> I never use SPDY. The problem still exists. I don't know how to fix it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239334#msg-239334 From pasik at iki.fi Mon May 20 07:57:03 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Mon, 20 May 2013 10:57:03 +0300 Subject: Unbuffered requests in a reverse proxy? In-Reply-To: <51992CEA.9040400@silentorbit.com> References: <51992CEA.9040400@silentorbit.com> Message-ID: <20130520075703.GJ11427@reaktio.net> On Sun, May 19, 2013 at 09:50:02PM +0200, Peter Hultqvist wrote: > Hi > > Can nginx be configured to pass through a reverse proxy request before > it has been transmitted completely? > > My set-up is a public facing nginx server accepting file uploads passed > to another server via a reverse proxy configuration. > I appears as nginx is reading the entire request before passing it on to > the reverse proxy. > > The back end is written to abort early if the file size is not > acceptable depending which account the client is logged in to or abort > immediately if not logged in. > Is there any buffer setting that can be set to 0 so that the request > won't be buffered? > See the nginx mailinglist archives for a "no_buffer" patch to nginx 1.2.7: http://yaoweibin.cn/patches/nginx-1.2.7-no_buffer-v6.patch You'll find the usage / configuration syntax from the archives. "no_buffer" patch for nginx 1.4.x isn't ready/stable yet. -- Pasi From rva at onvaoo.com Mon May 20 09:10:52 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Mon, 20 May 2013 11:10:52 +0200 Subject: nginx cache opendir failed In-Reply-To: References: Message-ID: <842B23B5-B960-4426-8AC4-5DFD116E0CF1@onvaoo.com> SOLVED: Just chmod 7777 ( 4 seven) /var/cache/www , the parent directory of the nginx cache and voil?, it works. Le 19 mai 2013 ? 17:51, Ronald Van Assche a ?crit : > Hi > > On a Freebsd 9.1 machine > Nginx 1.4.1 > > opendir() "/var/cache/www/nginx2" failed (13: Permission denied) > > microcache dir : > drwxr-x--x 3 root wheel 512 May 19 17:05 /var/cache > drwxr-xr-x 3 www www 512 May 19 17:06 /var/cache/www > drwx------ 2 www www 512 May 19 17:06 nginx2 > > > How to correct this ? > > > Thanks. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Mon May 20 09:19:17 2013 From: steve at greengecko.co.nz (Stev e Holdoway) Date: Mon, 20 May 2013 21:19:17 +1200 Subject: nginx cache opendir failed In-Reply-To: <842B23B5-B960-4426-8AC4-5DFD116E0CF1@onvaoo.com> References: <842B23B5-B960-4426-8AC4-5DFD116E0CF1@onvaoo.com> Message-ID: <5199EA95.3030209@greengecko.co.nz> So, you need to ensure all files can only be deleted by the owner, all files created in the directory are in the same group as the directory and, to the best of my knowledge, setuid is meaningless. Ah, I see that it's a freebsd machine... setgid is the default functionality on that OS. 
This may work, but a basic understanding of file permissions will produce a more workable solution. Who runs the web server? That's who you need to set access up for. Steve On 20/05/13 21:10, Ronald Van Assche wrote: > SOLVED: > > Just chmod 7777 ( 4 seven) /var/cache/www , the parent directory of the nginx cache and voil?, it works. > > Le 19 mai 2013 ? 17:51, Ronald Van Assche a ?crit : > >> Hi >> >> On a Freebsd 9.1 machine >> Nginx 1.4.1 >> >> opendir() "/var/cache/www/nginx2" failed (13: Permission denied) >> >> microcache dir : >> drwxr-x--x 3 root wheel 512 May 19 17:05 /var/cache >> drwxr-xr-x 3 www www 512 May 19 17:06 /var/cache/www >> drwx------ 2 www www 512 May 19 17:06 nginx2 >> >> >> How to correct this ? >> >> >> Thanks. >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From rva at onvaoo.com Mon May 20 09:26:57 2013 From: rva at onvaoo.com (Ronald Van Assche) Date: Mon, 20 May 2013 11:26:57 +0200 Subject: nginx cache opendir failed In-Reply-To: <5199EA95.3030209@greengecko.co.nz> References: <842B23B5-B960-4426-8AC4-5DFD116E0CF1@onvaoo.com> <5199EA95.3030209@greengecko.co.nz> Message-ID: the Web server is run by "www" , and owns the parent directory of the cache. Le 20 mai 2013 ? 11:19, Stev e Holdoway a ?crit : > So, you need to ensure all files can only be deleted by the owner, all files created in the directory are in the same group as the directory and, to the best of my knowledge, setuid is meaningless. > > Ah, I see that it's a freebsd machine... setgid is the default functionality on that OS. > > This may work, but a basic understanding of file permissions will produce a more workable solution. Who runs the web server? That's who you need to set access up for. > > Steve > > > On 20/05/13 21:10, Ronald Van Assche wrote: >> SOLVED: >> >> Just chmod 7777 ( 4 seven) /var/cache/www , the parent directory of the nginx cache and voil?, it works. >> >> Le 19 mai 2013 ? 17:51, Ronald Van Assche a ?crit : >> >>> Hi >>> >>> On a Freebsd 9.1 machine >>> Nginx 1.4.1 >>> >>> opendir() "/var/cache/www/nginx2" failed (13: Permission denied) >>> >>> microcache dir : >>> drwxr-x--x 3 root wheel 512 May 19 17:05 /var/cache >>> drwxr-xr-x 3 www www 512 May 19 17:06 /var/cache/www >>> drwx------ 2 www www 512 May 19 17:06 nginx2 >>> >>> >>> How to correct this ? >>> >>> >>> Thanks. >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon May 20 10:39:05 2013 From: nginx-forum at nginx.us (Sabik2006) Date: Mon, 20 May 2013 06:39:05 -0400 Subject: Nginx balancing mechanism Message-ID: <53d8dfc32599aececdb40ea5f8f54ecc.NginxMailingListEnglish@forum.nginx.org> Hello What balancing mechanism is used in nginx: connections are balanced, or messages in connections are balanced? 
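On the balancing question just above: as far as the HTTP proxy module is concerned, nginx balances requests rather than whole client connections or messages inside them; every request that reaches proxy_pass picks an upstream server according to the configured method (round-robin by default, or ip_hash and others). A minimal sketch with placeholder addresses:

    upstream backend {
        # round-robin is the default; each request selects the next server,
        # with weight biasing the selection
        server 10.0.0.1:8080 weight=2;
        server 10.0.0.2:8080;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }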
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239340,239340#msg-239340 From mdounin at mdounin.ru Mon May 20 11:26:28 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 May 2013 15:26:28 +0400 Subject: The truth about gzip_buffers? In-Reply-To: <14e6cb554daaa2cfd0ca54cb26c1e9a2.NginxMailingListEnglish@forum.nginx.org> References: <14e6cb554daaa2cfd0ca54cb26c1e9a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130520112628.GB69760@mdounin.ru> Hello! On Sat, May 18, 2013 at 09:02:14AM -0400, spdyg wrote: > Reading the docs on nginx.org and searching around, it seems there's no > consensus on how we should configure gzip_buffers. Just keep the default? > Some guides say that the total buffer needs to be greater than any file you > want to gzip, or it will either not gzip the file or truncate it (I'm sure > this is not true though!). Other guides suggest arbitrary values. > > I guess I have a couple of questions to clear up this confusion: > > 1. What happens if the response is greater than the total gzip buffer. Will > it simply keep the upstream connection open longer while it fills, gzips and > transmits the buffers multiple times? Rest of the response will be kept till some buffers are sent to the client. Keeping upstream connection open is generally irrelevant - it depends mostly on proxy_buffers/proxy_max_temp_file_size - but might happen in some configurations. > 2. If your page size is 4k, does that mean for best efficiency shoudl you > keep the size of the buffers to 4k, but just increase the total number of > buffers? What is the consideration here? Much like with other buffers in nginx, small buffers means better memory utilization, but larger buffers might result in smaller CPU usage as there are some amount of work done per-buffer. > 3. Would having larger buffer sizes potentially allow greater compression > because each buffer is compressed individually? No. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 20 11:27:26 2013 From: nginx-forum at nginx.us (vlad031@binkmail.com) Date: Mon, 20 May 2013 07:27:26 -0400 Subject: valid_referers dynamic hostname In-Reply-To: <6da1f35e6bd01276e1d2e20c67283168.NginxMailingListEnglish@forum.nginx.org> References: <6da1f35e6bd01276e1d2e20c67283168.NginxMailingListEnglish@forum.nginx.org> Message-ID: <684d82b86760f56c547c194705bab9b3.NginxMailingListEnglish@forum.nginx.org> Also, Isn't this a bug since I have added server_names to valid_referers? And since server_names knows the domain, it should work... Any ideas? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239318,239343#msg-239343 From mdounin at mdounin.ru Mon May 20 13:19:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 May 2013 17:19:05 +0400 Subject: valid_referers dynamic hostname In-Reply-To: <6da1f35e6bd01276e1d2e20c67283168.NginxMailingListEnglish@forum.nginx.org> References: <6da1f35e6bd01276e1d2e20c67283168.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130520131905.GD69760@mdounin.ru> Hello! On Sat, May 18, 2013 at 01:31:50PM -0400, vlad031 at binkmail.com wrote: > Sorry for posting here - don't know for sure if it's the right place. > > I have an issue: > > 1) I use nginx as reverse proxy, but I don't always know the domain name for > which I'm serving, so my setup looks like this: > > server_name _ $host 0.0.0.0; The "$host" string here means exactly "$host". 
There is no variable expansion for server_name (expect for a special name "$hostname", which isn't actually a variable but a special name). Most likely requests are handled in the sever{} block in question as it's used as a default server. > 2) I try to block invalid referers but when I try to add $host to > valid_referers - it doesn't seem to work: > > valid_referers none blocked server_names $host ~\.google\. ~\.yahoo\. > ~\.bing\. ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; The valid_referers directive doesn't support variables. > How can I make this work? > Also please note that I don't know regexp. What you are trying to do, i.e. allow referers which match Host header in a request, currently can't be done using the referers module only. With a litle help from the rewrite module it's possible though. Something like this should work: valid_referers none blocked server_names ~\.google\. ...; set $temp "$host:$http_referer"; if ($temp ~* "^(.*):https?://\1") { set $invalid_referer "0"; } if ($invalid_referer) { return 403; } -- Maxim Dounin http://nginx.org/en/donation.html From stageline at gmail.com Mon May 20 16:02:34 2013 From: stageline at gmail.com (suttya) Date: Mon, 20 May 2013 18:02:34 +0200 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <9188624209ff2037ea13d61cd5680ae5.NginxMailingListEnglish@forum.nginx.org> References: <3c089c96f08759844fbe49bd979e804d.NginxMailingListEnglish@forum.nginx.org> <9188624209ff2037ea13d61cd5680ae5.NginxMailingListEnglish@forum.nginx.org> Message-ID: it's GET request? 2013/5/20 apeman > I never use SPDY. The problem still exists. I don't know how to fix it. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,205826,239334#msg-239334 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 20 18:14:02 2013 From: nginx-forum at nginx.us (vlad031@binkmail.com) Date: Mon, 20 May 2013 14:14:02 -0400 Subject: valid_referers dynamic hostname In-Reply-To: <20130520131905.GD69760@mdounin.ru> References: <20130520131905.GD69760@mdounin.ru> Message-ID: <67355d3c78fd667171a1253fc7f442e3.NginxMailingListEnglish@forum.nginx.org> Hello, Thank you for your example Maxim. This is what I've wrote in my config: set $temp "$host:$http_referer"; valid_referers none blocked server_names ~\.google\. ~\.yahoo\. ~\.bing\. ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; if ($invalid_referer){ set $test A ; } if ($temp ~* "^(.*):http?://\1") { set $test "${test}B"; } if ($temp ~* "^(.*):https?://\1") { set $test "${test}C"; } if ($test = ABC) { return 444 ; } It is always returning 444 ... what am I doing wrong?! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239318,239352#msg-239352 From reallfqq-nginx at yahoo.fr Mon May 20 18:26:24 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 20 May 2013 14:26:24 -0400 Subject: valid_referers dynamic hostname In-Reply-To: <67355d3c78fd667171a1253fc7f442e3.NginxMailingListEnglish@forum.nginx.org> References: <20130520131905.GD69760@mdounin.ru> <67355d3c78fd667171a1253fc7f442e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: I suggest you take a look at the order in which 'if' statements are evaluated. Consider reading the 'if' directive documentation . --- *B. R.* On Mon, May 20, 2013 at 2:14 PM, vlad031 at binkmail.com wrote: > Hello, > > Thank you for your example Maxim. 
This is what I've wrote in my config: > > set $temp "$host:$http_referer"; > > valid_referers none blocked server_names ~\.google\. ~\.yahoo\. ~\.bing\. > ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; > > if ($invalid_referer){ > set $test A ; > } > > if ($temp ~* "^(.*):http?://\1") { > set $test "${test}B"; > } > > if ($temp ~* "^(.*):https?://\1") { > set $test "${test}C"; > } > > if ($test = ABC) { > return 444 ; > } > > It is always returning 444 ... what am I doing wrong?! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239318,239352#msg-239352 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon May 20 18:34:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 May 2013 22:34:01 +0400 Subject: valid_referers dynamic hostname In-Reply-To: <67355d3c78fd667171a1253fc7f442e3.NginxMailingListEnglish@forum.nginx.org> References: <20130520131905.GD69760@mdounin.ru> <67355d3c78fd667171a1253fc7f442e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130520183401.GJ69760@mdounin.ru> Hello! On Mon, May 20, 2013 at 02:14:02PM -0400, vlad031 at binkmail.com wrote: > Hello, > > Thank you for your example Maxim. This is what I've wrote in my config: > > set $temp "$host:$http_referer"; > > valid_referers none blocked server_names ~\.google\. ~\.yahoo\. ~\.bing\. > ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; > > if ($invalid_referer){ > set $test A ; > } > > if ($temp ~* "^(.*):http?://\1") { > set $test "${test}B"; > } Just a side note: this statement isn't needed. Both http and https schemes are allowed by a "https?" in the regular expression I provided, "?" makes preceeding character option. > > if ($temp ~* "^(.*):https?://\1") { > set $test "${test}C"; > } > > if ($test = ABC) { > return 444 ; > } > > It is always returning 444 ... what am I doing wrong?! You probably mean to write if ($test = A) { return 444; } instead, as your initial message suggests you want to allow requests where Referer matches Host. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon May 20 21:11:07 2013 From: nginx-forum at nginx.us (peku33) Date: Mon, 20 May 2013 17:11:07 -0400 Subject: Fancyindex module hangs connection In-Reply-To: References: Message-ID: Thanks! I found solution myself: http://serverfault.com/questions/507375/nginx-not-finishing-fancyindex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239168,239355#msg-239355 From ramesh1987 at gmail.com Mon May 20 21:20:56 2013 From: ramesh1987 at gmail.com (Ramesh Muraleedharan) Date: Mon, 20 May 2013 14:20:56 -0700 Subject: Can't log/print in header_filter_by_lua Message-ID: Hi all, I've been experimenting with rewriting 'Set Cookie' headers in a nginx-reverse-proxy effort. The Set-Cookie rewrite doesn't seem to work yet, and more importantly, my log/print statements don't print to error_log as directed, making it very difficult to debug. 
http { server { access_log /home/bhedia/access.log; #error_log /home/bhedia/errors.log debug; error_log /home/bhedia/errors.log notice; listen 80; root /usr/share/nginx/www; #index index.html index.htm; # Make site accessible from http://localhost:8080/ server_name localhost; location / { proxy_pass http://10.45.17.85:50088/; proxy_set_header Host booga.booga.com; #proxy_cookie_domain test-sites.com booga.booga.com; header_filter_by_lua ' ngx.log(ngx.NOTICE, "hello world") local cookies = ngx.header.set_cookie if not cookies then return end if type(cookies) ~= "table" then cookies = {cookies} end local newcookies = {} for i, val in ipairs(cookies) do local newval = string.gsub(val, "([dD]omain)=[%w_-\\\\.-]+", "%1=booga.booga.com") ngx.print(val) ngx.print(newval) table.insert(newcookies, newval) end ngx.header.set_cookie = newcookies '; } } } Any help would be appreciated. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaderhs5 at gmail.com Mon May 20 21:42:05 2013 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Mon, 20 May 2013 18:42:05 -0300 Subject: Can't log/print in header_filter_by_lua In-Reply-To: References: Message-ID: *ngx.print()* is only valid for r*ewrite_by_lua*, access_by_lua* and content_by_lua*.* Maybe your looking for* print() ngx.log* should work though*. * * * 2013/5/20 Ramesh Muraleedharan > Hi all, > > I've been experimenting with rewriting 'Set Cookie' headers in a > nginx-reverse-proxy effort. > > The Set-Cookie rewrite doesn't seem to work yet, and more importantly, my > log/print statements don't print to error_log as directed, making it very > difficult to debug. > > http { > > server { > access_log /home/bhedia/access.log; > #error_log /home/bhedia/errors.log debug; > error_log /home/bhedia/errors.log notice; > > listen 80; > > root /usr/share/nginx/www; > #index index.html index.htm; > > # Make site accessible from http://localhost:8080/ > server_name localhost; > > location / { > > proxy_pass http://10.45.17.85:50088/; > proxy_set_header Host booga.booga.com; > #proxy_cookie_domain test-sites.com booga.booga.com; > > header_filter_by_lua ' > ngx.log(ngx.NOTICE, "hello world") > local cookies = ngx.header.set_cookie > if not cookies then return end > if type(cookies) ~= "table" then cookies = {cookies} end > local newcookies = {} > for i, val in ipairs(cookies) do > local newval = string.gsub(val, > "([dD]omain)=[%w_-\\\\.-]+", > "%1=booga.booga.com") > ngx.print(val) > ngx.print(newval) > table.insert(newcookies, newval) > end > ngx.header.set_cookie = newcookies > '; > } > } > } > > Any help would be appreciated. > > Thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramesh1987 at gmail.com Mon May 20 21:47:41 2013 From: ramesh1987 at gmail.com (Ramesh Muraleedharan) Date: Mon, 20 May 2013 14:47:41 -0700 Subject: Can't log/print in header_filter_by_lua In-Reply-To: References: Message-ID: I've tried print() as well, and neither that nor ngx.log() have worked as yet. On Mon, May 20, 2013 at 2:42 PM, Jader H. Silva wrote: > *ngx.print()* is only valid for r*ewrite_by_lua*, access_by_lua* and > content_by_lua*.* Maybe your looking for* print() > > ngx.log* should work though*. 
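Worth noting for the cookie rewrite attempted in the config above: if the only goal is to change the Domain attribute of upstream Set-Cookie headers, nginx has had a built-in directive for that since 1.1.15, already hinted at by the commented-out line. A sketch using the hostnames from the posted config:

    location / {
        proxy_pass http://10.45.17.85:50088/;
        proxy_set_header Host booga.booga.com;

        # rewrite "domain=test-sites.com" to "domain=booga.booga.com"
        # in every Set-Cookie header returned by the upstream
        proxy_cookie_domain test-sites.com booga.booga.com;
    }

As for the missing log lines, ngx.log(ngx.NOTICE, ...) only shows up if the error_log in effect for the request is set to level notice or more verbose, which the posted config does set, so a stale binary or a different log path (as the poster later suspects) is the likelier culprit.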
> * > > * > * > > > 2013/5/20 Ramesh Muraleedharan > >> Hi all, >> >> I've been experimenting with rewriting 'Set Cookie' headers in a >> nginx-reverse-proxy effort. >> >> The Set-Cookie rewrite doesn't seem to work yet, and more importantly, my >> log/print statements don't print to error_log as directed, making it very >> difficult to debug. >> >> http { >> >> server { >> access_log /home/bhedia/access.log; >> #error_log /home/bhedia/errors.log debug; >> error_log /home/bhedia/errors.log notice; >> >> listen 80; >> >> root /usr/share/nginx/www; >> #index index.html index.htm; >> >> # Make site accessible from http://localhost:8080/ >> server_name localhost; >> >> location / { >> >> proxy_pass http://10.45.17.85:50088/; >> proxy_set_header Host booga.booga.com; >> #proxy_cookie_domain test-sites.com booga.booga.com; >> >> header_filter_by_lua ' >> ngx.log(ngx.NOTICE, "hello world") >> local cookies = ngx.header.set_cookie >> if not cookies then return end >> if type(cookies) ~= "table" then cookies = {cookies} end >> local newcookies = {} >> for i, val in ipairs(cookies) do >> local newval = string.gsub(val, >> "([dD]omain)=[%w_-\\\\.-]+", >> "%1=booga.booga.com") >> ngx.print(val) >> ngx.print(newval) >> table.insert(newcookies, newval) >> end >> ngx.header.set_cookie = newcookies >> '; >> } >> } >> } >> >> Any help would be appreciated. >> >> Thanks! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramesh1987 at gmail.com Tue May 21 00:18:02 2013 From: ramesh1987 at gmail.com (Ramesh Muraleedharan) Date: Mon, 20 May 2013 17:18:02 -0700 Subject: Can't log/print in header_filter_by_lua In-Reply-To: References: Message-ID: OK. I was able to resolve it by doing a clean install of nginx. Dunno what the underlying issue was, but my hunch is I was logging to an a different location and the logs I was looking at were the default logs from nginx. Thanks anyway! On Mon, May 20, 2013 at 2:47 PM, Ramesh Muraleedharan wrote: > I've tried print() as well, and neither that nor ngx.log() have worked as > yet. > > > On Mon, May 20, 2013 at 2:42 PM, Jader H. Silva wrote: > >> *ngx.print()* is only valid for r*ewrite_by_lua*, access_by_lua* and >> content_by_lua*.* Maybe your looking for* print() >> >> ngx.log* should work though*. >> * >> >> * >> * >> >> >> 2013/5/20 Ramesh Muraleedharan >> >>> Hi all, >>> >>> I've been experimenting with rewriting 'Set Cookie' headers in a >>> nginx-reverse-proxy effort. >>> >>> The Set-Cookie rewrite doesn't seem to work yet, and more importantly, >>> my log/print statements don't print to error_log as directed, making it >>> very difficult to debug. 
>>> >>> http { >>> >>> server { >>> access_log /home/bhedia/access.log; >>> #error_log /home/bhedia/errors.log debug; >>> error_log /home/bhedia/errors.log notice; >>> >>> listen 80; >>> >>> root /usr/share/nginx/www; >>> #index index.html index.htm; >>> >>> # Make site accessible from http://localhost:8080/ >>> server_name localhost; >>> >>> location / { >>> >>> proxy_pass http://10.45.17.85:50088/; >>> proxy_set_header Host booga.booga.com; >>> #proxy_cookie_domain test-sites.com booga.booga.com; >>> >>> header_filter_by_lua ' >>> ngx.log(ngx.NOTICE, "hello world") >>> local cookies = ngx.header.set_cookie >>> if not cookies then return end >>> if type(cookies) ~= "table" then cookies = {cookies} end >>> local newcookies = {} >>> for i, val in ipairs(cookies) do >>> local newval = string.gsub(val, >>> "([dD]omain)=[%w_-\\\\.-]+", >>> "%1=booga.booga.com") >>> ngx.print(val) >>> ngx.print(newval) >>> table.insert(newcookies, newval) >>> end >>> ngx.header.set_cookie = newcookies >>> '; >>> } >>> } >>> } >>> >>> Any help would be appreciated. >>> >>> Thanks! >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 21 05:27:40 2013 From: nginx-forum at nginx.us (barntzen) Date: Tue, 21 May 2013 01:27:40 -0400 Subject: Looking to returning a 302 Redirect if a file is above a certain size... Message-ID: <08e20875c96d9ea1b6bfbd800a47e0d0.NginxMailingListEnglish@forum.nginx.org> Hey there, I was looking for some guidance for a project I'm working on. We're looking into CDNs here, and we are specifically looking for a decent origin pull service that also supports denying by referrer to prevent hotlinking. However, as it turns out almost all origin pull services *except* a service we'll call ACF restrict you to 100MB or less. As we're not looking to have bulletproof hotlinking protection, we've determined that the best way to handle it is to serve files through the main CDN (we'll call them FSL), but have our nginx cluster return a 302 redirect if the file the "client" is requesting is over 100MB. I've confirmed with FSL that they will honor and cache that 302, which would then send clients to ACF in as clean a fashion as possible. The long and short is: I'm trying to work out how to make nginx return a 302 if a file is over 100MB, which I'm told the Lua module should be capable of. Seeking some advice on how to do so as I'm hopeless with Lua and my searches turned up nothing. Thanks, ~ Benjamin Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239370,239370#msg-239370 From nginx-forum at nginx.us Tue May 21 10:40:56 2013 From: nginx-forum at nginx.us (apeman) Date: Tue, 21 May 2013 06:40:56 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: References: Message-ID: <883b15380a2aa32919d97b81e44aa70e.NginxMailingListEnglish@forum.nginx.org> I don't use SPDY. My request is POST. 
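Returning to Benjamin's 302 question a bit further up: one way to do the size check with the Lua module is a plain io.open/seek on the file before it is served. The sketch below is only an outline; the location, root and CDN hostname are invented, and the stat is blocking file I/O, so it is best confined to a location that serves nothing but the large static objects in question.

    location /files/ {
        root /var/www;

        rewrite_by_lua '
            local path = ngx.var.document_root .. ngx.var.uri
            local f = io.open(path, "rb")
            if f then
                local size = f:seek("end")   -- file length in bytes
                f:close()
                if size and size > 100 * 1024 * 1024 then
                    -- objects over 100MB get bounced to the second CDN (placeholder hostname)
                    return ngx.redirect("http://big.cdn-example.net" .. ngx.var.uri, 302)
                end
            end
        ';
    }

Requests for smaller files fall through to normal static serving, and the front CDN should then cache the 302 for the large ones as described.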
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239381#msg-239381 From nginx-forum at nginx.us Tue May 21 12:28:19 2013 From: nginx-forum at nginx.us (pgte) Date: Tue, 21 May 2013 08:28:19 -0400 Subject: streaming request Message-ID: <7714c8d0a17eb8397bb302b3b5c3a361.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to do a streaming request that uses chunked encoding that gets forwarded to a back-end node.js http server. In this case the client does not end the request before it gets a response header from the server. This works well if using node.js standalone, but when fronted by Nginx, nginx does not forward the request to my node process and a minute later returns a 408 to the client. I'm using nginx 1.5.0 with the following configuration: upstream myservername { server 127.0.0.1:8888; } server { listen 80; listen 443 ssl; server_name myservername; ssl_certificate ... ssl_certificate_key ... location / { proxy_buffering off; proxy_http_version 1.1; proxy_pass http://myservername; } } Any clues on how to solve this? TIA! -- Pedro Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239384,239384#msg-239384 From mdounin at mdounin.ru Tue May 21 13:33:29 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 May 2013 17:33:29 +0400 Subject: streaming request In-Reply-To: <7714c8d0a17eb8397bb302b3b5c3a361.NginxMailingListEnglish@forum.nginx.org> References: <7714c8d0a17eb8397bb302b3b5c3a361.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130521133329.GX69760@mdounin.ru> Hello! On Tue, May 21, 2013 at 08:28:19AM -0400, pgte wrote: > Hi, > > I'm trying to do a streaming request that uses chunked encoding that gets > forwarded to a back-end node.js http server. In this case the client does > not end the request before it gets a response header from the server. > This works well if using node.js standalone, but when fronted by Nginx, > nginx does not forward the request to my node process and a minute later > returns a 408 to the client. This doesn't work as nginx insists on reading a request body before the request is passed to a backend server. You may consider switching a protocol to something like WebSocket. See here for instructions how to proxy WebSocket connections though nginx: http://nginx.org/en/docs/http/websocket.html -- Maxim Dounin http://nginx.org/en/donation.html From stageline at gmail.com Tue May 21 15:05:44 2013 From: stageline at gmail.com (suttya) Date: Tue, 21 May 2013 17:05:44 +0200 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <883b15380a2aa32919d97b81e44aa70e.NginxMailingListEnglish@forum.nginx.org> References: <883b15380a2aa32919d97b81e44aa70e.NginxMailingListEnglish@forum.nginx.org> Message-ID: If your request is post, Content-Length: 0 is normal. 2013/5/21 apeman > I don't use SPDY. My request is POST. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,205826,239381#msg-239381 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at twilio.com Tue May 21 19:31:05 2013 From: kevin at twilio.com (Kevin Burke) Date: Tue, 21 May 2013 12:31:05 -0700 Subject: Using http_limit_conn with a custom variable Message-ID: Hi, We're trying to use the limit_conn_zone directive to throttle incoming HTTP requests. 
We'd like to throttle based on the http basic auth variable ($remote_user), however, we must do processing on this value so the zone does not overflow with illegitimate values. Ideally we'd want to do something like set $safe_remote_user ""; content_by_lua ' -- Some code to filter $remote_user values, simplified to one line here ngx.var.safe_remote_user = $remote_user ' limit_conn_zone $safe_remote_user zone:user 10m; However this runs into a problem that we can only set variables inside of the location context, but limit_conn_zone must be defined in the http context. So, as we understand it we cannot use a variable defined by lua in the limit_conn_zone directive. We were curious if anyone has run into this problem, and if there are workarounds that could help us solve this problem. Thanks, Kevin ---- Kevin Burke | 415-723-4116 | www.twilio.com From nginx-forum at nginx.us Wed May 22 01:56:59 2013 From: nginx-forum at nginx.us (apeman) Date: Tue, 21 May 2013 21:56:59 -0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: References: Message-ID: <0a5f4b39e71a5ab67db7d365d2d64cf9.NginxMailingListEnglish@forum.nginx.org> no, no, no! Normally, it's not 0, but sometimes, the possibility is 0.07%. It's absolutely not normal! [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 607 27 "-" "c" "10.23.130.154" [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 613 27 "-" "c" "10.59.24.83" [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 604 62 "-" "e" "10.197.218.20" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 491 27 "-" "f" "-" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 612 278 "-" "f" "10.165.21.34" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.0" 200 816 27 "-" "c" "10.26.152.101" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 585 27 "-" "c" "10.67.55.190" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 463 211 "-" "f" "-" [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 596 0 "-" "f" "10.181.30.95" You see the last column, the response content-length is 0. And we can't find the record in tomcat log. It seems nginx don't forward the request to tomcat. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205826,239408#msg-239408 From nginx-forum at nginx.us Wed May 22 09:38:49 2013 From: nginx-forum at nginx.us (JackB) Date: Wed, 22 May 2013 05:38:49 -0400 Subject: NGINX SE vs. Open Source Message-ID: <4442250ee93cf6a69d0a44a1c3c33182.NginxMailingListEnglish@forum.nginx.org> Hello, on your Site (http://nginx.com/products.html) you are comparing NGINX SE with the Open Source version. The table states "Basic" and "Enhanced" on many core features of NGINX. The SE/"Enhanced" column offers additional information by hovering with the mouse over the rows. Those additional information are listing well known features from the Open Source version. Since there is no detailed comparison (and _NO HOVER_ on the "Basic" column), can you clarify: - what exactly the differences are on the listed features? - will currently offered features of the Open Source version be moved to the SE version? (means: removed from the Open Source version) - will there be a documentation of how SE features work and what they offer? Thanks. 
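Back on the Content-Length: 0 excerpt above: before going to a full debug log, it can help to record which upstream, if any, nginx actually talked to for each request. All variables below are standard; the format name and log path are illustrative, and both directives belong at the http{} level (access_log may also sit in server or location). An empty $upstream_addr for the affected requests would confirm that they never reached Tomcat.

    log_format upstream_dbg '$remote_addr [$time_local] "$request" '
                            '$status $body_bytes_sent '
                            'upstream=$upstream_addr upstream_status=$upstream_status '
                            'request_time=$request_time upstream_time=$upstream_response_time';

    access_log /var/log/nginx/upstream_dbg.log upstream_dbg;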
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239424,239424#msg-239424 From andrew at nginx.com Wed May 22 12:05:40 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 22 May 2013 16:05:40 +0400 Subject: NGINX SE vs. Open Source In-Reply-To: <4442250ee93cf6a69d0a44a1c3c33182.NginxMailingListEnglish@forum.nginx.org> References: <4442250ee93cf6a69d0a44a1c3c33182.NginxMailingListEnglish@forum.nginx.org> Message-ID: <41AEDB33-3F91-45E3-9132-553D027781E5@nginx.com> Hi JackB, On May 22, 2013, at 1:38 PM, JackB wrote: > Hello, > > on your Site (http://nginx.com/products.html) you are comparing NGINX SE > with the Open Source version. The table states "Basic" and "Enhanced" on > many core features of NGINX. The SE/"Enhanced" column offers additional > information by hovering with the mouse over the rows. Those additional > information are listing well known features from the Open Source version. > Since there is no detailed comparison (and _NO HOVER_ on the "Basic" > column), can you clarify: > > - what exactly the differences are on the listed features? There are certain enhancements over load balancing, monitoring and configuration, as well as additional media streaming related capabilities in the commercial subscription. This product is still pending public announcement and the list of features is subject to change. You right with the web site, though. The .com web site still requires a lot of work and will be improved in the short term. > - will currently offered features of the Open Source version be moved to the > SE version? (means: removed from the Open Source version) No, that's not an option, and there're no such plans. Everything that's currently in the OSS version of nginx will remain as OSS, licensed under the 2-clause BSD-like license. Moreover, we keep fixing a lot of things in the OSS nginx (check the CHANGES), as well as adding new (and big) features like OCSP Stapling, WebSocket or SPDY to the OSS nginx. Like it was said before several times (and I'd like to reiterate it once again), we at Nginx, Inc. are fully dedicated to introduce major improvements to the OSS nginx, and that's what we do. Our public roadmap is accessible on http://trac.nginx.org. > - will there be a documentation of how SE features work and what they > offer? Documentation is currently available as part of our commercial subscription. Hope this helps. > Thanks. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239424,239424#msg-239424 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Wed May 22 13:59:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 May 2013 17:59:05 +0400 Subject: Using http_limit_conn with a custom variable In-Reply-To: References: Message-ID: <20130522135905.GF69760@mdounin.ru> Hello! On Tue, May 21, 2013 at 12:31:05PM -0700, Kevin Burke wrote: > Hi, > We're trying to use the limit_conn_zone directive to throttle incoming > HTTP requests. > > We'd like to throttle based on the http basic auth variable > ($remote_user), however, we must do processing on this value so the > zone does not overflow with illegitimate values. 
Ideally we'd want to > do something like > > set $safe_remote_user ""; > content_by_lua ' > -- Some code to filter $remote_user values, simplified to one line here > ngx.var.safe_remote_user = $remote_user > ' > limit_conn_zone $safe_remote_user zone:user 10m; > > However this runs into a problem that we can only set variables inside > of the location context, but limit_conn_zone must be defined in the > http context. So, as we understand it we cannot use a variable defined > by lua in the limit_conn_zone directive. We were curious if anyone has > run into this problem, and if there are workarounds that could help us > solve this problem. For variables processing independant on a particular request handling point there is the map{} and perl_set directives in nginx (see http://nginx.org/r/map, http://nginx.org/r/perl_set). Not sure if there is something similar in lua module, but map should be enough for a particula task. With map you may do something like this: map $remote_user $limit { default invalid; ~^[a-z0-9]+$ $remote_user; } This way only valid (according to a regex) user names are mapped to their own limits, while everything else maps to predefined value "invalid". -- Maxim Dounin http://nginx.org/en/donation.html From kevin at twilio.com Wed May 22 14:35:07 2013 From: kevin at twilio.com (Kevin Burke) Date: Wed, 22 May 2013 07:35:07 -0700 Subject: Using http_limit_conn with a custom variable In-Reply-To: <20130522135905.GF69760@mdounin.ru> References: <20130522135905.GF69760@mdounin.ru> Message-ID: Ah, thanks, map{} is probably the best solution. We got it "working" by using rewrite_by_lua_file, which let us set new headers: # Set an HTTP header that is read by conn_zone rewrite_by_lua_file user.lua; limit_conn_zone $http_user_binary ...; Then in the lua file, add something like: ngx.set_header('User-Binary', ngx.md5_bin($remote_user)); This is almost certainly not an ideal thing to do, we'll look to rewrite it using map. Thanks, Kevin ---- Kevin Burke | 415-723-4116 | www.twilio.com On Wed, May 22, 2013 at 6:59 AM, Maxim Dounin wrote: > Hello! > > On Tue, May 21, 2013 at 12:31:05PM -0700, Kevin Burke wrote: > > > Hi, > > We're trying to use the limit_conn_zone directive to throttle incoming > > HTTP requests. > > > > We'd like to throttle based on the http basic auth variable > > ($remote_user), however, we must do processing on this value so the > > zone does not overflow with illegitimate values. Ideally we'd want to > > do something like > > > > set $safe_remote_user ""; > > content_by_lua ' > > -- Some code to filter $remote_user values, simplified to one line here > > ngx.var.safe_remote_user = $remote_user > > ' > > limit_conn_zone $safe_remote_user zone:user 10m; > > > > However this runs into a problem that we can only set variables inside > > of the location context, but limit_conn_zone must be defined in the > > http context. So, as we understand it we cannot use a variable defined > > by lua in the limit_conn_zone directive. We were curious if anyone has > > run into this problem, and if there are workarounds that could help us > > solve this problem. > > For variables processing independant on a particular request > handling point there is the map{} and perl_set directives in > nginx (see http://nginx.org/r/map, http://nginx.org/r/perl_set). > > Not sure if there is something similar in lua module, but map > should be enough for a particula task. 
> > With map you may do something like this: > > map $remote_user $limit { > default invalid; > ~^[a-z0-9]+$ $remote_user; > } > > This way only valid (according to a regex) user names are mapped > to their own limits, while everything else maps to predefined > value "invalid". > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 22 15:20:34 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 May 2013 19:20:34 +0400 Subject: Nginx returns HTTP 200 with Content-Length: 0 In-Reply-To: <0a5f4b39e71a5ab67db7d365d2d64cf9.NginxMailingListEnglish@forum.nginx.org> References: <0a5f4b39e71a5ab67db7d365d2d64cf9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130522152034.GJ69760@mdounin.ru> Hello! On Tue, May 21, 2013 at 09:56:59PM -0400, apeman wrote: > no, no, no! > Normally, it's not 0, but sometimes, the possibility is 0.07%. It's > absolutely not normal! > [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 607 27 "-" > "c" "10.23.130.154" > [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 613 27 "-" > "c" "10.59.24.83" > [21/May/2013:00:02:56 +0800] "POST /ourservice/api HTTP/1.1" 200 604 62 "-" > "e" "10.197.218.20" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 491 27 "-" > "f" "-" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 612 278 "-" > "f" "10.165.21.34" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.0" 200 816 27 "-" > "c" "10.26.152.101" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 585 27 "-" > "c" "10.67.55.190" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 463 211 "-" > "f" "-" > [21/May/2013:00:02:57 +0800] "POST /ourservice/api HTTP/1.1" 200 596 0 "-" > "f" "10.181.30.95" > You see the last column, the response content-length is 0. And we can't find > the record in tomcat log. It seems nginx don't forward the request to > tomcat. You may want to capture debug log to see what goes on, see http://nginx.org/en/docs/debugging_log.html for details. See also http://wiki.nginx.org/Debugging#Asking_for_help. -- Maxim Dounin http://nginx.org/en/donation.html From robertof.public at gmail.com Wed May 22 18:40:43 2013 From: robertof.public at gmail.com (Roberto F.) Date: Wed, 22 May 2013 20:40:43 +0200 Subject: Problem with rewrite regexes when the URL contains a trailing slash (nginx-1.5.0) Message-ID: <519D112B.7030300@gmail.com> Hello. I'm having a problem with nginx's rewrite directive. Basically, when the URL contains a trailing point, it is ignored by the rewrite regexp. Let's do an example: I load http://uri/something. (with the trailing point). Then, with the rewrite rule: rewrite ^/something\.$ /index.html I should see 'index.html', but instead that appears in the logfile (with rewrite_log set at on): 2013/05/22 20:36:07 [notice] 6256#1440: *57 "^/something\.$" does not match "/something", client: 127.0.0.1, server: localhost, request: "GET /something. HTTP/1.1", host: "localhost" As you can see, the trailing point is missing from the "does not match" part of the log. Is there any workaround for that? Thank you in advance, Robertof -------------- next part -------------- An HTML attachment was scrubbed... 
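Tying together the limit_conn exchange between Kevin and Maxim above, a complete sketch might look like the following; the zone name, size and connection limit are arbitrary examples. With an empty string as the fallback key, requests whose user name fails the regex are simply not limited (limit_conn does not account connections with an empty key); keeping Maxim's "invalid" value instead would make all of them share one counter.

    http {
        map $remote_user $limit_key {
            default       "";
            ~^[a-z0-9]+$  $remote_user;
        }

        limit_conn_zone $limit_key zone=peruser:10m;

        server {
            location /api/ {
                limit_conn peruser 10;
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }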
URL: From mdounin at mdounin.ru Wed May 22 19:12:40 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 May 2013 23:12:40 +0400 Subject: Problem with rewrite regexes when the URL contains a trailing slash (nginx-1.5.0) In-Reply-To: <519D112B.7030300@gmail.com> References: <519D112B.7030300@gmail.com> Message-ID: <20130522191240.GT69760@mdounin.ru> Hello! On Wed, May 22, 2013 at 08:40:43PM +0200, Roberto F. wrote: > Hello. > I'm having a problem with nginx's rewrite directive. > Basically, when the URL contains a trailing point, it is ignored by > the rewrite regexp. > Let's do an example: > I load http://uri/something. (with the trailing point). Then, with > the rewrite rule: > rewrite ^/something\.$ /index.html > I should see 'index.html', but instead that appears in the logfile > (with rewrite_log set at on): > 2013/05/22 20:36:07 [notice] 6256#1440: *57 "^/something\.$" does > not match "/something", client: 127.0.0.1, server: localhost, > request: "GET /something. HTTP/1.1", host: "localhost" > As you can see, the trailing point is missing from the "does not > match" part of the log. > > Is there any workaround for that? Are you using nginx/Windows? On Windows trailing dots and spaces in URIs are ignored as they aren't significant from filesystem point of view and otherwise can be used to bypass access restrictions. -- Maxim Dounin http://nginx.org/en/donation.html From robertof.public at gmail.com Wed May 22 19:17:20 2013 From: robertof.public at gmail.com (Roberto F.) Date: Wed, 22 May 2013 21:17:20 +0200 Subject: Problem with rewrite regexes when the URL contains a trailing slash (nginx-1.5.0) In-Reply-To: <20130522191240.GT69760@mdounin.ru> References: <519D112B.7030300@gmail.com> <20130522191240.GT69760@mdounin.ru> Message-ID: <519D19C0.2010801@gmail.com> Hi, thanks for the fast answer! Yes, I am using nginx on Windows and I didn't know about that restriction. Since the project I'm working on requires that (damned!) final point, do you know a way to overcome this restriction? For example, I thought about cygwin but I'm not sure if it would work. Thanks again, Robertof Il 22/05/2013 21:12, Maxim Dounin ha scritto: > Hello! > > On Wed, May 22, 2013 at 08:40:43PM +0200, Roberto F. wrote: > >> Hello. >> I'm having a problem with nginx's rewrite directive. >> Basically, when the URL contains a trailing point, it is ignored by >> the rewrite regexp. >> Let's do an example: >> I load http://uri/something. (with the trailing point). Then, with >> the rewrite rule: >> rewrite ^/something\.$ /index.html >> I should see 'index.html', but instead that appears in the logfile >> (with rewrite_log set at on): >> 2013/05/22 20:36:07 [notice] 6256#1440: *57 "^/something\.$" does >> not match "/something", client: 127.0.0.1, server: localhost, >> request: "GET /something. HTTP/1.1", host: "localhost" >> As you can see, the trailing point is missing from the "does not >> match" part of the log. >> >> Is there any workaround for that? > Are you using nginx/Windows? On Windows trailing dots and spaces > in URIs are ignored as they aren't significant from filesystem > point of view and otherwise can be used to bypass access > restrictions. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertof.public at gmail.com Wed May 22 21:15:58 2013 From: robertof.public at gmail.com (Roberto F.) 
Date: Wed, 22 May 2013 23:15:58 +0200 Subject: Problem with rewrite regexes when the URL contains a trailing slash (nginx-1.5.0) In-Reply-To: <20130522191240.GT69760@mdounin.ru> References: <519D112B.7030300@gmail.com> <20130522191240.GT69760@mdounin.ru> Message-ID: <519D358E.8050301@gmail.com> Hello -- just writing again to say that I did it! I found on the web (just Google) a Cygwin build of nginx and it works perfectly. Thanks for your support! Robertof Il 22/05/2013 21:12, Maxim Dounin ha scritto: > Hello! > > On Wed, May 22, 2013 at 08:40:43PM +0200, Roberto F. wrote: > >> Hello. >> I'm having a problem with nginx's rewrite directive. >> Basically, when the URL contains a trailing point, it is ignored by >> the rewrite regexp. >> Let's do an example: >> I load http://uri/something. (with the trailing point). Then, with >> the rewrite rule: >> rewrite ^/something\.$ /index.html >> I should see 'index.html', but instead that appears in the logfile >> (with rewrite_log set at on): >> 2013/05/22 20:36:07 [notice] 6256#1440: *57 "^/something\.$" does >> not match "/something", client: 127.0.0.1, server: localhost, >> request: "GET /something. HTTP/1.1", host: "localhost" >> As you can see, the trailing point is missing from the "does not >> match" part of the log. >> >> Is there any workaround for that? > Are you using nginx/Windows? On Windows trailing dots and spaces > in URIs are ignored as they aren't significant from filesystem > point of view and otherwise can be used to bypass access > restrictions. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 23 15:07:28 2013 From: nginx-forum at nginx.us (andrea.mandolo) Date: Thu, 23 May 2013 11:07:28 -0400 Subject: Override Content-Type header with proxied requests Message-ID: Hi !! i have a Nginx server that operates as a reverse proxy to a my bucket in Amazon S3. Amazon S3 service could deliver contents with wrong Content-Type header, so i would like to override this header by referring to file extension. In other servers i have just configured the "types" block with all mime types mapped with file estensions, but this approach only works when Nginx delivers contents directly (as a origin server). If the server is a reverse proxy, doesn't add a new Content-Type header, but honors Content-Type (if exists) received by the origin. Is it possible to override the content-type response header using "types" block? Is there any best practice to override content-type header by file extensions? Is "map" suggested for this purpose or using multiple "location" block is better? Thanks in advance!! --- Andrea Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239473,239473#msg-239473 From contact at jpluscplusm.com Thu May 23 15:11:33 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 23 May 2013 16:11:33 +0100 Subject: Override Content-Type header with proxied requests In-Reply-To: References: Message-ID: Here's a thread I started on this a while back. I didn't get any replies. http://mailman.nginx.org/pipermail/nginx/2011-September/029344.html Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From ryanchan404 at gmail.com Thu May 23 15:17:24 2013 From: ryanchan404 at gmail.com (Ryan Chan) Date: Thu, 23 May 2013 23:17:24 +0800 Subject: How to tell nginx to check dead backend in upstream adaptively? 
Message-ID: Hi, Currently nginx will check every fail_timeout (default=10s) which is too frequent when the backend server dead completely that require manually replacement and it might take long time. Definitely we can mark it as down in the config, but that is not an automatic solution. Is it possible to config nginx that it check adaptively? e.g. start from 10s, then 20s, 40s, 60s etc? Or are there any better approach? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Thu May 23 15:24:07 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 23 May 2013 17:24:07 +0200 Subject: Override Content-Type header with proxied requests In-Reply-To: References: Message-ID: Try this. At the http level define a map directive that maps upstream Content-Types to the correct ones. map $upstream_content_type $s3_content_type { # S3 -> real... } On the location that proxy passes. proxy_hide_header Content-Type; add_header Content-Type $s3_content_type; ----appa On Thu, May 23, 2013 at 5:07 PM, andrea.mandolo wrote: > Hi !! > > i have a Nginx server that operates as a reverse proxy to a my bucket in > Amazon S3. > > Amazon S3 service could deliver contents with wrong Content-Type header, > so i would like to override this header by referring to file extension. > > In other servers i have just configured the "types" block with all mime > types mapped with file estensions, > but this approach only works when Nginx delivers contents directly (as a > origin server). > If the server is a reverse proxy, doesn't add a new Content-Type header, > but > honors Content-Type (if exists) received by the origin. > > Is it possible to override the content-type response header using "types" > block? Is there any best practice to override content-type header by file > extensions? Is "map" suggested for this purpose or using multiple > "location" > block is better? > > Thanks in advance!! > --- > Andrea > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239473,239473#msg-239473 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Thu May 23 15:33:41 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 23 May 2013 17:33:41 +0200 Subject: Override Content-Type header with proxied requests In-Reply-To: References: Message-ID: Oops it's $upstream_http_content_type instead. Should read: map $upstream_http_content_type $s3_content_type { # S3 -> real... } ----appa On Thu, May 23, 2013 at 5:24 PM, Ant?nio P. P. Almeida wrote: > Try this. > > At the http level define a map directive that maps upstream Content-Types > to the correct ones. > > map $upstream_content_type $s3_content_type { > # S3 -> real... > } > > On the location that proxy passes. > > proxy_hide_header Content-Type; > add_header Content-Type $s3_content_type; > > ----appa > > > > On Thu, May 23, 2013 at 5:07 PM, andrea.mandolo wrote: > >> Hi !! >> >> i have a Nginx server that operates as a reverse proxy to a my bucket in >> Amazon S3. >> >> Amazon S3 service could deliver contents with wrong Content-Type header, >> so i would like to override this header by referring to file extension. 
>> >> In other servers i have just configured the "types" block with all mime >> types mapped with file estensions, >> but this approach only works when Nginx delivers contents directly (as a >> origin server). >> If the server is a reverse proxy, doesn't add a new Content-Type header, >> but >> honors Content-Type (if exists) received by the origin. >> >> Is it possible to override the content-type response header using "types" >> block? Is there any best practice to override content-type header by file >> extensions? Is "map" suggested for this purpose or using multiple >> "location" >> block is better? >> >> Thanks in advance!! >> --- >> Andrea >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,239473,239473#msg-239473 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 23 15:43:40 2013 From: nginx-forum at nginx.us (artemg) Date: Thu, 23 May 2013 11:43:40 -0400 Subject: Updated cross-reference (LXR) In-Reply-To: References: Message-ID: If somebody wants cross reference for other versions, here you can find for almost all versions on nginx http://osxr.org/ngx/source Posted at Nginx Forum: http://forum.nginx.org/read.php?2,220726,239479#msg-239479 From pasik at iki.fi Thu May 23 18:13:11 2013 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 23 May 2013 21:13:11 +0300 Subject: streaming request In-Reply-To: <20130521133329.GX69760@mdounin.ru> References: <7714c8d0a17eb8397bb302b3b5c3a361.NginxMailingListEnglish@forum.nginx.org> <20130521133329.GX69760@mdounin.ru> Message-ID: <20130523181311.GA11427@reaktio.net> On Tue, May 21, 2013 at 05:33:29PM +0400, Maxim Dounin wrote: > Hello! > > On Tue, May 21, 2013 at 08:28:19AM -0400, pgte wrote: > > > Hi, > > > > I'm trying to do a streaming request that uses chunked encoding that gets > > forwarded to a back-end node.js http server. In this case the client does > > not end the request before it gets a response header from the server. > > This works well if using node.js standalone, but when fronted by Nginx, > > nginx does not forward the request to my node process and a minute later > > returns a 408 to the client. > > This doesn't work as nginx insists on reading a request body > before the request is passed to a backend server. > People are asking for "no buffer" feature very often, it's clearly something nginx is missing and should have. I know Weibin has been working on a patch to do exactly that ("no_buffer" patch). Currently it's available and stable for nginx 1.2.x, but the 1.4 version is still work-in-progress. -- Pasi From jan.teske at gmail.com Thu May 23 18:47:42 2013 From: jan.teske at gmail.com (Jan Teske) Date: Thu, 23 May 2013 20:47:42 +0200 Subject: NGINX error log format documentation Message-ID: <519E644E.1060302@gmail.com> Hey! I want to parse NGINX error logs. However, I did not find any documentation concerning the used log format. While the meaning of some fields like the data is pretty obvious, for some it is not at all. In addition, I cannot be sure that my parser is complete if I do not have a documentation of all the possible fields. Since it seems you can change the access log format, but not that of the error log, I really have no idea how to get the information I need. Is there such documentation? 
I also posted this question on StackOverflow: http://stackoverflow.com/questions/16711573/nginx-error-log-format-documentation/16711684 Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu May 23 19:31:39 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 23 May 2013 20:31:39 +0100 Subject: NGINX error log format documentation In-Reply-To: <519E644E.1060302@gmail.com> References: <519E644E.1060302@gmail.com> Message-ID: <20130523193139.GD27406@craic.sysops.org> On Thu, May 23, 2013 at 08:47:42PM +0200, Jan Teske wrote: Hi there, > I want to parse NGINX error logs. However, I did not find any > documentation concerning the used log format. less src/core/ngx_log.c should probably show most of what you need. Combine that with grep -r -A 2 ngx_log_error src and looking at an error log and you should see that it is "a small number (five, I think) of fixed fields, followed by free-form text". > While the meaning of some > fields like the data is pretty obvious, for some it is not at all. In > addition, I cannot be sure that my parser is complete if I do not have a > documentation of all the possible fields. Since it seems you can change > the access log format, but not that of the error log, I really have no > idea how to get the information I need. I (strongly) suspect that the error log line format details is not something that nginx wants to commit to holding stable. If you want to do any more parsing than "free-form text after a handful of common fields", then you'll probably want to care about the version you are using. Or at least, flag an imperfectly-recognised line if anything doesn't match what you expect. > Is there such documentation? It's hard to beat the contents of src/ for accuracy. Choose the "identifying" string in the line you care about, find the matching ngx_error_log call, and then see what free-form text it provides in the current version. f -- Francis Daly francis at daoine.org From artemrts at ukr.net Thu May 23 20:11:49 2013 From: artemrts at ukr.net (wishmaster) Date: Thu, 23 May 2013 23:11:49 +0300 Subject: Rewriting Message-ID: <47065.1369339909.15057004780639354880@ffe16.ukr.net> Hi, I use opencart with nginx+php-fpm. Sometimes it is necessary to redirect all clients, except admin (190.212.201.0/24), to "Service unavailable" page which is simple index.html file with logo, background image and some text, located in /unav directory. Below some of nginx.conf location / { if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { rewrite ^/(.*)$ /unav/$1 break; return 403; } try_files $uri $uri/ @opencart; } location ^~ /unav { } location @opencart { rewrite ^/(.+)$ /index.php?_route_=$1 last; } [...skipped...] location ~ \.php$ { try_files $uri =404; fastcgi_read_timeout 60s; fastcgi_send_timeout 60s; include myphp-fpm.conf; } Problem is in rewriting. This rule rewrite ^/(.*)$ /unav/$1 break; rewrite ONLY http://mysite.com/ request but in case http://mysite.com/index.php rewrite is none and request processed by location ~ \.php$ rule. Why?? Thanks! 
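A side note on the whitelist test in the config above: a regular expression on $remote_addr works, but it is easy to get subtly wrong and it does not express the /24 network directly. The geo module matches client addresses against CIDR prefixes, which is the purpose-built tool for this. A minimal sketch, assuming the admin network quoted in the question and a variable name chosen only for illustration:

    # http context
    geo $not_admin {
        default           1;
        190.212.201.0/24  0;   # admin network from the question
        127.0.0.1/32      0;   # let localhost see the real site too
    }

Since "0" counts as false in an if() test, $not_admin can then replace the $remote_addr regex; the replies later in this thread end up using a geo-defined switch in exactly this way.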
From quintessence at bulinfo.net Thu May 23 21:47:15 2013 From: quintessence at bulinfo.net (Bozhidara Marinchovska) Date: Fri, 24 May 2013 00:47:15 +0300 Subject: Rewriting In-Reply-To: <47065.1369339909.15057004780639354880@ffe16.ukr.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> Message-ID: <519E8E63.50409@bulinfo.net> Hello, http://nginx.org/en/docs/http/request_processing.html Because in this case you need to place your rule in server block if you would like to be valid for all custom defined locations in your config For example: server { listen 80; if ($remote_addr ~ '192.168.1.25') { return 403; } location / { ... } location ~ \.php$ { ... } } On 23.05.2013 23:11 ?., wishmaster wrote: > Hi, > I use opencart with nginx+php-fpm. Sometimes it is necessary to redirect all clients, except admin (190.212.201.0/24), to "Service unavailable" page which is simple index.html file with logo, background image and some text, located in /unav directory. > Below some of nginx.conf > > > location / { > > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > rewrite ^/(.*)$ /unav/$1 break; > return 403; > } > try_files $uri $uri/ @opencart; > } > > location ^~ /unav { > } > > location @opencart { > rewrite ^/(.+)$ /index.php?_route_=$1 last; > } > > [...skipped...] > > location ~ \.php$ { > > try_files $uri =404; > fastcgi_read_timeout 60s; > fastcgi_send_timeout 60s; > include myphp-fpm.conf; > > } > > Problem is in rewriting. > This rule > > rewrite ^/(.*)$ /unav/$1 break; > > rewrite ONLY http://mysite.com/ request but in case http://mysite.com/index.php rewrite is none and request processed by location ~ \.php$ rule. Why?? > > Thanks! > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri May 24 01:07:21 2013 From: nginx-forum at nginx.us (amagad) Date: Thu, 23 May 2013 21:07:21 -0400 Subject: proxy only certain assets based on host header? Message-ID: <3734abceb7315bb9c8347a6200f7be85.NginxMailingListEnglish@forum.nginx.org> We're trying to proxy only certain assets like png|jpg|css only when the host header is a certain DNS name. I tried to do this in the proxy.conf file using something the example below but it doesnt like the if statement. Is there a way to have nginx do what I am looking for? if ($http_host = dnsname.com) { location ~ ^/(stylesheets|images|javascripts|tools|flash|components)/ { proxy_pass http://assethost } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239494,239494#msg-239494 From agentzh at gmail.com Fri May 24 01:20:37 2013 From: agentzh at gmail.com (agentzh) Date: Thu, 23 May 2013 18:20:37 -0700 Subject: [ANN] ngx_openresty devel version 1.2.8.5 released Message-ID: Hello guys! I am happy to announce that the new development version of ngx_openresty, 1.2.8.5, is now released: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this release happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.8.3: * upgraded LuaNginxModule to 0.8.2. * feature: added "ngx.HTTP_MKCOL", "ngx.HTTP_COPY", "ngx.HTTP_MOVE", and other WebDAV request method constants; also added corresponding support to ngx.req.set_method and ngx.location.capture. thanks Adallom Roy for the patch. * feature: allow injecting new user Lua APIs (and overriding existing Lua APIs) in the "ngx" table. 
* bugfix: ngx.req.set_body_file() always enabled Direct I/O which caused the alert message "fcntl(O_DIRECT) ... Invalid argument" in error logs on file systems lacking the Direct I/O support. thanks Matthieu Tourne for reporting this issue. * bugfix: buffer corruption might happen in ngx.req.set_body_file() when Nginx upstream modules were used later because ngx.req.set_body_file() incorrectly set "r->request_body->buf" to the in-file buffer which could get reused by "ngx_http_upstream" for its own purposes. * bugfix: no longer automatically turn underscores (_) to dashes (-) in header names for ngx.req.set_header and ngx.req.clear_header. thanks aviramc for the report. * bugfix: segmentation fault might happen in nginx 1.4.x when calling ngx.req.set_header on the "Cookie" request headers because recent versions of Nginx no longer always initialize "r->headers_in.cookies". thanks Rob W for reporting this issue. * bugfix: fixed the C compiler warning "argument 'nret' might be clobbered by 'longjmp' or 'vfork'" when compiling with Ubuntu 13.04's gcc 4.7.3. thanks jacky and Rajeev's reports. * bugfix: temporary memory leaks might happen when using ngx.escape_uri, ngx.unescape_uri, ngx.quote_sql_str, ngx.decode_base64, and ngx.encode_base64 in tight Lua loops because we allocated memory in nginx's request memory pool for these methods. * optimize: ngx.escape_uri now runs faster when the input string contains no special bytes to be escaped. * testing: added custom test scaffold t::TestNginxLua which subclasses Test::Nginx::Socket. it supports the environment "TEST_NGINX_INIT_BY_LUA" which can be used to add more custom Lua code to the value of the init_by_lua directive in the Nginx configuration. * upgraded SrcacheNginxModule to 0.21. * bugfix: responses with a status code smaller than all the status codes specified in the srcache_store_statuses directive were not skipped as expected. thanks Lanshun Zhou for the patch. * feature: applied the invalid_referer_hash patch to the Nginx core to make the $invalid_referer variable accessible in embedded dynamic languages like Perl and Lua. thanks Fry-kun for requesting this. * updated the dtrace patch for the Nginx core. * print out more info about the Nginx in-file bufs in the tapset function "ngx_chain_dump". The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002008 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From artemrts at ukr.net Fri May 24 06:01:58 2013 From: artemrts at ukr.net (wishmaster) Date: Fri, 24 May 2013 09:01:58 +0300 Subject: Rewriting In-Reply-To: <519E8E63.50409@bulinfo.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> <519E8E63.50409@bulinfo.net> Message-ID: <43875.1369375318.17986301947901706240@ffe16.ukr.net> Off course, you right. Thanks. if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { rewrite ^/(.*)$ /unav/$1 break; } location / { .... } ........ 
But in log 2013/05/24 08:49:45 [error] 76017#0: *1910 open() "/usr/local/www/akbmaster/unav/unav/index.html" failed (2: No such file or directory), client: 190.212.201.198, server: akbmaster.server.com, request: "GET / HTTP/1.1", host: "akbmaster.server.com" I see twice rewriting. I have rewritten rule like this and this solved twice rewriting problem. rewrite ^/([^/]*)$ /unav/$1 break; Can you explain me why in my situation nginx have rewritten request twice? Cheers, > Hello, > > http://nginx.org/en/docs/http/request_processing.html > > Because in this case you need to place your rule in server block if you > would like to be valid for all custom defined locations in your config > For example: > > server > { > listen 80; > > if ($remote_addr ~ '192.168.1.25') > { > return 403; > } > > location / > { > ... > } > > location ~ \.php$ > { > ... > } > > } > > On 23.05.2013 23:11 ?., wishmaster wrote: > > Hi, > > I use opencart with nginx+php-fpm. Sometimes it is necessary to redirect all clients, except admin (190.212.201.0/24), to "Service unavailable" page which is simple index.html file with logo, background image and some text, located in /unav directory. > > Below some of nginx.conf > > > > > > location / { > > > > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > > rewrite ^/(.*)$ /unav/$1 break; > > return 403; > > } > > try_files $uri $uri/ @opencart; > > } > > > > location ^~ /unav { > > } > > > > location @opencart { > > rewrite ^/(.+)$ /index.php?_route_=$1 last; > > } > > > > [...skipped...] > > > > location ~ \.php$ { > > > > try_files $uri =404; > > fastcgi_read_timeout 60s; > > fastcgi_send_timeout 60s; > > include myphp-fpm.conf; > > > > } > > > > Problem is in rewriting. > > This rule > > > > rewrite ^/(.*)$ /unav/$1 break; > > > > rewrite ONLY http://mysite.com/ request but in case http://mysite.com/index.php rewrite is none and request processed by location ~ \.php$ rule. Why?? > > > > Thanks! > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx From appa at perusio.net Fri May 24 08:45:02 2013 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Fri, 24 May 2013 10:45:02 +0200 Subject: proxy only certain assets based on host header? In-Reply-To: <3734abceb7315bb9c8347a6200f7be85.NginxMailingListEnglish@forum.nginx.org> References: <3734abceb7315bb9c8347a6200f7be85.NginxMailingListEnglish@forum.nginx.org> Message-ID: location ~ ^/(?:stylesheets|images|javascripts|tools|flash|components)/ { error_page 418 = @proxied_assets; if ($http_host = dnsname.com) { return 418; } # add other directives here if need be... } location @proxied-assets { proxy_pass http://assethost; } ----appa On Fri, May 24, 2013 at 3:07 AM, amagad wrote: > We're trying to proxy only certain assets like png|jpg|css only when the > host header is a certain DNS name. I tried to do this in the proxy.conf > file > using something the example below but it doesnt like the if statement. Is > there a way to have nginx do what I am looking for? 
> > > if ($http_host = dnsname.com) { > location ~ ^/(stylesheets|images|javascripts|tools|flash|components)/ { > proxy_pass http://assethost > } > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239494,239494#msg-239494 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 24 08:58:49 2013 From: nginx-forum at nginx.us (andrea.mandolo) Date: Fri, 24 May 2013 04:58:49 -0400 Subject: Override Content-Type header with proxied requests In-Reply-To: References: Message-ID: <5c402e58087cfef298d6fb9060c071cc.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply!! This approach can be a good solution. I wonder if this can affect the server perfomance. Another solution could be to create a location for each file extension that only adds the correct Content-Type header. This is certainly less maintainable than a map, but is it better for the perfomance ? Does not exist any native Nginx directive (or in a third party module) that permits Content-Type overriding (using Nginx mime.types "types" file) for proxied requests ? Thanks again! --- Andrea Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239473,239502#msg-239502 From nginx-forum at nginx.us Fri May 24 11:10:47 2013 From: nginx-forum at nginx.us (vlad031@binkmail.com) Date: Fri, 24 May 2013 07:10:47 -0400 Subject: valid_referers dynamic hostname In-Reply-To: <20130520183401.GJ69760@mdounin.ru> References: <20130520183401.GJ69760@mdounin.ru> Message-ID: <3797bd2e6617b18af9139c0e2d1212ad.NginxMailingListEnglish@forum.nginx.org> Thanks alot! I made a logical error when writing your expression by thinking that it will negate the comparison. Also, I appreciate you have explained that http/https matching as I was confused. My best regards, Vlad Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, May 20, 2013 at 02:14:02PM -0400, vlad031 at binkmail.com wrote: > > > Hello, > > > > Thank you for your example Maxim. This is what I've wrote in my > config: > > > > set $temp "$host:$http_referer"; > > > > valid_referers none blocked server_names ~\.google\. ~\.yahoo\. > ~\.bing\. > > ~\.ask\. ~\.live\. ~\.googleusercontent.com\. ; > > > > if ($invalid_referer){ > > set $test A ; > > } > > > > if ($temp ~* "^(.*):http?://\1") { > > set $test "${test}B"; > > } > > Just a side note: this statement isn't needed. Both http and > https schemes are allowed by a "https?" in the regular expression > I provided, "?" makes preceeding character option. > > > > > if ($temp ~* "^(.*):https?://\1") { > > set $test "${test}C"; > > } > > > > if ($test = ABC) { > > return 444 ; > > } > > > > It is always returning 444 ... what am I doing wrong?! > > You probably mean to write > > if ($test = A) { > return 444; > } > > instead, as your initial message suggests you want to allow > requests where Referer matches Host. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239318,239507#msg-239507 From mdounin at mdounin.ru Fri May 24 13:07:51 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 May 2013 17:07:51 +0400 Subject: proxy only certain assets based on host header? 
In-Reply-To: <3734abceb7315bb9c8347a6200f7be85.NginxMailingListEnglish@forum.nginx.org> References: <3734abceb7315bb9c8347a6200f7be85.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130524130751.GE69760@mdounin.ru> Hello! On Thu, May 23, 2013 at 09:07:21PM -0400, amagad wrote: > We're trying to proxy only certain assets like png|jpg|css only when the > host header is a certain DNS name. I tried to do this in the proxy.conf file > using something the example below but it doesnt like the if statement. Is > there a way to have nginx do what I am looking for? > > > if ($http_host = dnsname.com) { > location ~ ^/(stylesheets|images|javascripts|tools|flash|components)/ { > proxy_pass http://assethost > } > } Yes, sure. Use separate server{} block for a domain name you want to be handled specially, e.g.: server { server_name foo.example.com; location / { # just serve anything as static } } server { server_name bar.example.com; location / { # server anything as static... } location /images/ { # ... but proxy images to a backend proxy_pass http://backend; } } -- Maxim Dounin http://nginx.org/en/donation.html From reallfqq-nginx at yahoo.fr Fri May 24 16:58:44 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 24 May 2013 12:58:44 -0400 Subject: Rewriting In-Reply-To: <43875.1369375318.17986301947901706240@ffe16.ukr.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> <519E8E63.50409@bulinfo.net> <43875.1369375318.17986301947901706240@ffe16.ukr.net> Message-ID: Pure wild guess: Maybe a missing trailing slash in the request resulting in a temporary redirection (and then processed again)? Have you checked the requests made on the client side for any sign of unwanted redirection? You could then use them to correct your rewrite directive. Hope I helped, --- *B. R.* On Fri, May 24, 2013 at 2:01 AM, wishmaster wrote: > > Off course, you right. Thanks. > > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > rewrite ^/(.*)$ /unav/$1 break; > } > > location / { > .... > } > ........ > > But in log > > 2013/05/24 08:49:45 [error] 76017#0: *1910 open() > "/usr/local/www/akbmaster/unav/unav/index.html" failed (2: No such file or > directory), client: 190.212.201.198, server: akbmaster.server.com, > request: "GET / HTTP/1.1", host: "akbmaster.server.com" > > I see twice rewriting. > I have rewritten rule like this and this solved twice rewriting problem. > rewrite ^/([^/]*)$ /unav/$1 break; > > Can you explain me why in my situation nginx have rewritten request twice? > > Cheers, > > > > Hello, > > > > http://nginx.org/en/docs/http/request_processing.html > > > > Because in this case you need to place your rule in server block if you > > would like to be valid for all custom defined locations in your config > > For example: > > > > server > > { > > listen 80; > > > > if ($remote_addr ~ '192.168.1.25') > > { > > return 403; > > } > > > > location / > > { > > ... > > } > > > > location ~ \.php$ > > { > > ... > > } > > > > } > > > > On 23.05.2013 23:11 ?., wishmaster wrote: > > > Hi, > > > I use opencart with nginx+php-fpm. Sometimes it is necessary to > redirect all clients, except admin (190.212.201.0/24), to "Service > unavailable" page which is simple index.html file with logo, background > image and some text, located in /unav directory. 
> > > Below some of nginx.conf > > > > > > > > > location / { > > > > > > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > > > rewrite ^/(.*)$ /unav/$1 break; > > > return 403; > > > } > > > try_files $uri $uri/ @opencart; > > > } > > > > > > location ^~ /unav { > > > } > > > > > > location @opencart { > > > rewrite ^/(.+)$ /index.php?_route_=$1 last; > > > } > > > > > > [...skipped...] > > > > > > location ~ \.php$ { > > > > > > try_files $uri =404; > > > fastcgi_read_timeout 60s; > > > fastcgi_send_timeout 60s; > > > include myphp-fpm.conf; > > > > > > } > > > > > > Problem is in rewriting. > > > This rule > > > > > > rewrite ^/(.*)$ /unav/$1 break; > > > > > > rewrite ONLY http://mysite.com/ request but in case > http://mysite.com/index.php rewrite is none and request processed by > location ~ \.php$ rule. Why?? > > > > > > Thanks! > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebelk at gmail.com Fri May 24 17:39:58 2013 From: sebelk at gmail.com (Sergio Belkin) Date: Fri, 24 May 2013 14:39:58 -0300 Subject: Rewriting and proxy problem Message-ID: H folks! I am completeley newbie to nginx I have the following config # Forward request to /demo to tomcat. This is for # the BigBlueButton api demos. location /demo { rewrite ^ /upvc; proxy_pass http://127.0.0.1:8080; proxy_redirect default; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Allow 30M uploaded presentation document. client_max_body_size 30m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; include fastcgi_params; } location /upvc { alias /var/lib/tomcat6/webapps/demo; index demo3.jsp; expires 1m; } Rewrite is working but nginx is not. proxying to tomcat, because of that returns the jsp file as a plain text file. Please could you help me? Thanks in advance! -- -- Sergio Belkin http://www.sergiobelkin.com Watch More TV http://sebelk.blogspot.com LPIC-2 Certified - http://www.lpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri May 24 19:06:00 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 24 May 2013 15:06:00 -0400 Subject: Rewriting and proxy problem In-Reply-To: References: Message-ID: Let's start by a huge RTFM? http://nginx.org/en/docs/ This ML is no customer service for lazy people, I guess. You may up for services to make other people do your job: http://nginx.com/services.html Best regards, --- *B. R.* On Fri, May 24, 2013 at 1:39 PM, Sergio Belkin wrote: > H folks! > > I am completeley newbie to nginx > > I have the following config > > # Forward request to /demo to tomcat. This is for > # the BigBlueButton api demos. > location /demo { > rewrite ^ /upvc; > proxy_pass http://127.0.0.1:8080; > proxy_redirect default; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > # Allow 30M uploaded presentation document. 
> client_max_body_size 30m; > client_body_buffer_size 128k; > > proxy_connect_timeout 90; > proxy_send_timeout 90; > proxy_read_timeout 90; > > proxy_buffer_size 4k; > proxy_buffers 4 32k; > proxy_busy_buffers_size 64k; > proxy_temp_file_write_size 64k; > > include fastcgi_params; > } > > > location /upvc { > alias /var/lib/tomcat6/webapps/demo; > index demo3.jsp; > expires 1m; > } > > > Rewrite is working but nginx is not. proxying to tomcat, because of > that returns the jsp file as a plain text file. > > Please could you help me? > > Thanks in advance! > -- > -- > Sergio Belkin http://www.sergiobelkin.com > Watch More TV http://sebelk.blogspot.com > LPIC-2 Certified - http://www.lpi.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri May 24 21:16:03 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 24 May 2013 17:16:03 -0400 Subject: Performance trouble after migration Squeeze to Wheezy Message-ID: Hello, First of all I need to emphasize the fact that I know WHeezy is not yet supported. What I am trying to determine how WHeezy could impact Nginx (compiled for Squeeze) performance. Since I made the upgrade, big files are being served in a slow fashion (~200 kiB/s). The directory serving them is configured with AIO and worked perfectly before system changes. I know Sergey listed some changes in the dependencies, but what precisely would explain such a slow-down? Is there a defect somewhere that I can work on or just be patient and wait for the release of the Wheezy build of Nginx? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat May 25 09:27:16 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 25 May 2013 10:27:16 +0100 Subject: Rewriting and proxy problem In-Reply-To: References: Message-ID: <20130525092716.GE27406@craic.sysops.org> On Fri, May 24, 2013 at 02:39:58PM -0300, Sergio Belkin wrote: Hi there, > I am completeley newbie to nginx Welcome. The nginx config follows its own logic, which may not match your previous experiences. When you understand that, you'll have a much better chance of knowing the configuration you are looking for. One important feature is that one request is handled in one location. Another is that one http request does not necessarily correspond to one nginx request. In this case... you make the request for /demoX, and the best-match location is "location /demo", and so that is the one that is used. > location /demo { > rewrite ^ /upvc; Once that happens, you are using the new internal-to-nginx request "/upvc", so a new choice for best-match location happens, and the rest of this location{} block is not relevant. > proxy_pass http://127.0.0.1:8080; > include fastcgi_params; Aside: the fastcgi_params file will typically have content relevant for when fastcgi_pass is used, not for when proxy_pass is used. So, the http request for /demoX leads to the nginx request for /upvc, which matches this location: > location /upvc { > alias /var/lib/tomcat6/webapps/demo; > index demo3.jsp; > expires 1m; And here, you say "serve it from the filesystem", so that's what it does. (I suspect that you actually get a http redirect to /upvc/, which then returns the content of /var/lib/tomcat6/webapps/demo/demo3.jsp. 
Using "curl" as the browser tends to make clear what is happening.) > } > Rewrite is working but nginx is not. proxying to tomcat, because of that > returns the jsp file as a plain text file. > > Please could you help me? The hardest part of nginx config that I find, it working out what exactly you want to have happen for each request. >From the above sample config, I'm not sure what it is that you want. Perhaps putting the proxy_pass in the "location /upvc" block will work? Or perhaps removing the rewrite? If you can describe what behaviour you want, then possibly the nginx config to achieve it will become clear. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat May 25 10:07:15 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 25 May 2013 11:07:15 +0100 Subject: Rewriting In-Reply-To: <43875.1369375318.17986301947901706240@ffe16.ukr.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> <519E8E63.50409@bulinfo.net> <43875.1369375318.17986301947901706240@ffe16.ukr.net> Message-ID: <20130525100715.GF27406@craic.sysops.org> On Fri, May 24, 2013 at 09:01:58AM +0300, wishmaster wrote: Hi there, You've got a few different things happening here, and I suspect that the combination leads to you not getting the results you want. > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > rewrite ^/(.*)$ /unav/$1 break; > } So if I request "/file.php", this will rewrite to "/unav/file.php", which probably doesn't exist, so I'm going to get a 404 unless you handle it carefully. > 2013/05/24 08:49:45 [error] 76017#0: *1910 open() "/usr/local/www/akbmaster/unav/unav/index.html" failed (2: No such file or directory), client: 190.212.201.198, server: akbmaster.server.com, request: "GET / HTTP/1.1", host: "akbmaster.server.com" > > I see twice rewriting. The initial request was for "/". The explicit rewrite will make that be "/unav/" (presuming that the rewrite applies here). In the matching location{}, "/unav/" corresponds to a directory, not a file, so there is an implicit rewrite to "/unav/index.html". This then goes through the config again, and the explicit rewrite makes that "/unav/unav/index.html". In the matching location{}, that does not correspond to anything on the filesystem, hence 404. > I have rewritten rule like this and this solved twice rewriting problem. > rewrite ^/([^/]*)$ /unav/$1 break; What happens with that if my initial request is for "/dir/file.php"? Back you your original issue: > > > I use opencart with nginx+php-fpm. Sometimes it is necessary to redirect all clients, except admin (190.212.201.0/24), to "Service unavailable" page which is simple index.html file with logo, background image and some text, located in /unav directory. So if I ask for "/dir/file.php" during this time, what response do you want me to get? The contents of one file? A redirection to a separate url? Something else? When you can answer that question, you'll have a better idea of the configuration you need. I suspect that the final logic will be something like: if this is not an admin address if the request is for /unav/something, then serve the file else return http 503 with the content corresponding to /unav/index.html else give full normal access to the site and that that can be done with "if" and "rewrite", possibly helped by "map"; but the details depend on what exactly you want. 
f -- Francis Daly francis at daoine.org From quintessence at bulinfo.net Sat May 25 15:36:16 2013 From: quintessence at bulinfo.net (Bozhidara Marinchovska) Date: Sat, 25 May 2013 18:36:16 +0300 Subject: Rewriting In-Reply-To: <43875.1369375318.17986301947901706240@ffe16.ukr.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> <519E8E63.50409@bulinfo.net> <43875.1369375318.17986301947901706240@ffe16.ukr.net> Message-ID: <51A0DA70.8060108@bulinfo.net> Hello, Because you placed root inside the location /. As http://wiki.nginx.org/Pitfalls your root should also be placed in the server block. Example: server { listen 80; root /usr/local/www/test; if ($remote_addr ~ '192.168.1.25') { rewrite ^/(.*)$ /unav/index.html break; } location / { index index.html; } location /index.php { ... } } } Example content of index.html in /usr/local/www/test is "test" Example content of index.html in /usr/local/www/test/unav is "unav dir" Let we say your server is on 192.168.1.24: - when you access http://192.168.1.24/index.html from host 192.168.1.25 you will see in the browser index with content "unav dir". - when you access http://192.168.1.24/index.php from host 192.168.1.25 you will see in the browser index with content "unav dir". - when you access http://192.168.1.24/index.html (or index.php) from another host for example 192.168.1.24 or 192.168.1.23 you will see the content of index.html (index.php) from your config. In the example is content "test" of index.html and phpinfo() in the index.php. On 24.05.2013 09:01 ?., wishmaster wrote: > Off course, you right. Thanks. > > if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { > rewrite ^/(.*)$ /unav/$1 break; > } > > location / { > .... > } > ........ > > But in log > > 2013/05/24 08:49:45 [error] 76017#0: *1910 open() "/usr/local/www/akbmaster/unav/unav/index.html" failed (2: No such file or directory), client: 190.212.201.198, server: akbmaster.server.com, request: "GET / HTTP/1.1", host: "akbmaster.server.com" > > I see twice rewriting. > I have rewritten rule like this and this solved twice rewriting problem. > rewrite ^/([^/]*)$ /unav/$1 break; > > Can you explain me why in my situation nginx have rewritten request twice? > > Cheers, > > >> Hello, >> >> http://nginx.org/en/docs/http/request_processing.html >> >> Because in this case you need to place your rule in server block if you >> would like to be valid for all custom defined locations in your config >> For example: >> >> server >> { >> listen 80; >> >> if ($remote_addr ~ '192.168.1.25') >> { >> return 403; >> } >> >> location / >> { >> ... >> } >> >> location ~ \.php$ >> { >> ... >> } >> >> } >> >> On 23.05.2013 23:11 ?., wishmaster wrote: >>> Hi, >>> I use opencart with nginx+php-fpm. Sometimes it is necessary to redirect all clients, except admin (190.212.201.0/24), to "Service unavailable" page which is simple index.html file with logo, background image and some text, located in /unav directory. >>> Below some of nginx.conf >>> >>> >>> location / { >>> >>> if ($remote_addr !~ '190\.212\.201\.[0-9]{0,3}') { >>> rewrite ^/(.*)$ /unav/$1 break; >>> return 403; >>> } >>> try_files $uri $uri/ @opencart; >>> } >>> >>> location ^~ /unav { >>> } >>> >>> location @opencart { >>> rewrite ^/(.+)$ /index.php?_route_=$1 last; >>> } >>> >>> [...skipped...] >>> >>> location ~ \.php$ { >>> >>> try_files $uri =404; >>> fastcgi_read_timeout 60s; >>> fastcgi_send_timeout 60s; >>> include myphp-fpm.conf; >>> >>> } >>> >>> Problem is in rewriting. 
>>> This rule >>> >>> rewrite ^/(.*)$ /unav/$1 break; >>> >>> rewrite ONLY http://mysite.com/ request but in case http://mysite.com/index.php rewrite is none and request processed by location ~ \.php$ rule. Why?? >>> >>> Thanks! >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Sat May 25 17:01:32 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 25 May 2013 13:01:32 -0400 Subject: fastcgi_read_timeout with PHP backend Message-ID: Hello, I am trying to understand how fastcgi_read_timout works in Nginx. Here is what I wanna do: I list files (few MB each) on a distant place which I copy one by one (loop) on the local disk through PHP. I do not know the amount of files I need to copy, thus I do not know the total amount of time I need for the script to finish its execution. What I know is that I can ensure is a processing time limit per file. I would like my script not to be forcefully interrupted by either sides (PHP or Nginx) before completion. What I did so far: - PHP has a 'max_execution_time' of 30s (default?). In the loop copying files, I use the set_time_limit() procedure to reinitialize the limit before each file copy, hence each file processing has 30s to go: way enough! - The problem seems to lie on the Nginx side, with the 'fastcgi_read_timeout' configuration entry. I can't ensure what maximum time I need, and I would like not to use way-off values such as 2 weeks or 1 year there. ;o) What I understood from the documentationis that the timeout is reinitialized after a successful read: am I right? The challenge is now to cut any buffering occurring on the PHP side and let Nginx manage it (since the buffering will occur after content is being read from the backend). Here is what I did: * PHP's zlib.output_compression is deactivated by default in PHP * I deactivated PHP's output_buffering (default is 4096 bytes) * I am using the PHP flush() procedure at the end of each iteration of the copying loop, after a message is written to the output Current state: * The script seems to still be cut after the expiration of the 'fastcgi_read_timout' limit (confirmed by the error log entry 'upstream timed out (110: Connection timed out) while reading upstream') * The PHP loop is entered several times since multiple files have been copied * The output sent to the browser is cut before any output from the loop appears It seems that there is still some unwanted buffering on the PHP side. I also note that the PHP's flush() procedure doesn't seem to work since the output in the browser doesn't contain any message written after eahc file copy. Am I misunderstanding something about Nginx here (especially about the 'fastcgi_read_timeout' directive)? Have you any intel/piece of advice on hte matter? Thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From baalchina at gmail.com Sun May 26 04:03:58 2013 From: baalchina at gmail.com (baalchina) Date: Sun, 26 May 2013 12:03:58 +0800 Subject: nginx got an 500 error when uploading files Message-ID: Hi everyone, I am using nginx 1.4.1 and php-fpm 5.3.25. Everything is fine except when I uploading files to server. I have a Discuz! x3(a forum system) and when I post threads, except attachment everything is fine, but when I add a small files(such as 30k rar,jpg or else), nginx give me a 500 error. I checked the nginx/site error, but have no usefule information. 
nginx only tell me a got an 500 error, but didn't tell me why. I tried to increase the fast_cgi parama: fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_buffer_size 1024k; fastcgi_buffers 32 64k; fastcgi_busy_buffers_size 1500k; fastcgi_temp_file_write_size 1800k; But still the same. So, please help me for solve this problem, I wonder where can I got the error message? Thank you. -- from:baalchina -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun May 26 09:23:42 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 26 May 2013 10:23:42 +0100 Subject: Rewriting In-Reply-To: <53062.1369482037.4589472283774418944@ffe11.ukr.net> References: <47065.1369339909.15057004780639354880@ffe16.ukr.net> <519E8E63.50409@bulinfo.net> <43875.1369375318.17986301947901706240@ffe16.ukr.net> <20130525100715.GF27406@craic.sysops.org> <53062.1369482037.4589472283774418944@ffe11.ukr.net> Message-ID: <20130526092342.GG27406@craic.sysops.org> On Sat, May 25, 2013 at 02:40:37PM +0300, wishmaster wrote: [Back to the list] Hi there, > > I suspect that the final logic will be something like: > > > > if this is not an admin address > > if the request is for /unav/something, then serve the file > > else return http 503 with the content corresponding to /unav/index.html > > else > > give full normal access to the site > > My logic is: for all requests (from not an admin) like http://example.org/ use rewriting and show for clients simple temp html page. I think that that statement is not specific enough for what you actually want. Because your html page "/unav/index.html" will include an image "/unav/img.png", and you want that not to count as "all requests". > But I think, I have solved it :) If this does what you want it to, that's good. When I tested, I saw some problems. > if ($remote_addr ~ '_admin_IP_') { > rewrite ^/(.*)$ /unav/$1 last; > } > > location ^~ /unav/ { > try_files $uri $uri/ /index.html; > } That will serve the content of /unav/index.html for all requests (unless you have an /unav/unav/img.png, for example), but it goes through lots of rewrites to get there. > location / { > > try_files $uri $uri/ @opencart; > } > > location @opencart { > rewrite ^/(.+)$ /index.php?_route_=$1 last; > } > > ... and so on ..... My suggestion: use "geo" or "map" or even "if" to set a variable $site_looks_down, which is "1" for users and "0" for admins. if that variable is true, do nothing special for /unav/* urls, and return 503 for the rest. have the 503 error return the content of /unav/index.html have /unav/index.html include links like /unav/img.png Overall, this is something like: http { geo $site_looks_down { default 1; 127.0.0.3 0; } server { if ($site_looks_down) { rewrite ^/unav/ $uri last; return 503; } error_page 503 /unav/index.html; location ^~ /unav/ { } # extra location{}s and server-level config here } } Now you can do things like curl -i http://127.0.0.1/unav/img.png curl -i http://127.0.0.3/unav/img.png and you should see the same image response, while things like curl -i http://127.0.0.1/sample/ curl -i http://127.0.0.3/sample/ should give you different responses -- one "503 unavailable", and one correct (maybe 404, maybe useful content). Change the "default" value to 0, or (better) just remove the if() block, to make the site available to all again. See documentation at things like http://nginx.org/r/geo for the syntax. 
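For completeness, the "map" spelling of the same switch mentioned above would look like the sketch below; unlike geo, map only does exact-string or regex matching on $remote_addr, so it suits a handful of individual addresses rather than CIDR ranges (the address is the same illustrative one):

    map $remote_addr $site_looks_down {
        default    1;
        127.0.0.3  0;
    }

Everything else in the server{} block stays as above; only the way $site_looks_down is computed changes.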
f -- Francis Daly francis at daoine.org From g.plumb at gmail.com Sun May 26 14:00:20 2013 From: g.plumb at gmail.com (Gee) Date: Sun, 26 May 2013 15:00:20 +0100 Subject: Mono MVC Timeout Message-ID: Hi everyone! I am struggling to get a (simple) MVC app work under OpenBSD (5.3) + Mono (2.10.9) + Nginx (1.2.6) I can get a simple index.aspx (Hello World) app to work, but the default MVC 3 template doesn't seem to work. I made some minor changes to remove any reference to SQL providers in Web.config and I removed EntityFramework.dll, as well as ensuring that the various System.Web.* dlls also co-existed in my \bin folder. All I seem to get is this: 504 Gateway Time-out nginx/1.2.6 This is how I am launching fastcgi-mono-server4: fastcgi-mono-server4 /applications/:. /socket=tcp:127.0.0.1:9000/printlog=True But the only log detail I get is this: [DateTimeStamp] Beginning to receive records on connection. I have uploaded my config and source code to here: http://www.4shared.com/rar/FzSLuLnF/MvcMonoNginx.html Given that I can serve a simple 'Hello World' page, I'm pretty sure there isn't anything wrong with my base nginx/mono installs, but no doubt something subtle I have missed in my configs. Can anyone (please) shed any light on this? Thanks! G -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun May 26 15:53:32 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 26 May 2013 16:53:32 +0100 Subject: Mono MVC Timeout In-Reply-To: References: Message-ID: How soon after making a request do you see the 504 and how does it relate to your fastcgi_read_timeout setting? (http://wiki.nginx.org/HttpFastcgiModule#fastcgi_read_timeout). [ Whilst I might have been able to check your config for this, I'm not going to bother to download a random rar file and unpack it. Learn to pastebin/gist/etc! ;-) ] Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From jan.teske at gmail.com Sun May 26 21:19:11 2013 From: jan.teske at gmail.com (Jan Teske) Date: Sun, 26 May 2013 23:19:11 +0200 Subject: NGINX error log format documentation In-Reply-To: <20130523193139.GD27406@craic.sysops.org> References: <519E644E.1060302@gmail.com> <20130523193139.GD27406@craic.sysops.org> Message-ID: <3F1DA26A-71F8-47BD-8FB3-92B05FA3E29B@gmail.com> That was helpful. Thank you! On May 23, 2013, at 21:31 , Francis Daly wrote: > On Thu, May 23, 2013 at 08:47:42PM +0200, Jan Teske wrote: > > Hi there, > >> I want to parse NGINX error logs. However, I did not find any >> documentation concerning the used log format. > > less src/core/ngx_log.c > > should probably show most of what you need. Combine that with > > grep -r -A 2 ngx_log_error src > > and looking at an error log and you should see that it is "a small number > (five, I think) of fixed fields, followed by free-form text". > >> While the meaning of some >> fields like the data is pretty obvious, for some it is not at all. In >> addition, I cannot be sure that my parser is complete if I do not have a >> documentation of all the possible fields. Since it seems you can change >> the access log format, but not that of the error log, I really have no >> idea how to get the information I need. > > I (strongly) suspect that the error log line format details is not > something that nginx wants to commit to holding stable. 
> > If you want to do any more parsing than "free-form text after a handful of > common fields", then you'll probably want to care about the version you > are using. Or at least, flag an imperfectly-recognised line if anything > doesn't match what you expect. > >> Is there such documentation? > > It's hard to beat the contents of src/ for accuracy. > > Choose the "identifying" string in the line you care about, find the > matching ngx_error_log call, and then see what free-form text it provides > in the current version. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Mon May 27 01:31:32 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 26 May 2013 21:31:32 -0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: Message-ID: No ideas? --- *B. R.* On Sat, May 25, 2013 at 1:01 PM, B.R. wrote: > Hello, > > I am trying to understand how fastcgi_read_timout works in Nginx. > > Here is what I wanna do: > I list files (few MB each) on a distant place which I copy one by one > (loop) on the local disk through PHP. > I do not know the amount of files I need to copy, thus I do not know the > total amount of time I need for the script to finish its execution. What I > know is that I can ensure is a processing time limit per file. > I would like my script not to be forcefully interrupted by either sides > (PHP or Nginx) before completion. > > > What I did so far: > - PHP has a 'max_execution_time' of 30s (default?). In the loop copying > files, I use the set_time_limit() procedure to reinitialize the limit > before each file copy, hence each file processing has 30s to go: way enough! > > - The problem seems to lie on the Nginx side, with the > 'fastcgi_read_timeout' configuration entry. > I can't ensure what maximum time I need, and I would like not to use > way-off values such as 2 weeks or 1 year there. ;o) > What I understood from the documentationis that the timeout is reinitialized after a successful read: am I right? > > The challenge is now to cut any buffering occurring on the PHP side and > let Nginx manage it (since the buffering will occur after content is being > read from the backend). Here is what I did: > * PHP's zlib.output_compression is deactivated by default in PHP > * I deactivated PHP's output_buffering (default is 4096 bytes) > * I am using the PHP flush() procedure at the end of each iteration of the > copying loop, after a message is written to the output > > > Current state: > * The script seems to still be cut after the expiration of the > 'fastcgi_read_timout' limit (confirmed by the error log entry 'upstream > timed out (110: Connection timed out) while reading upstream') > * The PHP loop is entered several times since multiple files have been > copied > * The output sent to the browser is cut before any output from the loop > appears > > It seems that there is still some unwanted buffering on the PHP side. > I also note that the PHP's flush() procedure doesn't seem to work since > the output in the browser doesn't contain any message written after eahc > file copy. > > Am I misunderstanding something about Nginx here (especially about the > 'fastcgi_read_timeout' directive)? > Have you any intel/piece of advice on hte matter? > > Thanks, > --- > *B. R.* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From steve at greengecko.co.nz Mon May 27 01:46:04 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 27 May 2013 13:46:04 +1200 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: Message-ID: <1369619164.17578.173.camel@steve-new> Write a script that lists the remote files, then checks for the existence of the file locally, and copy it if it doesn't exist? That way no internal loop is used - use a different exit code to note whether there was one copied, or there were none ready. That way you scale for a single file transfer. There's nothing to be gained from looping internally - well performance-wise that is. Steve On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > No ideas? > > --- > B. R. > > > On Sat, May 25, 2013 at 1:01 PM, B.R. wrote: > Hello, > > > I am trying to understand how fastcgi_read_timout works in > Nginx. > > > Here is what I wanna do: > > I list files (few MB each) on a distant place which I copy one > by one (loop) on the local disk through PHP. > > I do not know the amount of files I need to copy, thus I do > not know the total amount of time I need for the script to > finish its execution. What I know is that I can ensure is a > processing time limit per file. > > I would like my script not to be forcefully interrupted by > either sides (PHP or Nginx) before completion. > > > > What I did so far: > > - PHP has a 'max_execution_time' of 30s (default?). In the > loop copying files, I use the set_time_limit() procedure to > reinitialize the limit before each file copy, hence each file > processing has 30s to go: way enough! > > > - The problem seems to lie on the Nginx side, with the > 'fastcgi_read_timeout' configuration entry. > > I can't ensure what maximum time I need, and I would like not > to use way-off values such as 2 weeks or 1 year there. ;o) > > What I understood from the documentation is that the timeout > is reinitialized after a successful read: am I right? > > > The challenge is now to cut any buffering occurring on the PHP > side and let Nginx manage it (since the buffering will occur > after content is being read from the backend). Here is what I > did: > > * PHP's zlib.output_compression is deactivated by default in > PHP > > * I deactivated PHP's output_buffering (default is 4096 bytes) > > * I am using the PHP flush() procedure at the end of each > iteration of the copying loop, after a message is written to > the output > > > > Current state: > > * The script seems to still be cut after the expiration of the > 'fastcgi_read_timout' limit (confirmed by the error log entry > 'upstream timed out (110: Connection timed out) while reading > upstream') > > * The PHP loop is entered several times since multiple files > have been copied > > * The output sent to the browser is cut before any output from > the loop appears > > > It seems that there is still some unwanted buffering on the > PHP side. > > I also note that the PHP's flush() procedure doesn't seem to > work since the output in the browser doesn't contain any > message written after eahc file copy. > > > Am I misunderstanding something about Nginx here (especially > about the 'fastcgi_read_timeout' directive)? > > Have you any intel/piece of advice on hte matter? > > Thanks, > > --- > B. R. 
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz MSN: steve at greengecko.co.nz Skype: sholdowa From reallfqq-nginx at yahoo.fr Mon May 27 02:11:03 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 26 May 2013 22:11:03 -0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: <1369619164.17578.173.camel@steve-new> References: <1369619164.17578.173.camel@steve-new> Message-ID: Thanks for your answer. I didn't go into specifics because my problem doesn't rely at the application-level logic. What you describe is what my script does already. However in this particular case I have 16 files weighting each a few MB which need to be transfered back at once. PHP allocates 30s for each loop turn (far enough to copy the file + echo some output message about successes/failed completion). Nginx cuts the execution avec fastcgi_read_timeout time even with my efforts to cut down any buffering on PHP side (thus forcing the output to be sent to Nginx to reinitialize the timeout counter). That Nginx action is the center of my attention right now. How can I get read of it in a scalable fashion (ie no fastcgi_read_time = 9999999) ? --- *B. R.* * * On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway wrote: > Write a script that lists the remote files, then checks for the > existence of the file locally, and copy it if it doesn't exist? That way > no internal loop is used - use a different exit code to note whether > there was one copied, or there were none ready. > > That way you scale for a single file transfer. There's nothing to be > gained from looping internally - well performance-wise that is. > > Steve > > On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > > No ideas? > > > > --- > > B. R. > > > > > > On Sat, May 25, 2013 at 1:01 PM, B.R. wrote: > > Hello, > > > > > > I am trying to understand how fastcgi_read_timout works in > > Nginx. > > > > > > Here is what I wanna do: > > > > I list files (few MB each) on a distant place which I copy one > > by one (loop) on the local disk through PHP. > > > > I do not know the amount of files I need to copy, thus I do > > not know the total amount of time I need for the script to > > finish its execution. What I know is that I can ensure is a > > processing time limit per file. > > > > I would like my script not to be forcefully interrupted by > > either sides (PHP or Nginx) before completion. > > > > > > > > What I did so far: > > > > - PHP has a 'max_execution_time' of 30s (default?). In the > > loop copying files, I use the set_time_limit() procedure to > > reinitialize the limit before each file copy, hence each file > > processing has 30s to go: way enough! > > > > > > - The problem seems to lie on the Nginx side, with the > > 'fastcgi_read_timeout' configuration entry. > > > > I can't ensure what maximum time I need, and I would like not > > to use way-off values such as 2 weeks or 1 year there. ;o) > > > > What I understood from the documentation is that the timeout > > is reinitialized after a successful read: am I right? > > > > > > The challenge is now to cut any buffering occurring on the PHP > > side and let Nginx manage it (since the buffering will occur > > after content is being read from the backend). 
Here is what I > > did: > > > > * PHP's zlib.output_compression is deactivated by default in > > PHP > > > > * I deactivated PHP's output_buffering (default is 4096 bytes) > > > > * I am using the PHP flush() procedure at the end of each > > iteration of the copying loop, after a message is written to > > the output > > > > > > > > Current state: > > > > * The script seems to still be cut after the expiration of the > > 'fastcgi_read_timout' limit (confirmed by the error log entry > > 'upstream timed out (110: Connection timed out) while reading > > upstream') > > > > * The PHP loop is entered several times since multiple files > > have been copied > > > > * The output sent to the browser is cut before any output from > > the loop appears > > > > > > It seems that there is still some unwanted buffering on the > > PHP side. > > > > I also note that the PHP's flush() procedure doesn't seem to > > work since the output in the browser doesn't contain any > > message written after eahc file copy. > > > > > > Am I misunderstanding something about Nginx here (especially > > about the 'fastcgi_read_timeout' directive)? > > > > Have you any intel/piece of advice on hte matter? > > > > Thanks, > > > > --- > > B. R. > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon May 27 02:24:20 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 27 May 2013 14:24:20 +1200 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: <1369619164.17578.173.camel@steve-new> Message-ID: <1369621460.17578.192.camel@steve-new> Surely, you're still serialising the transfer with a loop? On Sun, 2013-05-26 at 22:11 -0400, B.R. wrote: > Thanks for your answer. > > I didn't go into specifics because my problem doesn't rely at the > application-level logic. > > What you describe is what my script does already. > > > However in this particular case I have 16 files weighting each a few > MB which need to be transfered back at once. > > > PHP allocates 30s for each loop turn (far enough to copy the file + > echo some output message about successes/failed completion). > > Nginx cuts the execution avec fastcgi_read_timeout time even with my > efforts to cut down any buffering on PHP side (thus forcing the output > to be sent to Nginx to reinitialize the timeout counter). > > That Nginx action is the center of my attention right now. How can I > get read of it in a scalable fashion (ie no fastcgi_read_time = > 9999999) ? > --- > B. R. > > > > > On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway > wrote: > Write a script that lists the remote files, then checks for > the > existence of the file locally, and copy it if it doesn't > exist? That way > no internal loop is used - use a different exit code to note > whether > there was one copied, or there were none ready. > > That way you scale for a single file transfer. There's nothing > to be > gained from looping internally - well performance-wise that > is. > > Steve > > On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > > No ideas? > > > > --- > > B. R. 
> > > > > > On Sat, May 25, 2013 at 1:01 PM, B.R. > wrote: > > Hello, > > > > > > I am trying to understand how fastcgi_read_timout > works in > > Nginx. > > > > > > Here is what I wanna do: > > > > I list files (few MB each) on a distant place which > I copy one > > by one (loop) on the local disk through PHP. > > > > I do not know the amount of files I need to copy, > thus I do > > not know the total amount of time I need for the > script to > > finish its execution. What I know is that I can > ensure is a > > processing time limit per file. > > > > I would like my script not to be forcefully > interrupted by > > either sides (PHP or Nginx) before completion. > > > > > > > > What I did so far: > > > > - PHP has a 'max_execution_time' of 30s (default?). > In the > > loop copying files, I use the set_time_limit() > procedure to > > reinitialize the limit before each file copy, hence > each file > > processing has 30s to go: way enough! > > > > > > - The problem seems to lie on the Nginx side, with > the > > 'fastcgi_read_timeout' configuration entry. > > > > I can't ensure what maximum time I need, and I would > like not > > to use way-off values such as 2 weeks or 1 year > there. ;o) > > > > What I understood from the documentation is that the > timeout > > is reinitialized after a successful read: am I > right? > > > > > > The challenge is now to cut any buffering occurring > on the PHP > > side and let Nginx manage it (since the buffering > will occur > > after content is being read from the backend). Here > is what I > > did: > > > > * PHP's zlib.output_compression is deactivated by > default in > > PHP > > > > * I deactivated PHP's output_buffering (default is > 4096 bytes) > > > > * I am using the PHP flush() procedure at the end of > each > > iteration of the copying loop, after a message is > written to > > the output > > > > > > > > Current state: > > > > * The script seems to still be cut after the > expiration of the > > 'fastcgi_read_timout' limit (confirmed by the error > log entry > > 'upstream timed out (110: Connection timed out) > while reading > > upstream') > > > > * The PHP loop is entered several times since > multiple files > > have been copied > > > > * The output sent to the browser is cut before any > output from > > the loop appears > > > > > > It seems that there is still some unwanted buffering > on the > > PHP side. > > > > I also note that the PHP's flush() procedure doesn't > seem to > > work since the output in the browser doesn't contain > any > > message written after eahc file copy. > > > > > > Am I misunderstanding something about Nginx here > (especially > > about the 'fastcgi_read_timeout' directive)? > > > > Have you any intel/piece of advice on hte matter? > > > > Thanks, > > > > --- > > B. R. 
> > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz MSN: steve at greengecko.co.nz Skype: sholdowa From reallfqq-nginx at yahoo.fr Mon May 27 02:38:52 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 26 May 2013 22:38:52 -0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: <1369621460.17578.192.camel@steve-new> References: <1369619164.17578.173.camel@steve-new> <1369621460.17578.192.camel@steve-new> Message-ID: One way or another, even if an external script is called, PHP will need to wait for the scripts completion, making the parallelization impossible or at least useless (since, to wait for a return code of an external script is still blocking). I am not trying to find a workaround, I need to know how the fastcgi_reload_timeout works (if I understood it properly), if I properly disabled PHP buffering for my example case and how eventually to control those timeouts. I'd like to address the central problem here, not closing my eyes on it. --- *B. R.* On Sun, May 26, 2013 at 10:24 PM, Steve Holdoway wrote: > Surely, you're still serialising the transfer with a loop? > > On Sun, 2013-05-26 at 22:11 -0400, B.R. wrote: > > Thanks for your answer. > > > > I didn't go into specifics because my problem doesn't rely at the > > application-level logic. > > > > What you describe is what my script does already. > > > > > > However in this particular case I have 16 files weighting each a few > > MB which need to be transfered back at once. > > > > > > PHP allocates 30s for each loop turn (far enough to copy the file + > > echo some output message about successes/failed completion). > > > > Nginx cuts the execution avec fastcgi_read_timeout time even with my > > efforts to cut down any buffering on PHP side (thus forcing the output > > to be sent to Nginx to reinitialize the timeout counter). > > > > That Nginx action is the center of my attention right now. How can I > > get read of it in a scalable fashion (ie no fastcgi_read_time = > > 9999999) ? > > --- > > B. R. > > > > > > > > > > On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway > > wrote: > > Write a script that lists the remote files, then checks for > > the > > existence of the file locally, and copy it if it doesn't > > exist? That way > > no internal loop is used - use a different exit code to note > > whether > > there was one copied, or there were none ready. > > > > That way you scale for a single file transfer. There's nothing > > to be > > gained from looping internally - well performance-wise that > > is. > > > > Steve > > > > On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > > > No ideas? > > > > > > --- > > > B. R. > > > > > > > > > On Sat, May 25, 2013 at 1:01 PM, B.R. > > wrote: > > > Hello, > > > > > > > > > I am trying to understand how fastcgi_read_timout > > works in > > > Nginx. 
> > > > > > > > > Here is what I wanna do: > > > > > > I list files (few MB each) on a distant place which > > I copy one > > > by one (loop) on the local disk through PHP. > > > > > > I do not know the amount of files I need to copy, > > thus I do > > > not know the total amount of time I need for the > > script to > > > finish its execution. What I know is that I can > > ensure is a > > > processing time limit per file. > > > > > > I would like my script not to be forcefully > > interrupted by > > > either sides (PHP or Nginx) before completion. > > > > > > > > > > > > What I did so far: > > > > > > - PHP has a 'max_execution_time' of 30s (default?). > > In the > > > loop copying files, I use the set_time_limit() > > procedure to > > > reinitialize the limit before each file copy, hence > > each file > > > processing has 30s to go: way enough! > > > > > > > > > - The problem seems to lie on the Nginx side, with > > the > > > 'fastcgi_read_timeout' configuration entry. > > > > > > I can't ensure what maximum time I need, and I would > > like not > > > to use way-off values such as 2 weeks or 1 year > > there. ;o) > > > > > > What I understood from the documentation is that the > > timeout > > > is reinitialized after a successful read: am I > > right? > > > > > > > > > The challenge is now to cut any buffering occurring > > on the PHP > > > side and let Nginx manage it (since the buffering > > will occur > > > after content is being read from the backend). Here > > is what I > > > did: > > > > > > * PHP's zlib.output_compression is deactivated by > > default in > > > PHP > > > > > > * I deactivated PHP's output_buffering (default is > > 4096 bytes) > > > > > > * I am using the PHP flush() procedure at the end of > > each > > > iteration of the copying loop, after a message is > > written to > > > the output > > > > > > > > > > > > Current state: > > > > > > * The script seems to still be cut after the > > expiration of the > > > 'fastcgi_read_timout' limit (confirmed by the error > > log entry > > > 'upstream timed out (110: Connection timed out) > > while reading > > > upstream') > > > > > > * The PHP loop is entered several times since > > multiple files > > > have been copied > > > > > > * The output sent to the browser is cut before any > > output from > > > the loop appears > > > > > > > > > It seems that there is still some unwanted buffering > > on the > > > PHP side. > > > > > > I also note that the PHP's flush() procedure doesn't > > seem to > > > work since the output in the browser doesn't contain > > any > > > message written after eahc file copy. > > > > > > > > > Am I misunderstanding something about Nginx here > > (especially > > > about the 'fastcgi_read_timeout' directive)? > > > > > > Have you any intel/piece of advice on hte matter? > > > > > > Thanks, > > > > > > --- > > > B. R. 
> > > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Steve Holdoway BSc(Hons) MNZCS > > http://www.greengecko.co.nz > > MSN: steve at greengecko.co.nz > > Skype: sholdowa > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon May 27 02:44:22 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 27 May 2013 14:44:22 +1200 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: <1369619164.17578.173.camel@steve-new> <1369621460.17578.192.camel@steve-new> Message-ID: <1369622662.17578.197.camel@steve-new> OK, I leave you to it. However, asynchronously spawning subprocesses *will* allow you to parallelise the process. I'd call it design, rather than a workaround, but there you go (: Steve On Sun, 2013-05-26 at 22:38 -0400, B.R. wrote: > One way or another, even if an external script is called, PHP will > need to wait for the scripts completion, making the parallelization > impossible or at least useless (since, to wait for a return code of an > external script is still blocking). > > > I am not trying to find a workaround, I need to know how the > fastcgi_reload_timeout works (if I understood it properly), if I > properly disabled PHP buffering for my example case and how eventually > to control those timeouts. > > I'd like to address the central problem here, not closing my eyes on > it. > > > --- > B. R. > > > On Sun, May 26, 2013 at 10:24 PM, Steve Holdoway > wrote: > Surely, you're still serialising the transfer with a loop? > > On Sun, 2013-05-26 at 22:11 -0400, B.R. wrote: > > Thanks for your answer. > > > > I didn't go into specifics because my problem doesn't rely > at the > > application-level logic. > > > > What you describe is what my script does already. > > > > > > However in this particular case I have 16 files weighting > each a few > > MB which need to be transfered back at once. > > > > > > PHP allocates 30s for each loop turn (far enough to copy the > file + > > echo some output message about successes/failed completion). > > > > Nginx cuts the execution avec fastcgi_read_timeout time even > with my > > efforts to cut down any buffering on PHP side (thus forcing > the output > > to be sent to Nginx to reinitialize the timeout counter). > > > > That Nginx action is the center of my attention right now. > How can I > > get read of it in a scalable fashion (ie no > fastcgi_read_time = > > 9999999) ? > > --- > > B. R. > > > > > > > > > > On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway > > wrote: > > Write a script that lists the remote files, then > checks for > > the > > existence of the file locally, and copy it if it > doesn't > > exist? That way > > no internal loop is used - use a different exit code > to note > > whether > > there was one copied, or there were none ready. 
> > > > That way you scale for a single file transfer. > There's nothing > > to be > > gained from looping internally - well > performance-wise that > > is. > > > > Steve > > > > On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > > > No ideas? > > > > > > --- > > > B. R. > > > > > > > > > On Sat, May 25, 2013 at 1:01 PM, B.R. > > wrote: > > > Hello, > > > > > > > > > I am trying to understand how > fastcgi_read_timout > > works in > > > Nginx. > > > > > > > > > Here is what I wanna do: > > > > > > I list files (few MB each) on a distant > place which > > I copy one > > > by one (loop) on the local disk through > PHP. > > > > > > I do not know the amount of files I need > to copy, > > thus I do > > > not know the total amount of time I need > for the > > script to > > > finish its execution. What I know is that > I can > > ensure is a > > > processing time limit per file. > > > > > > I would like my script not to be > forcefully > > interrupted by > > > either sides (PHP or Nginx) before > completion. > > > > > > > > > > > > What I did so far: > > > > > > - PHP has a 'max_execution_time' of 30s > (default?). > > In the > > > loop copying files, I use the > set_time_limit() > > procedure to > > > reinitialize the limit before each file > copy, hence > > each file > > > processing has 30s to go: way enough! > > > > > > > > > - The problem seems to lie on the Nginx > side, with > > the > > > 'fastcgi_read_timeout' configuration > entry. > > > > > > I can't ensure what maximum time I need, > and I would > > like not > > > to use way-off values such as 2 weeks or 1 > year > > there. ;o) > > > > > > What I understood from the documentation > is that the > > timeout > > > is reinitialized after a successful read: > am I > > right? > > > > > > > > > The challenge is now to cut any buffering > occurring > > on the PHP > > > side and let Nginx manage it (since the > buffering > > will occur > > > after content is being read from the > backend). Here > > is what I > > > did: > > > > > > * PHP's zlib.output_compression is > deactivated by > > default in > > > PHP > > > > > > * I deactivated PHP's output_buffering > (default is > > 4096 bytes) > > > > > > * I am using the PHP flush() procedure at > the end of > > each > > > iteration of the copying loop, after a > message is > > written to > > > the output > > > > > > > > > > > > Current state: > > > > > > * The script seems to still be cut after > the > > expiration of the > > > 'fastcgi_read_timout' limit (confirmed by > the error > > log entry > > > 'upstream timed out (110: Connection timed > out) > > while reading > > > upstream') > > > > > > * The PHP loop is entered several times > since > > multiple files > > > have been copied > > > > > > * The output sent to the browser is cut > before any > > output from > > > the loop appears > > > > > > > > > It seems that there is still some unwanted > buffering > > on the > > > PHP side. > > > > > > I also note that the PHP's flush() > procedure doesn't > > seem to > > > work since the output in the browser > doesn't contain > > any > > > message written after eahc file copy. > > > > > > > > > Am I misunderstanding something about > Nginx here > > (especially > > > about the 'fastcgi_read_timeout' > directive)? > > > > > > Have you any intel/piece of advice on hte > matter? > > > > > > Thanks, > > > > > > --- > > > B. R. 
> > > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Steve Holdoway BSc(Hons) MNZCS > > > http://www.greengecko.co.nz > > MSN: steve at greengecko.co.nz > > Skype: sholdowa > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MNZCS http://www.greengecko.co.nz MSN: steve at greengecko.co.nz Skype: sholdowa From reallfqq-nginx at yahoo.fr Mon May 27 02:54:02 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 26 May 2013 22:54:02 -0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: <1369622662.17578.197.camel@steve-new> References: <1369619164.17578.173.camel@steve-new> <1369621460.17578.192.camel@steve-new> <1369622662.17578.197.camel@steve-new> Message-ID: Thanks for programming 101. I'll keep your advice when my goal will be optimizing my current work, which is not currently the case. I do not simply want something to work here. I am fully capable of finding workarounds whenever I need/want them. I'll leave the 'I do not care how it works as long as it works' motto to business-related goals ;o) I need to understand the PHP/Nginx communication. And having searched for it on the Web showed me a lot of unsatisfaying/dirty workarounds, no real solution/explanation. If anyone could enlighten me on those Nginx timeouts, I'd be more than glad! --- *B. R.* On Sun, May 26, 2013 at 10:44 PM, Steve Holdoway wrote: > OK, I leave you to it. > > However, asynchronously spawning subprocesses *will* allow you to > parallelise the process. I'd call it design, rather than a workaround, > but there you go (: > > Steve > On Sun, 2013-05-26 at 22:38 -0400, B.R. wrote: > > One way or another, even if an external script is called, PHP will > > need to wait for the scripts completion, making the parallelization > > impossible or at least useless (since, to wait for a return code of an > > external script is still blocking). > > > > > > I am not trying to find a workaround, I need to know how the > > fastcgi_reload_timeout works (if I understood it properly), if I > > properly disabled PHP buffering for my example case and how eventually > > to control those timeouts. > > > > I'd like to address the central problem here, not closing my eyes on > > it. > > > > > > --- > > B. R. > > > > > > On Sun, May 26, 2013 at 10:24 PM, Steve Holdoway > > wrote: > > Surely, you're still serialising the transfer with a loop? > > > > On Sun, 2013-05-26 at 22:11 -0400, B.R. wrote: > > > Thanks for your answer. > > > > > > I didn't go into specifics because my problem doesn't rely > > at the > > > application-level logic. > > > > > > What you describe is what my script does already. 
> > > > > > > > > However in this particular case I have 16 files weighting > > each a few > > > MB which need to be transfered back at once. > > > > > > > > > PHP allocates 30s for each loop turn (far enough to copy the > > file + > > > echo some output message about successes/failed completion). > > > > > > Nginx cuts the execution avec fastcgi_read_timeout time even > > with my > > > efforts to cut down any buffering on PHP side (thus forcing > > the output > > > to be sent to Nginx to reinitialize the timeout counter). > > > > > > That Nginx action is the center of my attention right now. > > How can I > > > get read of it in a scalable fashion (ie no > > fastcgi_read_time = > > > 9999999) ? > > > --- > > > B. R. > > > > > > > > > > > > > > > On Sun, May 26, 2013 at 9:46 PM, Steve Holdoway > > > wrote: > > > Write a script that lists the remote files, then > > checks for > > > the > > > existence of the file locally, and copy it if it > > doesn't > > > exist? That way > > > no internal loop is used - use a different exit code > > to note > > > whether > > > there was one copied, or there were none ready. > > > > > > That way you scale for a single file transfer. > > There's nothing > > > to be > > > gained from looping internally - well > > performance-wise that > > > is. > > > > > > Steve > > > > > > On Sun, 2013-05-26 at 21:31 -0400, B.R. wrote: > > > > No ideas? > > > > > > > > --- > > > > B. R. > > > > > > > > > > > > On Sat, May 25, 2013 at 1:01 PM, B.R. > > > wrote: > > > > Hello, > > > > > > > > > > > > I am trying to understand how > > fastcgi_read_timout > > > works in > > > > Nginx. > > > > > > > > > > > > Here is what I wanna do: > > > > > > > > I list files (few MB each) on a distant > > place which > > > I copy one > > > > by one (loop) on the local disk through > > PHP. > > > > > > > > I do not know the amount of files I need > > to copy, > > > thus I do > > > > not know the total amount of time I need > > for the > > > script to > > > > finish its execution. What I know is that > > I can > > > ensure is a > > > > processing time limit per file. > > > > > > > > I would like my script not to be > > forcefully > > > interrupted by > > > > either sides (PHP or Nginx) before > > completion. > > > > > > > > > > > > > > > > What I did so far: > > > > > > > > - PHP has a 'max_execution_time' of 30s > > (default?). > > > In the > > > > loop copying files, I use the > > set_time_limit() > > > procedure to > > > > reinitialize the limit before each file > > copy, hence > > > each file > > > > processing has 30s to go: way enough! > > > > > > > > > > > > - The problem seems to lie on the Nginx > > side, with > > > the > > > > 'fastcgi_read_timeout' configuration > > entry. > > > > > > > > I can't ensure what maximum time I need, > > and I would > > > like not > > > > to use way-off values such as 2 weeks or 1 > > year > > > there. ;o) > > > > > > > > What I understood from the documentation > > is that the > > > timeout > > > > is reinitialized after a successful read: > > am I > > > right? > > > > > > > > > > > > The challenge is now to cut any buffering > > occurring > > > on the PHP > > > > side and let Nginx manage it (since the > > buffering > > > will occur > > > > after content is being read from the > > backend). 
Here > > > is what I > > > > did: > > > > > > > > * PHP's zlib.output_compression is > > deactivated by > > > default in > > > > PHP > > > > > > > > * I deactivated PHP's output_buffering > > (default is > > > 4096 bytes) > > > > > > > > * I am using the PHP flush() procedure at > > the end of > > > each > > > > iteration of the copying loop, after a > > message is > > > written to > > > > the output > > > > > > > > > > > > > > > > Current state: > > > > > > > > * The script seems to still be cut after > > the > > > expiration of the > > > > 'fastcgi_read_timout' limit (confirmed by > > the error > > > log entry > > > > 'upstream timed out (110: Connection timed > > out) > > > while reading > > > > upstream') > > > > > > > > * The PHP loop is entered several times > > since > > > multiple files > > > > have been copied > > > > > > > > * The output sent to the browser is cut > > before any > > > output from > > > > the loop appears > > > > > > > > > > > > It seems that there is still some unwanted > > buffering > > > on the > > > > PHP side. > > > > > > > > I also note that the PHP's flush() > > procedure doesn't > > > seem to > > > > work since the output in the browser > > doesn't contain > > > any > > > > message written after eahc file copy. > > > > > > > > > > > > Am I misunderstanding something about > > Nginx here > > > (especially > > > > about the 'fastcgi_read_timeout' > > directive)? > > > > > > > > Have you any intel/piece of advice on hte > > matter? > > > > > > > > Thanks, > > > > > > > > --- > > > > B. R. > > > > > > > > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > > Steve Holdoway BSc(Hons) MNZCS > > > > > http://www.greengecko.co.nz > > > MSN: steve at greengecko.co.nz > > > Skype: sholdowa > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > Steve Holdoway BSc(Hons) MNZCS > > http://www.greengecko.co.nz > > MSN: steve at greengecko.co.nz > > Skype: sholdowa > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MNZCS > http://www.greengecko.co.nz > MSN: steve at greengecko.co.nz > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon May 27 10:19:47 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 May 2013 14:19:47 +0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: Message-ID: <20130527101947.GA72282@mdounin.ru> Hello! On Sat, May 25, 2013 at 01:01:32PM -0400, B.R. wrote: > Hello, > > I am trying to understand how fastcgi_read_timout works in Nginx. > > Here is what I wanna do: > I list files (few MB each) on a distant place which I copy one by one > (loop) on the local disk through PHP. 
> I do not know the amount of files I need to copy, thus I do not know the > total amount of time I need for the script to finish its execution. What I > know is that I can ensure is a processing time limit per file. > I would like my script not to be forcefully interrupted by either sides > (PHP or Nginx) before completion. > > > What I did so far: > - PHP has a 'max_execution_time' of 30s (default?). In the loop copying > files, I use the set_time_limit() procedure to reinitialize the limit > before each file copy, hence each file processing has 30s to go: way enough! > > - The problem seems to lie on the Nginx side, with the > 'fastcgi_read_timeout' configuration entry. > I can't ensure what maximum time I need, and I would like not to use > way-off values such as 2 weeks or 1 year there. ;o) > What I understood from the > documentationis > that the timeout is reinitialized after a successful read: am I right? Yes. > The challenge is now to cut any buffering occurring on the PHP side and let > Nginx manage it (since the buffering will occur after content is being read > from the backend). Here is what I did: > * PHP's zlib.output_compression is deactivated by default in PHP > * I deactivated PHP's output_buffering (default is 4096 bytes) > * I am using the PHP flush() procedure at the end of each iteration of the > copying loop, after a message is written to the output > > > Current state: > * The script seems to still be cut after the expiration of the > 'fastcgi_read_timout' limit (confirmed by the error log entry 'upstream > timed out (110: Connection timed out) while reading upstream') > * The PHP loop is entered several times since multiple files have been > copied > * The output sent to the browser is cut before any output from the loop > appears > > It seems that there is still some unwanted buffering on the PHP side. > I also note that the PHP's flush() procedure doesn't seem to work since the > output in the browser doesn't contain any message written after eahc file > copy. There is buffering on nginx side, too, which may prevent last part of the response from appearing in the output as seen by a browser. It doesn't explain why read timeout isn't reset though. > Am I misunderstanding something about Nginx here (especially about the > 'fastcgi_read_timeout' directive)? Your understanding looks correct. > Have you any intel/piece of advice on hte matter? You may try looking into debug log, see http://nginx.org/en/docs/debugging_log.html, and/or tcpdump between nginx and php. It should help to examine what actually is seen by nginx from php. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon May 27 12:11:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 May 2013 16:11:43 +0400 Subject: nginx got an 500 error when uploading files In-Reply-To: References: Message-ID: <20130527121143.GB72282@mdounin.ru> Hello! On Sun, May 26, 2013 at 12:03:58PM +0800, baalchina wrote: > Hi everyone, > > I am using nginx 1.4.1 and php-fpm 5.3.25. > > Everything is fine except when I uploading files to server. > > I have a Discuz! x3(a forum system) and when I post threads, except > attachment everything is fine, but when I add a small files(such as 30k > rar,jpg or else), nginx give me a 500 error. > > I checked the nginx/site error, but have no usefule information. nginx only > tell me a got an 500 error, but didn't tell me why. 
> > I tried to increase the fast_cgi parama: > > fastcgi_connect_timeout 300; > fastcgi_send_timeout 300; > fastcgi_read_timeout 300; > fastcgi_buffer_size 1024k; > fastcgi_buffers 32 64k; > fastcgi_busy_buffers_size 1500k; > fastcgi_temp_file_write_size 1800k; > > But still the same. > > > So, please help me for solve this problem, I wonder where can I got the > error message? If the 500 is returned by nginx, there should be an explanation in error log. If there is nothing in error log - this suggests the 500 response is generated by php, and you have to dig into the php code to see what happens there. -- Maxim Dounin http://nginx.org/en/donation.html From sebelk at gmail.com Mon May 27 14:31:37 2013 From: sebelk at gmail.com (Sergio Belkin) Date: Mon, 27 May 2013 11:31:37 -0300 Subject: Rewriting and proxy problem In-Reply-To: <20130525092716.GE27406@craic.sysops.org> References: <20130525092716.GE27406@craic.sysops.org> Message-ID: 2013/5/25 Francis Daly > On Fri, May 24, 2013 at 02:39:58PM -0300, Sergio Belkin wrote: > > Hi there, > > > I am completeley newbie to nginx > > Welcome. > > The nginx config follows its own logic, which may not match your previous > experiences. When you understand that, you'll have a much better chance > of knowing the configuration you are looking for. > > Yup, I've began to read the documentation :) > One important feature is that one request is handled in one > location. Another is that one http request does not necessarily correspond > to one nginx request. > > In this case... > > you make the request for /demoX, and the best-match location is "location > /demo", and so that is the one that is used. > > > location /demo { > > rewrite ^ /upvc; > > Once that happens, you are using the new internal-to-nginx request > "/upvc", so a new choice for best-match location happens, and the rest > of this location{} block is not relevant. > > > proxy_pass http://127.0.0.1:8080; > > > include fastcgi_params; > > Aside: the fastcgi_params file will typically have content relevant for > when fastcgi_pass is used, not for when proxy_pass is used. > > So, the http request for /demoX leads to the nginx request for /upvc, > which matches this location: > > > location /upvc { > > alias /var/lib/tomcat6/webapps/demo; > > index demo3.jsp; > > expires 1m; > > And here, you say "serve it from the filesystem", so that's what it does. > > (I suspect that you actually get a http redirect to /upvc/, which then > returns the content of /var/lib/tomcat6/webapps/demo/demo3.jsp. Using > "curl" as the browser tends to make clear what is happening.) > > } > Rewrite is working but nginx is not. proxying to tomcat, because of that > returns the jsp file as a plain text file. > > Please could you help me? The hardest part of nginx config that I find, it working out what exactly > you want to have happen for each request. > > From the above sample config, I'm not sure what it is that you want. > > > Perhaps putting the proxy_pass in the "location /upvc" block will work? Or > perhaps removing the rewrite? > I did it, and tried using curl, tomcat complains that it cannot find /upvc. > > If you can describe what behaviour you want, then possibly the nginx > config to achieve it will become clear. 
> I'd want that when you type http://example.com/upvc proxies the /var/lib/tomcat6/webapps/demo/ demo3.jsp file to tomcat Thanks for your nice explanation > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- -- Sergio Belkin http://www.sergiobelkin.com Watch More TV http://sebelk.blogspot.com LPIC-2 Certified - http://www.lpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebelk at gmail.com Mon May 27 14:44:06 2013 From: sebelk at gmail.com (Sergio Belkin) Date: Mon, 27 May 2013 11:44:06 -0300 Subject: Rewriting and proxy problem In-Reply-To: References: <20130525092716.GE27406@craic.sysops.org> Message-ID: 2013/5/27 Sergio Belkin > > 2013/5/25 Francis Daly > >> On Fri, May 24, 2013 at 02:39:58PM -0300, Sergio Belkin wrote: >> >> Hi there, >> >> > I am completeley newbie to nginx >> >> Welcome. >> >> The nginx config follows its own logic, which may not match your previous >> experiences. When you understand that, you'll have a much better chance >> of knowing the configuration you are looking for. >> >> > Yup, I've began to read the documentation :) > > >> One important feature is that one request is handled in one >> location. Another is that one http request does not necessarily correspond >> to one nginx request. >> >> In this case... >> >> you make the request for /demoX, and the best-match location is "location >> /demo", and so that is the one that is used. >> >> > location /demo { >> > rewrite ^ /upvc; >> >> Once that happens, you are using the new internal-to-nginx request >> "/upvc", so a new choice for best-match location happens, and the rest >> of this location{} block is not relevant. >> >> > proxy_pass http://127.0.0.1:8080; >> >> > include fastcgi_params; >> >> Aside: the fastcgi_params file will typically have content relevant for >> when fastcgi_pass is used, not for when proxy_pass is used. >> >> So, the http request for /demoX leads to the nginx request for /upvc, >> which matches this location: >> >> > location /upvc { >> > alias /var/lib/tomcat6/webapps/demo; >> > index demo3.jsp; >> > expires 1m; >> >> And here, you say "serve it from the filesystem", so that's what it does. >> >> (I suspect that you actually get a http redirect to /upvc/, which then >> returns the content of /var/lib/tomcat6/webapps/demo/demo3.jsp. Using >> "curl" as the browser tends to make clear what is happening.) >> > > > } > > > Rewrite is working but nginx is not. proxying to tomcat, because of that > > returns the jsp file as a plain text file. > > > > Please could you help me? > > The hardest part of nginx config that I find, it working out what exactly >> you want to have happen for each request. >> >> From the above sample config, I'm not sure what it is that you want. >> > > > > >> >> Perhaps putting the proxy_pass in the "location /upvc" block will work? Or >> perhaps removing the rewrite? >> > > > I did it, and tried using curl, tomcat complains that it cannot find > /upvc. > > >> >> If you can describe what behaviour you want, then possibly the nginx >> config to achieve it will become clear. 
>> > > I'd want that when you type http://example.com/upvc proxies the > /var/lib/tomcat6/webapps/demo/ > demo3.jsp file to tomcat > > > Thanks for your nice explanation > > >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > -- > Anyway it's obvious that results in "not found" tomcat upvc directory does not exist,. Shame on me :'-( -- -- Sergio Belkin http://www.sergiobelkin.com Watch More TV http://sebelk.blogspot.com LPIC-2 Certified - http://www.lpi.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb at waeme.net Mon May 27 16:52:49 2013 From: sb at waeme.net (Sergey Budnevitch) Date: Mon, 27 May 2013 20:52:49 +0400 Subject: Performance trouble after migration Squeeze to Wheezy In-Reply-To: References: Message-ID: <09C554B9-0DA0-476D-B8A7-7F28E548832C@waeme.net> On 25 May2013, at 01:16 , B.R. wrote: > Hello, > > First of all I need to emphasize the fact that I know WHeezy is not yet supported. Packages for wheezy and raring have been uploaded. > What I am trying to determine how WHeezy could impact Nginx (compiled for Squeeze) performance. > > Since I made the upgrade, big files are being served in a slow fashion (~200 kiB/s). > The directory serving them is configured with AIO and worked perfectly before system changes. I see no performance degradation in test with aio enabled on VM with wheezy in comparison with VM with squeeze. From reallfqq-nginx at yahoo.fr Mon May 27 17:36:31 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 27 May 2013 13:36:31 -0400 Subject: Performance trouble after migration Squeeze to Wheezy In-Reply-To: <09C554B9-0DA0-476D-B8A7-7F28E548832C@waeme.net> References: <09C554B9-0DA0-476D-B8A7-7F28E548832C@waeme.net> Message-ID: Thanks for the work on Wheezy ;o) Thanks also for the performance tests. I'll look into it. --- *B. R.* On Mon, May 27, 2013 at 12:52 PM, Sergey Budnevitch wrote: > > On 25 May2013, at 01:16 , B.R. wrote: > > > Hello, > > > > First of all I need to emphasize the fact that I know WHeezy is not yet > supported. > > Packages for wheezy and raring have been uploaded. > > > What I am trying to determine how WHeezy could impact Nginx (compiled > for Squeeze) performance. > > > > Since I made the upgrade, big files are being served in a slow fashion > (~200 kiB/s). > > The directory serving them is configured with AIO and worked perfectly > before system changes. > > I see no performance degradation in test with aio enabled on VM with > wheezy in comparison > with VM with squeeze. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 27 19:16:49 2013 From: francis at daoine.org (Francis Daly) Date: Mon, 27 May 2013 20:16:49 +0100 Subject: Rewriting and proxy problem In-Reply-To: References: <20130525092716.GE27406@craic.sysops.org> Message-ID: <20130527191649.GI27406@craic.sysops.org> On Mon, May 27, 2013 at 11:31:37AM -0300, Sergio Belkin wrote: Hi there, > I'd want that when you type http://example.com/upvc proxies the > /var/lib/tomcat6/webapps/demo/ > demo3.jsp file to tomcat Just for clarity, "proxy_pass" proxies to a url, not to a file. 
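As a general illustration of that point (a sketch with made-up paths, not part of the original reply): when proxy_pass is given a URI part, the portion of the request URI that matches the location prefix is replaced by that URI before the request is passed upstream:

    # a request for /upvc/foo is passed upstream as /demo/foo
    location /upvc/ {
        proxy_pass http://127.0.0.1:8080/demo/;
    }

The suggestion that follows uses a rewrite instead, because only one specific URI needs to be mapped.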
So you probably want it to proxy to http://127.0.0.1:8080/demo/demo3.jsp

If I've got the mapping of url/filename wrong, adjust it as necessary. You haven't said what any *other* requests to nginx should do, so I will show an example for only exactly what you said. There is not an obvious mapping from /upvc to /demo/demo3.jsp, so I won't include any kind of generic replacement here.

You will find very useful information on all of this at http://nginx.org/r/proxy_pass

So, in the appropriate server block, you want something like:

    location = /upvc {
        rewrite ^ /demo/demo3.jsp break;
        proxy_pass http://127.0.0.1:8080;
    }

This will handle requests for /upvc and /upvc?something. The documentation should explain why it works. All of the other directives can be added back, when you know why they are needed here.

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us  Mon May 27 19:44:49 2013
From: nginx-forum at nginx.us (gadh)
Date: Mon, 27 May 2013 15:44:49 -0400
Subject: nginx 1.4.1 + 'gzip on' causes download of file instead of displaying it in browser
Message-ID: 

When I use nginx 1.4.1 + 'gzip on', once in every 2 requests I get index.php (whose output is text/html) downloaded by the browser as an unknown file type instead of displayed in it (the browser shows the "download & save" dialog and reports the content type as 'application/octet-stream'). After I download it (Firefox saves it with the extension 'part') I see in the file the response header (200 OK etc.) plus the body, chunked and still gzipped. When not using gzip, all works fine. When switching back to nginx 1.2.8 it never happens, with the same configuration / site / request. I reach the site through the proxy module, using HTTP/1.1.

Thanks in advance
Gad

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239608,239608#msg-239608

From nginx-forum at nginx.us  Mon May 27 22:41:07 2013
From: nginx-forum at nginx.us (cachito)
Date: Mon, 27 May 2013 18:41:07 -0400
Subject: SPDY certificates and Wordpress multisite
Message-ID: 

Hello. I manage a small blog network (17 blogs) that has a lot of traffic (20MM visits/month). I use Wordpress Multisite to manage it; each blog has its own domain name, and all are served from the same WP install. I'm thinking about implementing SPDY to speed up the sites, and I know I need SSL certificates for this to work.

1) Will I need a certificate for each website? Or just one certificate for the main site to encrypt the connection and that's it?

2) I have a single server directive holding most of the configuration stuff with 'default_server', and then some individual settings for each site, mostly www-to-no-www redirects and legacy url rewrites. If I need a certificate for each website, do I need to replicate all the wordpress config for each domain, having a complete server directive with all WP + PHP stuff in it for each domain? Does this impact nginx's performance in any way?

This one's just lazy: is the RPM package hosted in the nginx repo (for yum in Fedora/CentOS, etc) compiled with SPDY on, or will I need to compile my own version?

Thanks in advance. Cheers!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239609,239609#msg-239609

From contact at jpluscplusm.com  Mon May 27 22:59:33 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Mon, 27 May 2013 23:59:33 +0100
Subject: SPDY certificates and Wordpress multisite
In-Reply-To: References: Message-ID: 

I don't use SPDY, so take what I say as being from an SSL perspective, not a SPDY one.
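For orientation on the questions above (a sketch, not taken from the thread, with hypothetical names and paths): with a single wildcard or SAN/UCC certificate covering all of the blog domains, the sites can share one server{} block, and SPDY in nginx 1.3.15+ built with --with-http_spdy_module is enabled by an extra parameter on the listen directive:

    server {
        listen 443 ssl spdy;                               # requires the SPDY module to be compiled in
        server_name blog1.example.com blog2.example.net;   # hypothetical domains covered by one cert
        ssl_certificate     /etc/nginx/ssl/blogs.crt;      # hypothetical certificate paths
        ssl_certificate_key /etc/nginx/ssl/blogs.key;
        # ... shared Wordpress/PHP configuration, as in the existing server block
    }

Whether one certificate is enough depends on the certificate type (wildcard or SAN versus per-domain), as the reply below explains.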
If your 17 blogs live under the same domain that you own, you could look at using a wildcard SSL cert. You'd only need a single IP/server{} combo for each wildcard cert, and a single wildcard cert for each unique domain suffix you own.

If that's not helpful, and the domains are all different, then you could look at a UCC (or SAN) SSL cert. This would also allow you to use a single IP/server{} block, but would probably be uneconomical if your domain list changes frequently.

Finally, if you need/get a separate SSL cert for each domain, you will need a distinct IP and server{} block for each.

17 domains with SSL and a separate server{} for each will not affect performance, IME. You'll just proxy_pass to the same backend from each.

HTH,
Jonathan
--
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From nginx-forum at nginx.us  Tue May 28 03:59:04 2013
From: nginx-forum at nginx.us (Sylvia)
Date: Mon, 27 May 2013 23:59:04 -0400
Subject: SPDY certificates and Wordpress multisite
In-Reply-To: References: Message-ID: 

Regarding the recompile question: SPDY is supported with OpenSSL 1.0.1, so if your distro is using an earlier version, SPDY will not be supported. You can check "nginx -V" to see whether SPDY has been enabled. When recompiling, you can use the openssl source package and link it statically to nginx if needed:

    --with-http_spdy_module --with-openssl=/path/to/unpacked/source/openssl-1.0.1e

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239609,239611#msg-239611

From nginx-forum at nginx.us  Tue May 28 08:03:07 2013
From: nginx-forum at nginx.us (jesse)
Date: Tue, 28 May 2013 04:03:07 -0400
Subject: safe to call ngx_http_subrequest inside location handler?
Message-ID: 

Hello,

So according to Evan Miller's tutorial and nginx's own source code, it seems only filters can send subrequests. But I have tried to call ngx_http_subrequest inside my location handler, and everything seems to work fine. I am using version 1.2.8.

Just want to ask and see if it's safe to do so.

Thanks a lot!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239619,239619#msg-239619

From mr.kingcasino at gmail.com  Tue May 28 08:14:16 2013
From: mr.kingcasino at gmail.com (Mr.Kingcasino)
Date: Tue, 28 May 2013 15:14:16 +0700
Subject: How to convert Lighttpd rewrite to NGinx rewrite
Message-ID: <003801ce5b7b$5b0d33e0$11279ba0$@gmail.com>

Hi All,

I have a problem converting Lighttpd rules to an Nginx rewrite. This is my Lighttpd rule:

    "^/pthumb/([^\/]+)\/(.+)$" => "/pthumb/index.php?$1&f=$2"

And this is the Nginx rewrite converted from the Lighttpd rule above:

    rewrite ^/pthumb/([^\/]+)\/(.+)$ /pthumb/index.php?$1&f=$2

But it does not work, and it is very hard for me. Please help me convert this rule.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at jpluscplusm.com  Tue May 28 08:24:22 2013
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Tue, 28 May 2013 09:24:22 +0100
Subject: How to convert Lighttpd rewrite to NGinx rewrite
In-Reply-To: <003801ce5b7b$5b0d33e0$11279ba0$@gmail.com>
References: <003801ce5b7b$5b0d33e0$11279ba0$@gmail.com>
Message-ID: 

I don't believe you need to escape your backslashes like you're doing, though I'm not sure that's your actual problem. It would help if you gave some more detail about what "not working" looks like. How are you testing? What do you see? Etc etc.
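For reference, a minimal sketch of one common way to express that lighttpd rule in nginx; it assumes /pthumb/index.php is already handled by an existing PHP/fastcgi location, which is not shown here:

    location /pthumb/ {
        # send /pthumb/<key>/<file> to index.php as "?<key>&f=<file>"
        rewrite ^/pthumb/([^/]+)/(.+)$ /pthumb/index.php?$1&f=$2 last;
    }

The "last" flag makes nginx re-run location matching for the rewritten URI, so the request then falls into whatever location normally executes PHP. Whether this matches the poster's actual problem depends on details not given in the message.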
Cheers, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Tue May 28 08:52:14 2013 From: nginx-forum at nginx.us (gadh) Date: Tue, 28 May 2013 04:52:14 -0400 Subject: nginx 1.4.1 + 'gzip on' causes download of file instead of displaying it in browser In-Reply-To: References: Message-ID: <46ace552e1c2b3314dae9741a9922ea5.NginxMailingListEnglish@forum.nginx.org> i found the bug - the web server returned in "Content-Type" header just "text/html" and not added "charset=UTF-8". why text/html is not enough ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239608,239622#msg-239622 From maxim at nginx.com Tue May 28 10:14:14 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 28 May 2013 14:14:14 +0400 Subject: Debian 7 In-Reply-To: <51921FB3.3080906@geistert.info> References: <51921FB3.3080906@geistert.info> Message-ID: <51A48376.7000300@nginx.com> On 5/14/13 3:27 PM, David Geistert wrote: > Hey, > I only want to ask, when the Debian Wheezy package will be released > in http://nginx.org/packages/debian/ > As a follow-up: http://mailman.nginx.org/pipermail/nginx/2013-May/039096.html http://nginx.org/en/linux_packages.html -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/services.html From mdounin at mdounin.ru Tue May 28 10:33:03 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 May 2013 14:33:03 +0400 Subject: nginx 1.4.1 + 'gzip on' causes download of file instead of displaying it in browser In-Reply-To: <46ace552e1c2b3314dae9741a9922ea5.NginxMailingListEnglish@forum.nginx.org> References: <46ace552e1c2b3314dae9741a9922ea5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130528103303.GO72282@mdounin.ru> Hello! On Tue, May 28, 2013 at 04:52:14AM -0400, gadh wrote: > i found the bug - the web server returned in "Content-Type" header just > "text/html" and not added "charset=UTF-8". > why text/html is not enough ? For nginx, there should be no difference. But it looks like you are debugging complete system which includes backend software, nginx and a browser. If you think that the problem is in nginx - first of all I would recommend you to look into debug log. See http://nginx.org/en/docs/debugging_log.html for details. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue May 28 10:52:47 2013 From: nginx-forum at nginx.us (tonimarmol) Date: Tue, 28 May 2013 06:52:47 -0400 Subject: 404 on Prestashop 1.5 under nginx Message-ID: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> I have a problem with nginx giving a 404 error on a Prestashop 1.5.4.1 site. This is the URL returning the 404: www.domain.com/es/index.php?controller=order-confirmation The prestashop is under multilanguage enviroment. Then I have: www.domain.com/en/ www.domain.com/es/ www.domain.com/fr/ www.domain.com/de/ The URL rewriting runs fine, except when the url have the "index.php". Then nginx returns de 404. I think the problem is on nginx virtualhost configuration, but I don't know what it's failing. My nginx configuration: http://pastebin.com/CFQ5hwNX That nginx configuration runs perfect on a Prestashop with only 1 domain (without the /lang/ on the url). Nginx version: 1.4.0 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,239630#msg-239630 From mdounin at mdounin.ru Tue May 28 10:55:10 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 May 2013 14:55:10 +0400 Subject: safe to call ngx_http_subrequest inside location handler? 
In-Reply-To: References: Message-ID: <20130528105510.GQ72282@mdounin.ru> Hello! On Tue, May 28, 2013 at 04:03:07AM -0400, jesse wrote: > Hello, > > So according to evan miller's tutorial and nginx's own source code, it seems > only filters can send subrequests. But I have tried to call > ngx_http_subrequest inside my location handler, and everything seems working > fine. I am using version 1.2.8. > > Just want to ask and see if it's safe to do so. This mostly works since 0.7.25. There are some subtleties with request body handling though. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue May 28 13:13:37 2013 From: nginx-forum at nginx.us (tonimarmol) Date: Tue, 28 May 2013 09:13:37 -0400 Subject: 404 on Prestashop 1.5 under nginx In-Reply-To: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> References: <11dd6835df4553fc33fe979e715af0b6.NginxMailingListEnglish@forum.nginx.org> Message-ID: The problem is solved. The issue was on Prestashop. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239630,239633#msg-239633 From nginx-forum at nginx.us Tue May 28 17:19:32 2013 From: nginx-forum at nginx.us (jesse) Date: Tue, 28 May 2013 13:19:32 -0400 Subject: safe to call ngx_http_subrequest inside location handler? In-Reply-To: <20130528105510.GQ72282@mdounin.ru> References: <20130528105510.GQ72282@mdounin.ru> Message-ID: I see. Thanks, Maxim! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239619,239635#msg-239635 From reallfqq-nginx at yahoo.fr Tue May 28 17:32:48 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 28 May 2013 13:32:48 -0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: <20130527101947.GA72282@mdounin.ru> References: <20130527101947.GA72282@mdounin.ru> Message-ID: Hello Maxim, I spent a lot of time trying to figure out what is happening. It seems that after some service restart, the problem sometimes disappear before coming back again on the following try. I finally managed to capture the debug log you'll find as attachment. I'll need your expertise on it, but it seems that the tcpdump show stuff which do not appear in the nginx output. T ?he archive attached is as much self-contained as possible, including : - Server information (uname -a) - Nginx information (nginx -V)?: self-compiled from sources since I needed to activate --with-debug - tcpdump between php and nginx (+ 'control' file containing the standard output of the tcpdump command, including interface and packet numbers information only) - nginx error_log (set on 'debug) - browser output (copied-pasted from source, you'll see there is no end tag, thus proving the output is brutally cut out) If I might be of any help providing some other information, please let me know --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx_abort.tar.bz2 Type: application/x-bzip2 Size: 8699 bytes Desc: not available URL: From nginx-forum at nginx.us Tue May 28 18:35:33 2013 From: nginx-forum at nginx.us (ZyntraX) Date: Tue, 28 May 2013 14:35:33 -0400 Subject: Error message: invalid number of arguments in "proxy_pass" directive ... Message-ID: Hello guys I'm using nginx as a load balancer between 2 apache webservers for a school assignment, but i can't get it working. My nginx server (running on ubuntu server) won't start... 
He keeps giving met this error: [emerg] invalid number of arguments in "proxy_pass" directive in /etc/nginx/sites-enabled/default.save:35 I googled this error already but i couldn't find the solution... This is my configuration: http://puu.sh/337yk/b597e5fe18.png (it's a screenshot, i know there is a missing "}" , my screen resolution isn't that big). Can you guys please help me? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239638,239638#msg-239638 From francis at daoine.org Tue May 28 21:10:05 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 28 May 2013 22:10:05 +0100 Subject: Error message: invalid number of arguments in "proxy_pass" directive ... In-Reply-To: References: Message-ID: <20130528211005.GJ27406@craic.sysops.org> On Tue, May 28, 2013 at 02:35:33PM -0400, ZyntraX wrote: Hi there, > He keeps giving met this error: > [emerg] invalid number of arguments in "proxy_pass" directive in > /etc/nginx/sites-enabled/default.save:35 What is on line 35 of the file /etc/nginx/sites-enabled/default.save? How does that match what proxy_pass expects? http://nginx.org/r/proxy_pass > Can you guys please help me? Perhaps you just want to "rm /etc/nginx/sites-enabled/default.save", and then make sure that your editor doesn't create backup files that match the "include" pattern in your nginx.conf? f -- Francis Daly francis at daoine.org From ctrlbrk at yahoo.com Wed May 29 05:03:41 2013 From: ctrlbrk at yahoo.com (Mike) Date: Wed, 29 May 2013 00:03:41 -0500 Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter Message-ID: <132101ce5c29$e402db10$ac089130$@yahoo.com> Hi, Running 1.4.1 and using SPDY and SSL extensively (exclusively, actually). Seeing a ton (thousands) of errors in the error log, similar to: 2013/05/29 00:22:53 [alert] 25981#0: *8781732 FIXME: chain too big in spdy filter: 25516336 while sending to client, client: x.x.x.x, server: x.y.z, request: "GET /attachments/113914 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "x.y.z", referrer: "a.b.c" Names changed to protect the innocent. What is happening, and how to solve? Users are also reporting that larger file downloads are failing with "Network error" or timeout errors, something that never happened prior to using SPDY or SSL. Since I am running an extended validation SSL certificate, I opted to make *all* urls on my site SSL, so yes that means attachments are coming thru SSL as well. Some help? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hagaia at qwilt.com Wed May 29 11:49:46 2013 From: hagaia at qwilt.com (Hagai Avrahami) Date: Wed, 29 May 2013 14:49:46 +0300 Subject: Content-Length header with HEAD response Message-ID: Hi I am trying to configure Nginx to deny HTTP HEAD requests 1. By adding the following to configuration file if ($request_method !~ ^(GET)$) { return 405; } 2. Explicitly in the module if (!(r->method & (NGX_HTTP_GET))) { return NGX_HTTP_NOT_ALLOWED; } Nginx returns 405 status code but the response content length is not 0 it's counting the error page text but when coming to send the response it ignores the body because it is HEAD request HTTP/1.1 405 Not Allowed Server: nginx Date: Wed, 29 May 2013 11:35:02 GMT Content-Type: text/html Content-Length: 161 Connection: keep-alive ***No-Body** Please Advise Thanks Hagai Avrahami -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed May 29 12:10:52 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 May 2013 16:10:52 +0400 Subject: Content-Length header with HEAD response In-Reply-To: References: Message-ID: <20130529121051.GX72282@mdounin.ru> Hello! On Wed, May 29, 2013 at 02:49:46PM +0300, Hagai Avrahami wrote: > Hi > > I am trying to configure Nginx to deny HTTP HEAD requests > > 1. By adding the following to configuration file > > if ($request_method !~ ^(GET)$) { > return 405; > } > > 2. Explicitly in the module > > if (!(r->method & (NGX_HTTP_GET))) { > return NGX_HTTP_NOT_ALLOWED; > } > > Nginx returns 405 status code but the response content length is not 0 > it's counting the error page text but when coming to send the response it > ignores the body because it is HEAD request > > HTTP/1.1 405 Not Allowed > Server: nginx > Date: Wed, 29 May 2013 11:35:02 GMT > Content-Type: text/html > Content-Length: 161 > Connection: keep-alive > > ***No-Body** And the question is? The behaviour you observe is correct as per HTTP protocol. -- Maxim Dounin http://nginx.org/en/donation.html From hagaia at qwilt.com Wed May 29 12:52:38 2013 From: hagaia at qwilt.com (Hagai Avrahami) Date: Wed, 29 May 2013 15:52:38 +0300 Subject: Content-Length header with HEAD response In-Reply-To: <20130529121051.GX72282@mdounin.ru> References: <20130529121051.GX72282@mdounin.ru> Message-ID: Hi I thought HEAD should behave as GET only in case of success after reading the RFC I understand it's should be the same on any response Thanks On Wed, May 29, 2013 at 3:10 PM, Maxim Dounin wrote: > Hello! > > On Wed, May 29, 2013 at 02:49:46PM +0300, Hagai Avrahami wrote: > > > Hi > > > > I am trying to configure Nginx to deny HTTP HEAD requests > > > > 1. By adding the following to configuration file > > > > if ($request_method !~ ^(GET)$) { > > return 405; > > } > > > > 2. Explicitly in the module > > > > if (!(r->method & (NGX_HTTP_GET))) { > > return NGX_HTTP_NOT_ALLOWED; > > } > > > > Nginx returns 405 status code but the response content length is not 0 > > it's counting the error page text but when coming to send the response it > > ignores the body because it is HEAD request > > > > HTTP/1.1 405 Not Allowed > > Server: nginx > > Date: Wed, 29 May 2013 11:35:02 GMT > > Content-Type: text/html > > Content-Length: 161 > > Connection: keep-alive > > > > ***No-Body** > > And the question is? The behaviour you observe is correct as per > HTTP protocol. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Hagai Avrahami* Qwilt | Work: +972-72-2221644| Mobile: +972-54-4895656 | hagaia at qwilt.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed May 29 13:26:00 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 29 May 2013 17:26:00 +0400 Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter In-Reply-To: <132101ce5c29$e402db10$ac089130$@yahoo.com> References: <132101ce5c29$e402db10$ac089130$@yahoo.com> Message-ID: <201305291726.00628.vbart@nginx.com> On Wednesday 29 May 2013 09:03:41 Mike wrote: [...] > > 2013/05/29 00:22:53 [alert] 25981#0: *8781732 FIXME: chain too big in spdy > filter: 25516336 while sending to client, client: x.x.x.x, server: x.y.z, > request: "GET /attachments/113914 HTTP/1.1", upstream: > "fastcgi://127.0.0.1:9001", host: "x.y.z", referrer: "a.b.c" > [...] 
> > Some help? You have configured too big buffers, that currently cannot be handled in spdy module. So, you should either decrease the amount of buffers involved in request processing, or disable spdy. It's hard to tell what buffers you have to tune without knowing of your configuration, but from the error message I assume that "fastcgi_buffers". With the default configuration it works well. wbr, Valentin V. Bartenev -- http://nginx.org/en/donation.html From nginx-list at puzzled.xs4all.nl Wed May 29 14:35:03 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Wed, 29 May 2013 16:35:03 +0200 Subject: Howto force text type of dir/subdir/file? Message-ID: <51A61217.7090108@puzzled.xs4all.nl> Hi, On a CentOS 6.4 box with nginx 1.4.1 I would like to expose /usr/share/doc/* as /doc so I can easily browse the docs from the installed packages. Config I have right now (thanks Nodex on irc for pointing me to types): # docs location /doc { alias /usr/share/doc/; autoindex on; types { } default_type text/plain; } This works fine when I browse to for example: https:///doc/procmail-3.22/examples/1procmailrc But it does not work for: https:///doc/postfix-2.6.6/README_FILES/AAAREADME Not work meaning Firefox offers the AAAREADME file for download. # file AAAREADME AAAREADME: ASCII text, with overstriking Anyone have a hint what I am doing wrong? Thanks! Patrick From artemrts at ukr.net Wed May 29 16:11:06 2013 From: artemrts at ukr.net (wishmaster) Date: Wed, 29 May 2013 19:11:06 +0300 Subject: Howto force text type of dir/subdir/file? In-Reply-To: <51A61217.7090108@puzzled.xs4all.nl> References: <51A61217.7090108@puzzled.xs4all.nl> Message-ID: <71244.1369843866.2249410074073038848@ffe15.ukr.net> --- Original message --- From: "Patrick Lists" Date: 29 May 2013, 17:35:18 > Hi, > > On a CentOS 6.4 box with nginx 1.4.1 I would like to expose > /usr/share/doc/* as /doc so I can easily browse the docs from > the installed packages. > > Config I have right now (thanks Nodex on irc for pointing me to types): > > # docs > location /doc { > alias /usr/share/doc/; > autoindex on; > types { } > default_type text/plain; > } What is in log? Also, from nginx docs: When location matches the last part of the directive?s value: location /images/ { alias /data/w3/images/; } it is better to use the root directive instead: location /images/ { root /data/w3; } From nginx-list at puzzled.xs4all.nl Wed May 29 16:29:15 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Wed, 29 May 2013 18:29:15 +0200 Subject: Howto force text type of dir/subdir/file? In-Reply-To: <71244.1369843866.2249410074073038848@ffe15.ukr.net> References: <51A61217.7090108@puzzled.xs4all.nl> <71244.1369843866.2249410074073038848@ffe15.ukr.net> Message-ID: <51A62CDB.3080309@puzzled.xs4all.nl> On 05/29/2013 06:11 PM, wishmaster wrote: [snip] > What is in log? In the access.log: Working ok when browsing a procmail file: - - [29/May/2013:18:15:15 +0200] "GET /doc/procmail-3.22/examples/ HTTP/1.1" 200 1597 "https:///doc/procmail-3.22/" "Mozilla/5.0 (X11; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0" Not working when browsing a Postfix file: - - [29/May/2013:18:17:00 +0200] "GET /doc/postfix-2.6.6/README_FILES/INSTALL HTTP/1.1" 200 33503 "https:///doc/postfix-2.6.6/README_FILES/" "Mozilla/5.0 (X11; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0" There is nothing in the error log. 
> Also, from nginx docs: > > When location matches the last part of the directive?s value: > > location /images/ { > alias /data/w3/images/; > } > > it is better to use the root directive instead: > > location /images/ { > root /data/w3; > } Thanks for the tip. Changed my config to: # docs location /doc/ { #alias /usr/share/doc/; root /usr/share; autoindex on; types { } default_type text/plain; } I still have the same issue in the postfix subdir though. Regards, Patrick From mdounin at mdounin.ru Wed May 29 16:46:27 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 May 2013 20:46:27 +0400 Subject: fastcgi_read_timeout with PHP backend In-Reply-To: References: <20130527101947.GA72282@mdounin.ru> Message-ID: <20130529164627.GF72282@mdounin.ru> Hello! On Tue, May 28, 2013 at 01:32:48PM -0400, B.R. wrote: > Hello Maxim, > > I spent a lot of time trying to figure out what is happening. > It seems that after some service restart, the problem sometimes disappear > before coming back again on the following try. > > I finally managed to capture the debug log you'll find as attachment. I'll > need your expertise on it, but it seems that the tcpdump show stuff which > do not appear in the nginx output. > T > ?he archive attached is as much self-contained as possible, including : > - Server information (uname -a) > - Nginx information (nginx -V)?: self-compiled from sources since I needed > to activate --with-debug > - tcpdump between php and nginx (+ 'control' file containing the standard > output of the tcpdump command, including interface and packet numbers > information only) > - nginx error_log (set on 'debug) > - browser output (copied-pasted from source, you'll see there is no end > tag, thus proving the output is brutally cut out) As per debug log, nothing is seen from php after 18:48:45, and this results in the timeout at 18:50:45. Unfortunately, tcpdump dump provided looks corrupted - it shows only first 4 packets both here and on cloudshark (http://cloudshark.org/captures/bf44d289b1f6). Overral I would suggest that this is just how you code behaves. You may want to add some debugging to your application to debug this further if you still think there is something wrong. -- Maxim Dounin http://nginx.org/en/donation.html From francis at daoine.org Wed May 29 22:37:18 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 29 May 2013 23:37:18 +0100 Subject: Howto force text type of dir/subdir/file? In-Reply-To: <51A61217.7090108@puzzled.xs4all.nl> References: <51A61217.7090108@puzzled.xs4all.nl> Message-ID: <20130529223718.GK27406@craic.sysops.org> On Wed, May 29, 2013 at 04:35:03PM +0200, Patrick Lists wrote: Hi there, > This works fine when I browse to for example: > https:///doc/procmail-3.22/examples/1procmailrc > > But it does not work for: > > https:///doc/postfix-2.6.6/README_FILES/AAAREADME > > Not work meaning Firefox offers the AAAREADME file for download. What does curl -I https:///doc/postfix-2.6.6/README_FILES/AAAREADME show as the Content-Type? If it is "text/plain", your nginx is doing what you told it to do. Strictly, I suspect that the file is not text/plain, since the control characters are not normally printable. So Firefox is probably right to indicate that it is unable to present the file contents correctly, since they do not match the declared type. > # file AAAREADME > AAAREADME: ASCII text, with overstriking I've just tested on a different server, using a file which "file" identifies in that same way. The Content-Type: header is text/plain. 
> Anyone have a hint what I am doing wrong? My guess is: claiming that a non-text/plain file is text/plain. nginx doesn't care; it just does what it is told. Firefox does care. The simplest thing is probably for you to edit the file to make it be text/plain. And do the same for any other similar file. "overstriking" is usually having the three character sequence X ^H X instead of the single X. So remove the ^H and one X each time, and you should have a real text/plain file. And perhaps invite whoever put the file there to only use plain text in the next version of the package. f -- Francis Daly francis at daoine.org From nginx-list at puzzled.xs4all.nl Thu May 30 00:27:00 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Thu, 30 May 2013 02:27:00 +0200 Subject: Howto force text type of dir/subdir/file? In-Reply-To: <20130529223718.GK27406@craic.sysops.org> References: <51A61217.7090108@puzzled.xs4all.nl> <20130529223718.GK27406@craic.sysops.org> Message-ID: <51A69CD4.5060704@puzzled.xs4all.nl> Hi Francis, On 05/30/2013 12:37 AM, Francis Daly wrote: > On Wed, May 29, 2013 at 04:35:03PM +0200, Patrick Lists wrote: > > Hi there, [snip] > > What does > > curl -I https:///doc/postfix-2.6.6/README_FILES/AAAREADME > > show as the Content-Type? > > If it is "text/plain", your nginx is doing what you told it to do. Yup seems so: HTTP/1.1 200 OK Server: nginx Date: Thu, 30 May 2013 00:23:22 GMT Content-Type: text/plain Content-Length: 2768 Last-Modified: Sat, 03 Dec 2011 05:00:55 GMT Connection: keep-alive Keep-Alive: timeout=10 ETag: "4ed9ad07-ad0" X-Frame-Options: SAMEORIGIN Accept-Ranges: bytes > Strictly, I suspect that the file is not text/plain, since the control > characters are not normally printable. > > So Firefox is probably right to indicate that it is unable to present > the file contents correctly, since they do not match the declared type. > >> # file AAAREADME >> AAAREADME: ASCII text, with overstriking > > I've just tested on a different server, using a file which "file" > identifies in that same way. The Content-Type: header is text/plain. > >> Anyone have a hint what I am doing wrong? > > My guess is: claiming that a non-text/plain file is text/plain. nginx > doesn't care; it just does what it is told. Firefox does care. > > The simplest thing is probably for you to edit the file to make it be > text/plain. And do the same for any other similar file. > > "overstriking" is usually having the three character sequence X ^H X > instead of the single X. So remove the ^H and one X each time, and you > should have a real text/plain file. > > And perhaps invite whoever put the file there to only use plain text in > the next version of the package. Understand now, thanks. To me it was just a text file like the other real ones. It's the postfix documentation that has the overstrike thingy. I doubt Wietse is going to change that anytime soon. No biggie though. The postfix website has all the docs too. Cheers, Patrick From nginx-forum at nginx.us Thu May 30 01:32:51 2013 From: nginx-forum at nginx.us (ctrlbrk) Date: Wed, 29 May 2013 21:32:51 -0400 Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter In-Reply-To: <201305291726.00628.vbart@nginx.com> References: <201305291726.00628.vbart@nginx.com> Message-ID: <48983276b80cbb71169571494425658e.NginxMailingListEnglish@forum.nginx.org> What is the maximum amount of the buffer? I could not locate any documentation on any of this with regards to how it would cause an error on SPDY. And doesn't "FIXME" indicate this is a bug? 
Why would it be in the code otherwise?

I already removed SPDY and the errors went away, as did the problems users
were experiencing downloading larger files. But I would like to add it back.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239647,239671#msg-239671

From nginx-forum at nginx.us Thu May 30 10:02:00 2013
From: nginx-forum at nginx.us (angelochen960)
Date: Thu, 30 May 2013 06:02:00 -0400
Subject: newbie needs help
Message-ID:

Hi,

I am new to nginx. With the following config, I want to achieve:

http://sample.com/ renders index.html
http://sample.com/test, or anything after the '/' path, renders index.html

server {
    listen 80;
    server_name sample.com;
    root /var/www/sample/public_html;

    location / {
        index index.html index.htm;
    }

    location ~ ^/.* {
        index index.html;
    }
}

But when I do this, I get a 404 error. Anything missing here? Thanks, Angelo

curl -I http://sample.com/test

HTTP/1.1 404 Not Found

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239683,239683#msg-239683

From vbart at nginx.com Thu May 30 10:25:25 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 30 May 2013 14:25:25 +0400
Subject: newbie needs help
In-Reply-To:
References:
Message-ID: <201305301425.25570.vbart@nginx.com>

On Thursday 30 May 2013 14:02:00 angelochen960 wrote:
> Hi,
> I am new to nginx. With the following config, I want to achieve:
>
> http://sample.com/ renders index.html
> http://sample.com/test, or anything after the '/' path, renders index.html
>

server {
    listen 80;
    server_name sample.com;
    root /var/www/sample/public_html;

    location / {
        try_files /index.html =404;
    }
}

> server {
>     listen 80;
>     server_name sample.com;
>     root /var/www/sample/public_html;
>
>     location / {
>         index index.html index.htm;
>     }
>
>     location ~ ^/.* {
>         index index.html;
>     }
> }
>
> But when I do this, I get a 404 error. Anything missing here? Thanks, Angelo
>
> curl -I http://sample.com/test
>
> HTTP/1.1 404 Not Found
>

Since your config looks incorrect, it seems you are missing an understanding
of how nginx works. You should try reading the documentation:

- http://nginx.org/en/docs/http/request_processing.html
- http://nginx.org/r/location
- http://nginx.org/r/index
- http://nginx.org/r/try_files

wbr, Valentin V. Bartenev

--
http://nginx.org/en/donation.html

From nginx-forum at nginx.us Thu May 30 10:41:50 2013
From: nginx-forum at nginx.us (kapouer)
Date: Thu, 30 May 2013 06:41:50 -0400
Subject: if-none-match with proxy_cache : properly set headers
Message-ID: <64ea355f819de5e1aac4752687f9a5e4.NginxMailingListEnglish@forum.nginx.org>

Hi,

I struggled a little to get nginx to cache 304 responses from the backend
using proxy_cache. What happens when configuring proxy_cache is that 304
responses never occur, because nginx strips the If-None-Match request
header. That is a workaround to prevent the client from getting an empty
response even if it did not send If-None-Match in the request. A better
workaround can be:

proxy_cache_key $http_if_none_match$scheme$proxy_host$request_uri;

so that the cache sends 304 if the header is properly set, and 200 if it
isn't. Of course one has to undo the first workaround:

proxy_set_header If-None-Match $http_if_none_match;

and cache both responses:

proxy_cache_valid 200 304 1h;

Comments welcome.
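For reference, the directives above pulled together into a single (untested)
sketch; the cache path, zone name, and upstream are placeholders, and as the
reply below points out, the default behaviour of caching 200 responses and
letting nginx answer conditional requests itself is usually preferable:

    # http{} context
    proxy_cache_path /var/cache/nginx keys_zone=itemcache:10m;

    # inside the relevant server{}
    location / {
        proxy_cache       itemcache;
        proxy_cache_key   $http_if_none_match$scheme$proxy_host$request_uri;
        proxy_set_header  If-None-Match $http_if_none_match;
        proxy_cache_valid 200 304 1h;
        proxy_pass        http://backend;
    }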
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239689,239689#msg-239689 From mdounin at mdounin.ru Thu May 30 11:21:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 May 2013 15:21:44 +0400 Subject: if-none-match with proxy_cache : properly set headers In-Reply-To: <64ea355f819de5e1aac4752687f9a5e4.NginxMailingListEnglish@forum.nginx.org> References: <64ea355f819de5e1aac4752687f9a5e4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130530112144.GR72282@mdounin.ru> Hello! On Thu, May 30, 2013 at 06:41:50AM -0400, kapouer wrote: > Hi, > i struggled a little to get nginx to cache 304 responses from backend using > proxy_cache. > What happens when configuring proxy_cache is that 304 responses are not > happening because > nginx strips If-None-Match request headers. It is a workaround to prevent > the client from getting > an empty response event if he did not send If-None-Match in the header. > A better workaround can be : > > proxy_cache_key $http_if_none_match$scheme$proxy_host$request_uri; > > So that the cache sends 304 if the header is properly set, and 200 if it > isn't. > Of course one has to kill the first workaround : > > proxy_set_header If-None-Match $http_if_none_match; > > and cache both responses : > > proxy_cache_valid 200 304 1h; > > Comments welcome. Normally you shouldn't cache 304 responses from a backend, but rather cache 200 responses from a backend and let nginx to return 304 by its own. This is how it works by default. Do you have problems with the default aproach? -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu May 30 12:25:18 2013 From: nginx-forum at nginx.us (angelochen960) Date: Thu, 30 May 2013 08:25:18 -0400 Subject: newbie needs help In-Reply-To: <201305301425.25570.vbart@nginx.com> References: <201305301425.25570.vbart@nginx.com> Message-ID: <61f944be1128210c0ddf0c87b3bc5b82.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply, it works, and also I read again those references. a related issue, say: if somebody enter this url in the browser: http://sample.com/not_exist_url and I'd like to redirect it to http://sample.com/ with the try_files approach, index.html got displayed, that's right, but the url in browser still remain as http://sample.com/not_exist_url, i'm looking for a 302 i believe, any suggestions? thanks. Angelo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239683,239695#msg-239695 From vbart at nginx.com Thu May 30 13:47:10 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 30 May 2013 17:47:10 +0400 Subject: newbie needs help In-Reply-To: <61f944be1128210c0ddf0c87b3bc5b82.NginxMailingListEnglish@forum.nginx.org> References: <201305301425.25570.vbart@nginx.com> <61f944be1128210c0ddf0c87b3bc5b82.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201305301747.10284.vbart@nginx.com> On Thursday 30 May 2013 16:25:18 angelochen960 wrote: > Thanks for the reply, it works, and also I read again those references. a > related issue, say: > > if somebody enter this url in the browser: http://sample.com/not_exist_url > > and I'd like to redirect it to > > http://sample.com/ > > with the try_files approach, index.html got displayed, that's right, but > the url in browser still remain as http://sample.com/not_exist_url, i'm > looking for a 302 i believe, any suggestions? thanks. > Yes, you're looking for an external redirect, that is completely different thing. 
Then this config will serve your needs:

server {
    listen 80;
    server_name sample.com;
    root /var/www/sample/public_html;

    location = / {
        try_files /index.html =404;
    }

    location / {
        return 302 /;
    }
}

Reference:
- http://nginx.org/r/return
- http://nginx.org/r/try_files
- http://nginx.org/r/location

wbr, Valentin V. Bartenev

--
http://nginx.org/en/donation.html

From nginx-forum at nginx.us Thu May 30 14:19:44 2013
From: nginx-forum at nginx.us (angelochen960)
Date: Thu, 30 May 2013 10:19:44 -0400
Subject: newbie needs help
In-Reply-To: <201305301747.10284.vbart@nginx.com>
References: <201305301747.10284.vbart@nginx.com>
Message-ID: <2067508be0f6e24217b085239e85034e.NginxMailingListEnglish@forum.nginx.org>

That works, thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239683,239699#msg-239699

From vbart at nginx.com Thu May 30 15:16:03 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 30 May 2013 19:16:03 +0400
Subject: 1.4.1 SPDY error FIXME: chain too big in spdy filter
In-Reply-To: <48983276b80cbb71169571494425658e.NginxMailingListEnglish@forum.nginx.org>
References: <201305291726.00628.vbart@nginx.com> <48983276b80cbb71169571494425658e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <201305301916.03455.vbart@nginx.com>

On Thursday 30 May 2013 05:32:51 ctrlbrk wrote:
> What is the maximum amount of the buffer?

Currently the total amount of operational buffers cannot exceed 2^24 - 1
bytes (in other words, it must be less than 16 MiB).

> I could not locate any documentation on any of this with regards to how it
> would cause an error on SPDY.

There is more than one way to trigger this error. The documentation just
states that the module is experimental. When you are using an experimental
module that implements an experimental protocol draft, you have to be ready
for any kind of errors, and not only ones caused by the module itself, but
in browsers too. Though you are probably right, and this drawback should be
documented explicitly.

>
> And doesn't "FIXME" indicate this is a bug? Why would it be in the code
> otherwise?

That part of the code lacks a chain/buffer splitting mechanism as well as
the DATA frame adjusting feature. It will be implemented, but we have no ETA
for this yet.

wbr, Valentin V. Bartenev

--
http://nginx.org/en/donation.html

From nginx-forum at nginx.us Thu May 30 16:14:09 2013
From: nginx-forum at nginx.us (mafious)
Date: Thu, 30 May 2013 12:14:09 -0400
Subject: [LB]Bad root document set via proxy_pass
Message-ID:

Hello everybody,

I am using several instances of nginx (1.4.1): one as a load balancer and
the others to host Rails applications (using the Passenger module).
Depending on the URL, I forward to the proper backend:

location / {
    proxy_pass http://_cluster;
}

But via the proxy, the web pages of my application are not rendered
correctly. The paths used to load image files are incorrect and point to
the default root document location. Direct access to my application is
fine, so I guess I am missing something in the load balancer configuration,
but what? Before, when I didn't need to specify the URL in the location
directive, it worked fine via the load balancer as well.

Cheers,
Mat.
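As a point of reference, a minimal sketch of the load-balancer side with the
upstream spelled out and the original Host header passed through; the
upstream name and addresses are placeholders, and whether this fixes the
asset paths depends on how the Rails application builds its URLs (it often
relies on the Host and X-Forwarded-* headers it receives):

    upstream rails_cluster {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
    }

    server {
        listen 80;

        location / {
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rails_cluster;
        }
    }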
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239702,239702#msg-239702 From nginx-forum at nginx.us Thu May 30 20:40:25 2013 From: nginx-forum at nginx.us (lilyevsky) Date: Thu, 30 May 2013 16:40:25 -0400 Subject: Ldap authentication passing to tomcat Message-ID: <3f4c7c1d33916d9e64fe9198d98bbcf0.NginxMailingListEnglish@forum.nginx.org> I am using nginx 1.4.1 as reverse proxy for tomcat 7.0.33. Using LDAP for user authentication. Everything works fine except one critical thing: the authenticated user ID does not get to tomcat. I see it in the Tomcat's access log: it shows "-" where the ID is supposed to be. I tried to set various header elements in nginx.conf, see below a fragment of it (I experimented with them, turning them on and off). Using tcpdump, I confirmed that all the elements that I set indeed go to the HTTP request. The same thing with Apache HTTPD works properly, but there we use AJP. What am I missing? Any other header field I need to set? Also, can anybody tell me how Tomcat retrieves the authenticated user ID from the request header? What is that field exact name? auth_ldap_url ............................ auth_ldap_binddn eciadmin at mooncapital.corp; auth_ldap_binddn_passwd .............; auth_ldap "Enter your Windows/Network Login To Access MoonWeb"; auth_ldap_require valid_user; server { listen mcny14.mooncapital.corp:8880; server_name mcny14.mooncapital.corp; location /moon/ { #proxy_pass_header Set-Cookie; #proxy_ignore_headers Expires Cache-Control; proxy_redirect off; proxy_buffering off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-User $remote_user; proxy_set_header Remote-User $remote_user; proxy_set_header User $remote_user; proxy_set_header REMOTE_USER $remote_user; proxy_set_header X-URL-SCHEME https; #proxy_set_header Authorization ""; root mdocs; proxy_pass http://mcny14:8801; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239703,239703#msg-239703 From nginx-forum at nginx.us Fri May 31 02:16:28 2013 From: nginx-forum at nginx.us (angelochen960) Date: Thu, 30 May 2013 22:16:28 -0400 Subject: using a proxied server as default_server Message-ID: <47b994a03e2df2423e5f77726c3026f3.NginxMailingListEnglish@forum.nginx.org> Hi, I have a tomcat app running behind nginx, it works, now I make it the default_server, this works if it access with server name like sample.com, but if access with an IP, http://192.168.1.1, it does not work, any idea ? Thanks, server { listen 192.168.1.1:80 default_server; server_name sample.com root /var/www/public_html; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://127.0.0.1:8080; proxy_redirect off; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239707,239707#msg-239707 From aldernetwork at gmail.com Fri May 31 07:49:51 2013 From: aldernetwork at gmail.com (Alder Network) Date: Fri, 31 May 2013 00:49:51 -0700 Subject: secure websocket (wss) proxy support in nginx Message-ID: Say if I want to proxy both websocket (ws) and secure websocket traffic, would latest version of Nginx support that? What would be the conf? Thanks, - Alder -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Fri May 31 08:09:05 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 31 May 2013 09:09:05 +0100 Subject: using a proxied server as default_server In-Reply-To: <47b994a03e2df2423e5f77726c3026f3.NginxMailingListEnglish@forum.nginx.org> References: <47b994a03e2df2423e5f77726c3026f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130531080905.GM27406@craic.sysops.org> On Thu, May 30, 2013 at 10:16:28PM -0400, angelochen960 wrote: Hi there, > I have a tomcat app running behind nginx, it works, now I make it the > default_server, this works if it access with server name like sample.com, > but if access with an IP, http://192.168.1.1, it does not work, any idea ? What does "does not work" mean? Be specific. What do you do? (curl -i http://192.168.1.1) What do you see? (The "wrong" content? An error message? Something else?) What do you expect to see? (The "right" content?) You will probably find that it is easier for people to offer help and suggestions if you make it easy for them to do that. In this particular case, what is expected to happen? I imagine it is: browser makes request to nginx; nginx makes request to tomcat with some specific headers set; tomcat responds to nginx; nginx responds to browser Can you see what actually does happen? If you can find the first point in "what does happen" that doesn't match "what should happen", then you'll have a good hint as to where the problem is. Good luck, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri May 31 09:37:04 2013 From: nginx-forum at nginx.us (bigplum) Date: Fri, 31 May 2013 05:37:04 -0400 Subject: [MODULE] limit traffic rate for nginx In-Reply-To: <3d8d677fcbf89225ea0e4fd8ca2057ff.NginxMailingListEnglish@forum.nginx.org> References: <3d8d677fcbf89225ea0e4fd8ca2057ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <68813f6703a13123b44aeafcd0ae632d.NginxMailingListEnglish@forum.nginx.org> Hi guys, Some days ago, one of this module users complaint about the unfair speed for multi-connections. So I have some time to realize a new method to limit download rate. In the new code, if last second speed is lager than max rate, the timer is added in this module, and the body filter will return. So nothing will be sent to client, and do not use the r->limit_rate in write_filter to limit sendfile size. And clcf->sendfile_max_chunk also is modified by the maximum rate. I am not sure, is there any side-effect about this method? https://github.com/bigplum/Nginx-limit-traffic-rate-module/blob/v1.0/ngx_http_limit_traffic_rate_filter_module.c Posted at Nginx Forum: http://forum.nginx.org/read.php?2,159398,239713#msg-239713 From nginx-forum at nginx.us Fri May 31 11:30:22 2013 From: nginx-forum at nginx.us (angelochen960) Date: Fri, 31 May 2013 07:30:22 -0400 Subject: using a proxied server as default_server In-Reply-To: <20130531080905.GM27406@craic.sysops.org> References: <20130531080905.GM27406@craic.sysops.org> Message-ID: <9af8f3ff8f84096e0defde96bc2cc83f.NginxMailingListEnglish@forum.nginx.org> Hi, Sorry for not making it more specific, the issue is, the app in the tomcat is a virtual host as well, so it checks 'host' field for 'sample.com', a default_server with specific IP when accessed by a IP address like http://192.168.1.1/, it will not have a 'host', thus when passed to the app in the tomcat, it will not hit the right virtual host there, initially I was thinking, probably nginx can insert the 'host' before proxy to tomcat to make it work. 
However, I did find a simple solution to this problem. In the default.conf,
I added:

location / { return 302 http://sample.com; }

and removed the default_server from my virtual host for sample.com. This
meets my requirement.

thanks,
Angelo

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239707,239716#msg-239716

From francis at daoine.org Fri May 31 13:01:21 2013
From: francis at daoine.org (Francis Daly)
Date: Fri, 31 May 2013 14:01:21 +0100
Subject: using a proxied server as default_server
In-Reply-To: <9af8f3ff8f84096e0defde96bc2cc83f.NginxMailingListEnglish@forum.nginx.org>
References: <20130531080905.GM27406@craic.sysops.org> <9af8f3ff8f84096e0defde96bc2cc83f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130531130121.GN27406@craic.sysops.org>

On Fri, May 31, 2013 at 07:30:22AM -0400, angelochen960 wrote:

Hi there,

> Sorry for not making it more specific, the issue is, the app in the tomcat
> is a virtual host as well, so it checks 'host' field for 'sample.com',

No worries. Now that you have identified the problem, the possibilities for
solutions are more obvious :-)

For example, as an alternative to what you have already working, you could
note that you have the following line in your config:

proxy_set_header Host $http_host;

which sends the Host: header to the backend with the value of the $http_host
variable, which is the content of the incoming Host: header (and may be
empty).

If your backend requires that this always be "sample.com", then you could
instead use

proxy_set_header Host sample.com;

or possibly something like $server_name that has the same value.

> default_server with specific IP when accessed by a IP address like
> http://192.168.1.1/, it will not have a 'host', thus when passed to the app
> in the tomcat, it will not hit the right virtual host there, initially I was
> thinking, probably nginx can insert the 'host' before proxy to tomcat to
> make it work.

Strictly, it might have no Host, or it might have a Host of 192.168.1.1.
Either way, it is not what your tomcat expects (unless it is *also* the
default server for connections to this address).

And getting nginx to insert the Host header you want is one way to approach
it.

Cheers,
f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us Fri May 31 14:06:52 2013
From: nginx-forum at nginx.us (angelochen960)
Date: Fri, 31 May 2013 10:06:52 -0400
Subject: using a proxied server as default_server
In-Reply-To: <20130531130121.GN27406@craic.sysops.org>
References: <20130531130121.GN27406@craic.sysops.org>
Message-ID: <61fa2e86b0c25809ce3f1af298917ed9.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

I think that's a good approach, will give that a try, thanks,

Angelo

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239707,239726#msg-239726

From nginx-forum at nginx.us Fri May 31 17:25:15 2013
From: nginx-forum at nginx.us (natostanco)
Date: Fri, 31 May 2013 13:25:15 -0400
Subject: if statement + ssl_certificate?
Message-ID:

Is it possible? Or has it been forbidden in recent versions? Because I tried
it and it is not allowed.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239732,239732#msg-239732

From r1ch+nginx at teamliquid.net Fri May 31 18:33:10 2013
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Fri, 31 May 2013 14:33:10 -0400
Subject: if statement + ssl_certificate?
In-Reply-To:
References:
Message-ID:

It is impossible, since the certificate has to be presented to the client
before the server knows anything about the request.
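The usual way to serve different certificates without an "if" is one
server{} block per name, each with its own ssl_certificate; provided nginx
and its OpenSSL are built with SNI support, the block is chosen from the
server name the client sends in the TLS handshake. A minimal sketch with
placeholder names and paths:

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
    }

    server {
        listen 443 ssl;
        server_name other.example.org;
        ssl_certificate     /etc/nginx/ssl/other.example.org.crt;
        ssl_certificate_key /etc/nginx/ssl/other.example.org.key;
    }

Clients that do not send SNI simply get the certificate of the default
server for that listen address.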
On Fri, May 31, 2013 at 1:25 PM, natostanco wrote: > Is it possible? or has it been forbidden in recent versions? Because I > tried > and it does not allow it. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239732,239732#msg-239732 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott_ribe at elevated-dev.com Fri May 31 18:40:17 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Fri, 31 May 2013 12:40:17 -0600 Subject: problem setting up certificate Message-ID: <1CEC37B5-7064-48A6-8F7C-226F215C0BC8@elevated-dev.com> I'm getting this error after installing the certificate & key: [emerg] 809#0: SSL_CTX_use_PrivateKey_file("/paging/site/config/server.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) I've tried this twice, (started over with a new csr, thinking I had screwed up and used the wrong key or something), and am still getting the error. I've double-checked to make sure that nginx is using the config file I expect, and that it specifies the correct locations for the cert & key. How should I proceed in trouble-shooting this? -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From scott_ribe at elevated-dev.com Fri May 31 19:04:44 2013 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Fri, 31 May 2013 13:04:44 -0600 Subject: problem setting up certificate In-Reply-To: <1CEC37B5-7064-48A6-8F7C-226F215C0BC8@elevated-dev.com> References: <1CEC37B5-7064-48A6-8F7C-226F215C0BC8@elevated-dev.com> Message-ID: Meh, never mind. Supposedly same source of certificates as prior server installed within this same institution (I didn't do the prior one), but this one is in root->mine order instead of mine->root order. Edited the cert file to switch them around, and now it works. On May 31, 2013, at 12:40 PM, Scott Ribe wrote: > I'm getting this error after installing the certificate & key: > > [emerg] 809#0: SSL_CTX_use_PrivateKey_file("/paging/site/config/server.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) > > I've tried this twice, (started over with a new csr, thinking I had screwed up and used the wrong key or something), and am still getting the error. I've double-checked to make sure that nginx is using the config file I expect, and that it specifies the correct locations for the cert & key. > > How should I proceed in trouble-shooting this? > > -- > Scott Ribe > scott_ribe at elevated-dev.com > http://www.elevated-dev.com/ > (303) 722-0567 voice > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From nginx-forum at nginx.us Fri May 31 21:50:00 2013 From: nginx-forum at nginx.us (Homebrewsky) Date: Fri, 31 May 2013 17:50:00 -0400 Subject: HttpUploadModule - after upload file won't move/rename Message-ID: <2fbadc8875c8d43112fc33b79d62646b.NginxMailingListEnglish@forum.nginx.org> I'm trying to setup a very simple HTTP POST file upload server. I'm trying to POST files via curl, without worrying about forms or responses, etc. I just want to drop the file in nginx, and have it land. 
So far, the file uploads into the upload_store location, but it doesn't move
out. I'm left with a valid file with a hashed filename, in a hashed
directory. I feel like I'm missing something simple and obvious, and have
got a bit cross-eyed in the process. Here's my config:

location /data {
    upload_pass /returnok;
    root /opt/datapush/test;
    upload_store /tmp/upload 1;
    upload_store_access user:rw group:rw all:rw;
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.content_type "$upload_content_type";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";

    upload_cleanup 400 404 499 500-505;

    access_log /mnt/log/nginx/datapush_access.log main;
    error_log /mnt/log/nginx/datapush_error.log debug;
}

location /returnok {
    return 200;
}

/opt/datapush/test/data exists, and is 777 (got a little frustrated at one
point). Also, I have debug logging turned on, and everything looks good, but
there is no mention of it trying to rename or move the file, and no error to
that effect either.

Come on, someone show me what I'm overlooking here.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239736,239736#msg-239736

From akunz at wishmedia.de Fri May 31 22:06:05 2013
From: akunz at wishmedia.de (Alexander Kunz)
Date: Sat, 01 Jun 2013 00:06:05 +0200
Subject: HttpUploadModule - after upload file won't move/rename
In-Reply-To: <2fbadc8875c8d43112fc33b79d62646b.NginxMailingListEnglish@forum.nginx.org>
References: <2fbadc8875c8d43112fc33b79d62646b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <51A91ECD.1020803@wishmedia.de>

Hello,

On 31.05.2013 23:50, Homebrewsky wrote:
> I'm trying to setup a very simple HTTP POST file upload server. I'm trying
> to POST files via curl, without worrying about forms or responses, etc. I
> just want to drop the file in nginx, and have it land.
>
> So far, the file uploads into the upload_store location, but it doesn't move
> out. I'm left with a valid file with a hashed filename, in a hashed
> directory. I feel like I'm missing something simple and obvious, and have
> got a bit cross-eyed in the process. Here's my config:
>
> location /data {
>     upload_pass /returnok;
>     root /opt/datapush/test;
>     upload_store /tmp/upload 1;
>     upload_store_access user:rw group:rw all:rw;
>     upload_set_form_field $upload_field_name.name "$upload_file_name";
>     upload_set_form_field $upload_field_name.content_type "$upload_content_type";
>     upload_set_form_field $upload_field_name.path "$upload_tmp_path";
>
>     upload_cleanup 400 404 499 500-505;
>
>     access_log /mnt/log/nginx/datapush_access.log main;
>     error_log /mnt/log/nginx/datapush_error.log debug;
> }

In location /returnok you can (and must) move the file to its final
destination. Currently nothing happens there, and the file just stays in the
temporary directory, because return code 200 is not listed in
upload_cleanup...

> location /returnok {
>     return 200;
> }
>

Kind regards,

Alexander
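A sketch of the direction that points in, keeping the directives already
used above: have upload_pass hand the request to a location backed by a
small script, and let that script move the temporary file using the path and
name fields the config already sets with upload_set_form_field. The backend
address is a placeholder, and the exact POST field names depend on the form
field name the client sends:

    location /data {
        upload_pass /move;
        upload_store /tmp/upload 1;
        upload_set_form_field $upload_field_name.name "$upload_file_name";
        upload_set_form_field $upload_field_name.path "$upload_tmp_path";
        upload_cleanup 400 404 499 500-505;
    }

    location /move {
        # hypothetical backend (PHP, Python, ...) that reads the
        # <field>.path and <field>.name POST parameters and renames the
        # temporary file into /opt/datapush/test/data
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }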