From daniel at malarhojden.nu Sun Sep 1 11:11:04 2013
From: daniel at malarhojden.nu (Daniel Lundqvist)
Date: Sun, 1 Sep 2013 19:11:04 +0800
Subject: SSL certificate chain
Message-ID: 

Hi,

I am trying to configure nginx 1.4.1 (using OpenSSL 1.0.1e) with a PEM encoded certificate file that contains the whole chain, 3 certificates including the Root CA. But I cannot get it to work. I have followed the documentation at http://nginx.org/en/docs/http/configuring_https_servers.html#chains and http://www.startssl.com/?app=42, but no matter what I do it seems I cannot get nginx to deliver more than one certificate. I have used both http://portecle.sourceforge.net and https://www.ssllabs.com/ssltest/ to verify. Other services (e.g. the dovecot IMAP server) on the same host, using the same version of OpenSSL and the same intermediate certificate and Root CA, work fine. How can I troubleshoot what is going wrong with nginx?

Thanks in advance.
--
daniel
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4145 bytes
Desc: not available
URL: 

From steve at greengecko.co.nz Sun Sep 1 11:25:42 2013
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Sun, 1 Sep 2013 23:25:42 +1200
Subject: SSL certificate chain
In-Reply-To: 
References: 
Message-ID: <8261F70D-EFEA-4BF9-8DFE-53D927B3E48B@greengecko.co.nz>

Make sure the server cert is first in the file, followed by the ca certs.

Steve

On 1/09/2013, at 11:11 PM, Daniel Lundqvist wrote:

> Hi,
>
> I am trying to configure nginx 1.4.1 (using OpenSSL 1.0.1e) with a PEM encoded certificate file that contains the whole chain, 3 certificates including the Root CA. But I cannot get it to work. I have followed the documentation at http://nginx.org/en/docs/http/configuring_https_servers.html#chains and http://www.startssl.com/?app=42, but no matter what I do it seems I cannot get nginx to deliver more than one certificate.
> I have used both http://portecle.sourceforge.net and https://www.ssllabs.com/ssltest/ to verify. Other services (e.g. the dovecot IMAP server) on the same host, using the same version of OpenSSL and the same intermediate certificate and Root CA, work fine. How can I troubleshoot what is going wrong with nginx?
>
> Thanks in advance.
> --
> daniel
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From cite+nginx at incertum.net Sun Sep 1 11:51:27 2013
From: cite+nginx at incertum.net (Stefan Foerster)
Date: Sun, 1 Sep 2013 13:51:27 +0200
Subject: "Idiomatic" Gallery3 configuration
Message-ID: <20130901115127.GA28186@mail.incertum.net>

Hello world,

I've looked around the net for quite some time to find a suitable configuration for nginx that allows me to run Gallery3 with php-fpm. Unfortunately, the search results weren't that helpful. So I carefully read through the official documentation for "location" and "try_files", and I think I managed to get something that could serve as a basis. Since I still lack experience with nginx, I'd really appreciate any help you could give me with cleaning up that configuration.

To recap, what needs to be achieved is (examples only):

1. /lib/images/logo.png -> pass through
2. /Controller?param -> /index.php?kohana_uri=Controller?param
3. /index.php/Controller?param -> /index.php?kohana_uri=Controller?param
4. /var/(albums|thumbs|resizes) -> /file_proxy/$1 (continue with #2)
5. deny access to /var/(logs|tmp|uploads) and /bin
6. deny access to .htaccess, config.inc.php and so on
7. set "Expires" headers on static content (to make YSlow happy :-)

The configuration I've come up with is:

# is that outer location block actually needed?
location / {
    location ~ /(index\.php/)?(.+)$ {
        try_files $uri /index.php?kohana_uri=$2&$args;

        # is it possible/desirable to consolidate access control to
        # special files within one regexp (and not three?)
        location ~ /\.(ht|tpl(\.php?)|sql|inc\.php|db)$ { deny all; }
        # see previous comment
        location ~ /var/(uploads|tmp|logs) { deny all; }
        # see previous comment
        location ~ /bin { deny all; }

        location ~ /var/(albums|thumbs|resizes) {
            # instead of repeating "albums|thumbs..", can I use $1 here? and
            # will $2 still be a valid capture then? Something like
            # "rewrite ^/var/$1/(.*)$ /file_proxy/$2 last;" perhaps?
            # furthermore, is this a legitimate use of "rewrite"?
            rewrite ^/var/(albums|thumbs|resizes)/(.*)$ /file_proxy/$2 last;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico|ttf)$ {
            try_files $uri /index.php?kohana_uri=$uri&$args;
            expires 30d;
        }
    }

    location = /index.php {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/vhost-3222.sock;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        include fastcgi_params;
    }
}

There are a couple of things I'm unsure about and a few other things that I'm unhappy with - I've outlined them with comments:

1. Is the outer "location /" block actually needed?
2. As you can see, I'm using three regexps to protect special directories. Is it desirable to consolidate those into one line?
3. The location block protecting "/var/(albums|...)" already captures a part of the URL - can I refer to "$1" in the "rewrite" clause? If so, can I still refer to "$2"? What would be the proper way to handle this?
4. From reading through a lot of threads, I get the impression that the use of "rewrite" is actually frowned upon. Is my use of "rewrite" a "legitimate" one?

Furthermore, I'd like to make the configuration a bit more "generic". As of now, it is assumed that the application is actually installed in the server's root directory. Could I use a variable to store the actual installation root and refer to this within the "location" directives?

I'd appreciate any and all insights you could share with me.
Please don't hesitate to tell me when I need to read certain parts of the documentation again :)

Cheers
Stefan

From daniel at malarhojden.nu Sun Sep 1 12:55:10 2013
From: daniel at malarhojden.nu (Daniel Lundqvist)
Date: Sun, 1 Sep 2013 20:55:10 +0800
Subject: SSL certificate chain
In-Reply-To: <8261F70D-EFEA-4BF9-8DFE-53D927B3E48B@greengecko.co.nz>
References: <8261F70D-EFEA-4BF9-8DFE-53D927B3E48B@greengecko.co.nz>
Message-ID: 

Hi,

They are. I get no errors from nginx whatsoever, just that no certificate after the first is ever sent. If I change the order I get an error about the key not matching, which is to be expected.

--
daniel

On 1 sep 2013, at 19:25, Steve Holdoway wrote:

> Make sure the server cert is first in the file, followed by the ca certs.
>
> Steve
>
> On 1/09/2013, at 11:11 PM, Daniel Lundqvist wrote:
>
>> Hi,
>>
>> I am trying to configure nginx 1.4.1 (using OpenSSL 1.0.1e) with a PEM encoded certificate file that contains the whole chain, 3 certificates including the Root CA. But I cannot get it to work. I have followed the documentation at http://nginx.org/en/docs/http/configuring_https_servers.html#chains and http://www.startssl.com/?app=42, but no matter what I do it seems I cannot get nginx to deliver more than one certificate. I have used both http://portecle.sourceforge.net and https://www.ssllabs.com/ssltest/ to verify. Other services (e.g. the dovecot IMAP server) on the same host, using the same version of OpenSSL and the same intermediate certificate and Root CA, work fine. How can I troubleshoot what is going wrong with nginx?
>>
>> Thanks in advance.
>> --
>> daniel
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4145 bytes
Desc: not available
URL: 

From nginx-forum at nginx.us Sun Sep 1 13:43:23 2013
From: nginx-forum at nginx.us (Sylvia)
Date: Sun, 01 Sep 2013 09:43:23 -0400
Subject: SSL certificate chain
In-Reply-To: 
References: 
Message-ID: 

Hi.
You can try to run diagnostics for problem discovery and recommendations:

https://www.ssllabs.com/ssltest/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242410,242417#msg-242417

From ar at xlrs.de Sun Sep 1 16:36:10 2013
From: ar at xlrs.de (Axel)
Date: Sun, 01 Sep 2013 18:36:10 +0200
Subject: SSL certificate chain
In-Reply-To: 
References: 
Message-ID: <1452171.reO6y3aHVy@lxrosenski.pag>

Hello,

what's your error? You just need to copy both certificates into one file with 'cat' or sth. similar. I use portecle to examine the chained file. Make sure that it's the right ca cert.

Regards,
Axel

Am Sonntag, 1. September 2013, 19:11:04 schrieb Daniel Lundqvist:
> Hi,
>
> I am trying to configure nginx 1.4.1 (using OpenSSL 1.0.1e) with a PEM
> encoded certificate file that contains the whole chain, 3 certificates
> including the Root CA. But I cannot get it to work. I have followed the
> documentation at
> http://nginx.org/en/docs/http/configuring_https_servers.html#chains and
> http://www.startssl.com/?app=42, but no matter what I do it seems I cannot
> get nginx to deliver more than one certificate. I have used both
> http://portecle.sourceforge.net and https://www.ssllabs.com/ssltest/ to
> verify. Other services (e.g. the dovecot IMAP server) on the same host,
> using the same version of OpenSSL and the same intermediate certificate
> and Root CA, work fine. How can I troubleshoot what is going wrong with
> nginx?
>
> Thanks in advance.
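[Editor's note] Axel's 'cat' suggestion can be sketched as follows. The filenames and placeholder file contents are assumptions for illustration only; in practice each file would hold a real PEM block:

```shell
# Build a chained PEM for nginx's ssl_certificate directive.
# Placeholder contents stand in for real "-----BEGIN CERTIFICATE-----" blocks.
printf 'SERVER CERT\n'       > server.crt        # your server certificate
printf 'INTERMEDIATE CERT\n' > intermediate.crt  # issuing CA (e.g. StartSSL sub CA)
printf 'ROOT CERT\n'         > root.crt          # root CA (optional in the chain)

# Order matters: the server certificate must come first, then each issuing
# CA in turn. The wrong order produces a key-mismatch error at startup.
cat server.crt intermediate.crt root.crt > chained.pem

head -n 1 chained.pem   # prints "SERVER CERT"
```

nginx's ssl_certificate directive would then point at chained.pem; the "key not matching" error Daniel mentions is what nginx reports when the server certificate is not the first entry in the file.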
From nginx-forum at nginx.us Sun Sep 1 20:58:42 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 01 Sep 2013 16:58:42 -0400
Subject: Transforming nginx for Windows
Message-ID: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org>

Working on getting really high performance with nginx under Windows, I am rewriting code and already got around the fd_setsize issue following http://stackoverflow.com/questions/7976388/increasing-limit-of-fd-setsize-and-select/18530636 which is documented at http://support.microsoft.com/kb/111855

I came across an interesting issue (FD_SETSIZE compiled for 8196): when worker_connections is set to 1024 I can get a max of 4500 true concurrent connections working; when worker_connections is set to 2048 I can get a max of 9000 true concurrent connections working. Is there some kind of recycling of FDs going on inside nginx? If not, I need to look somewhere else. I intend to also solve the worker_processes issue, but I first want to find out who is recycling the FDs.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242426#msg-242426

From xsanch at gmail.com Sun Sep 1 21:29:46 2013
From: xsanch at gmail.com (Jorge Sanchez)
Date: Sun, 1 Sep 2013 16:29:46 -0500
Subject: NGINX perl module to server files
In-Reply-To: 
References: 
Message-ID: 

Hello list,

here is the strace of NGINX serving my media file (jpg) to the client:

Accepting the connection:

25003 accept4(6, {sa_family=AF_INET, sin_port=htons(53907), sin_addr=inet_addr("XXX.XXX.XXX.XX")}, [16], SOCK_NONBLOCK) = 3

Opening the file:

25003 open("/usr/site/gruppe/media/t_pics/images/thumbs/714.jpg", O_RDONLY|O_NONBLOCK|O_LARGEFILE) = 15
25003 fstat64(15, {st_mode=S_IFREG|0755, st_size=5420, ...}) = 0

Sending the file to the client with sendfile:

25003 sendfile64(3, 15, [0], 5420) = 5420

So from the above it turns out that no headers were sent, and thus the status code probably defaulted to "000" on NGINX.

Adding send_http_header before the $r->sendfile() solves the issue:

$r->send_http_header();

Now I have the correct HTTP status code; however, the content type defaults to "application/octet-stream", which is configured as the default content type on nginx. Is there a way to have NGINX correctly set the Content-Type after handling the request in the perl content handler, or should I make my own mapping and set the content type myself in send_http_header?

HTTP/1.1 200 OK
Server: nginx/1.5.5
Date: Sun, 01 Sep 2013 21:17:41 GMT
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Connection: keep-alive

Regards,
Jorge

On Sat, Aug 31, 2013 at 4:46 PM, Jorge Sanchez wrote:

> Hello,
>
> I have created a perl NGINX module to serve static files on NGINX
> (mainly images). For security reasons I am generating an AES:CBC encrypted
> url which I am decrypting on NGINX and serving the file via the NGINX perl
> module. The problem is that I am sometimes getting the below response with
> the HTTP response code set to 000:
>
> XX.XX.XX.XX - - [01/Sep/2013:01:20:37 +0400] "GET
> /media/u5OU/NRkImrrwH/TThHe7hns5bOEv+Aou2/VJ8YD/ts= HTTP/1.1" *000* 39078
> "http://XXXX/full/JcbyEJTb8nMh+YH0xSg1jgl4N7vWQi2xBPep7VcJmD8="
> "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101
> Firefox/23.0"
>
> The way I handle the url in the perl module is:
>
> In case the file is found:
> $r->sendfile($fileresult[0]);
> $r->flush();
> return OK;
>
> else:
> $r->status(404);
> return DECLINED;
>
> My question is if I am sending the files correctly or is there any other
> specific value I should send back from perl (besides returning OK).
>
> If needed I can send the nginx.conf.
>
> Thanks for your help.
>
> Regards,
>
> Jorge
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vbart at nginx.com Sun Sep 1 23:14:55 2013
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 2 Sep 2013 03:14:55 +0400
Subject: Transforming nginx for Windows
In-Reply-To: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org>
References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <201309020314.55179.vbart@nginx.com>

On Monday 02 September 2013 00:58:42 itpp2012 wrote:
> Working on getting really high performance with nginx under Windows, I am
> rewriting code and already got around the fd_setsize issue following
> http://stackoverflow.com/questions/7976388/increasing-limit-of-fd-setsize-and-select/18530636
> which is documented at http://support.microsoft.com/kb/111855
>
> I came across an interesting issue (FD_SETSIZE compiled for 8196): when
> worker_connections is set to 1024 I can get a max of 4500 true concurrent
> connections working; when worker_connections is set to 2048 I can get a max
> of 9000 true concurrent connections working. Is there some kind of
> recycling of FDs going on inside nginx? If not, I need to look somewhere
> else. I intend to also solve the worker_processes issue, but I first want
> to find out who is recycling the FDs.
>

Yes, there is one. See the ngx_drain_connections() function with the accompanying ngx_reusable_connection().

wbr, Valentin V. Bartenev

From mdounin at mdounin.ru Mon Sep 2 01:27:48 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 2 Sep 2013 05:27:48 +0400
Subject: NGINX perl module to server files
In-Reply-To: 
References: 
Message-ID: <20130902012748.GM29448@mdounin.ru>

Hello!

On Sun, Sep 01, 2013 at 04:29:46PM -0500, Jorge Sanchez wrote:

[...]

> So from the above it turns out that no headers were sent, and thus the
> status code probably defaulted to "000" on NGINX.
>
> Adding send_http_header before the $r->sendfile() solves the issue:
> $r->send_http_header();

Glad to see you've solved your problem with perl code.
BTW, there are some examples at http://nginx.org/en/docs/http/ngx_http_perl_module.html which may help.

> Now I have the correct HTTP status code; however, the content type defaults
> to "application/octet-stream", which is configured as the default content
> type on nginx. Is there a way to have NGINX correctly set the Content-Type
> after handling the request in the perl content handler, or should I make my
> own mapping and set the content type myself in send_http_header?

The Content-Type nginx sets by itself is based on the extension as seen in the URI. As there is no extension in the URIs you use, it uses the default type. If the default type isn't what you want, you should either set the response type explicitly or reconsider the URIs used.

In your particular case, I would recommend using $r->internal_redirect() to an internal location instead of trying to send files yourself. (Or, alternatively, perl_set + rewrite should also work.) It should be much easier than trying to send files yourself from perl. See here for more details:

http://nginx.org/en/docs/http/ngx_http_perl_module.html

--
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us Mon Sep 2 07:06:17 2013
From: nginx-forum at nginx.us (mex)
Date: Mon, 02 Sep 2013 03:06:17 -0400
Subject: Securing nginx: Workers per server block under specific user?
In-Reply-To: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com>
References: <1377515672.31001.YahooMailNeo@web140502.mail.bf1.yahoo.com>
Message-ID: <65bbe0e89bdc4e54122d413f72afbcd8.NginxMailingListEnglish@forum.nginx.org>

How do you execute your php? If you are reverse proxying to an apache you might use suphp, as usual: http://www.suphp.org/Home.html

php-fpm has a similar option, as alex mentioned.

If you really need to define workers for each server, run an nginx instance for each of your websites; you can define its own user for each instance.
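[Editor's note] The one-instance-per-site approach mex describes could look roughly like the following sketch; every name, path, and port here is a hypothetical assumption for illustration, not taken from the thread:

```nginx
# /etc/nginx/site-a.conf -- hypothetical per-site instance, started with:
#   nginx -c /etc/nginx/site-a.conf
user  site-a;                     # this instance's workers run as "site-a"
pid   /var/run/nginx-site-a.pid;  # separate pid file so instances don't clash

events { }

http {
    server {
        listen 8081;              # each instance needs its own port or IP
        root   /srv/site-a/html;
    }
}
```

A second site would get its own config file with a different user, pid file, and listen address, each instance started and reloaded independently.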
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242209,242444#msg-242444

From daniel at malarhojden.nu Mon Sep 2 10:59:07 2013
From: daniel at malarhojden.nu (Daniel Lundqvist)
Date: Mon, 2 Sep 2013 18:59:07 +0800
Subject: SSL certificate chain
In-Reply-To: 
References: 
Message-ID: <8B05699B-0670-4E10-84E1-CB690E631B23@malarhojden.nu>

I have, it just says only 1 certificate is provided. Here are the test results:

https://www.ssllabs.com/ssltest/analyze.html?d=www.malarhojden.nu

--
daniel

On 1 sep 2013, at 21:43, Sylvia wrote:

> Hi.
> You can try to run diagnostics for problem discovery and recommendations:
>
> https://www.ssllabs.com/ssltest/
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242410,242417#msg-242417
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4145 bytes
Desc: not available
URL: 

From lists-nginx at swsystem.co.uk Mon Sep 2 11:12:52 2013
From: lists-nginx at swsystem.co.uk (Steve Wilson)
Date: Mon, 02 Sep 2013 12:12:52 +0100
Subject: SSL certificate chain
In-Reply-To: <8B05699B-0670-4E10-84E1-CB690E631B23@malarhojden.nu>
References: <8B05699B-0670-4E10-84E1-CB690E631B23@malarhojden.nu>
Message-ID: <92a72304b3b88c70929bf60007805822@swsystem.co.uk>

On 2013-09-02 11:59, Daniel Lundqvist wrote:
> I have, it just says only 1 certificate is provided. Here are the test
> results:
> https://www.ssllabs.com/ssltest/analyze.html?d=www.malarhojden.nu
...

I note that you're using startcom for the certificate. I recall that the intermediate certificate they say to use isn't actually the one provided, and I had to complete the certificate chain myself.

https://www.ssllabs.com/ssltest/analyze.html?d=www.stevewilson.co.uk

To build up my pem I started with the crt and key; then, running "openssl x509 -in cert.pem -noout -text", I was able to download the correct intermediate using the "CA Issuers - URI" provided in the certificate, appending this to the pem and retesting, repeating the process for each certificate until it became valid.

Authority Information Access:
OCSP - URI:http://ocsp.startssl.com/sub/class1/server/ca
CA Issuers - URI:http://aia.startssl.com/certs/sub.class1.server.ca.crt

It might be worth checking if your intermediate matches the above sub.class1.server.ca.crt one.

From daniel at malarhojden.nu Mon Sep 2 13:08:16 2013
From: daniel at malarhojden.nu (Daniel Lundqvist)
Date: Mon, 2 Sep 2013 21:08:16 +0800
Subject: SSL certificate chain
In-Reply-To: <92a72304b3b88c70929bf60007805822@swsystem.co.uk>
References: <8B05699B-0670-4E10-84E1-CB690E631B23@malarhojden.nu> <92a72304b3b88c70929bf60007805822@swsystem.co.uk>
Message-ID: <555E6D3A-74D5-48E1-9252-86CB6E97D995@malarhojden.nu>

So, mysteries solved, I believe. A few things were wrong for me:

1) I had a catch-all virtual host using the same certificate file as the main site (configured both with an "invalid" server name and default_server for both HTTP and HTTPS)
2) It seems the virtual server is also selected based on the CN/SubjectAltName from the certificate, which I did not know (is this correct? Seems so from my testing)

So I changed the certificate on the catch-all virtual server to a self-signed one and now everything seems to be ok. Sorry for taking up your time with my misconfigured server. At least I learned something :)

--
daniel

On 2 sep 2013, at 19:12, Steve Wilson wrote:

> On 2013-09-02 11:59, Daniel Lundqvist wrote:
>> I have, it just says only 1 certificate is provided. Here are the test
>> results:
>> https://www.ssllabs.com/ssltest/analyze.html?d=www.malarhojden.nu
> ...
>
> I note that you're using startcom for the certificate. I recall that the intermediate certificate they say to use isn't actually the one provided, and I had to complete the certificate chain myself.
>
> https://www.ssllabs.com/ssltest/analyze.html?d=www.stevewilson.co.uk
>
> To build up my pem I started with the crt and key; then, running "openssl x509 -in cert.pem -noout -text", I was able to download the correct intermediate using the "CA Issuers - URI" provided in the certificate, appending this to the pem and retesting, repeating the process for each certificate until it became valid.
>
> Authority Information Access:
> OCSP - URI:http://ocsp.startssl.com/sub/class1/server/ca
> CA Issuers - URI:http://aia.startssl.com/certs/sub.class1.server.ca.crt
>
> It might be worth checking if your intermediate matches the above sub.class1.server.ca.crt one.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4145 bytes
Desc: not available
URL: 

From nginx-forum at nginx.us Mon Sep 2 17:26:47 2013
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 02 Sep 2013 13:26:47 -0400
Subject: Transforming nginx for Windows
In-Reply-To: <201309020314.55179.vbart@nginx.com>
References: <201309020314.55179.vbart@nginx.com>
Message-ID: <2368f0fcbc55c77ba27d61b5fb2155b6.NginxMailingListEnglish@forum.nginx.org>

Found them, tnx; no adjustment needed here, it's dealing with the much larger FD table without problems. Got up to 12k concurrent connections today, one worker, one cpu, at around 40% utilization. Can't get beyond that yet due to the test tool not being able to go beyond 12k :) If anyone wants to test as well let me know, I can place the binary (based on 1.4.2) somewhere.
Valentin, can you explain what the problem is with multiple workers and which source files are involved?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242468#msg-242468

From nginx-forum at nginx.us Tue Sep 3 05:22:36 2013
From: nginx-forum at nginx.us (rmombassa)
Date: Tue, 03 Sep 2013 01:22:36 -0400
Subject: mail proxy to 3rd party using ssl
Message-ID: <1faa28d2cf5e5c01bdc173b92caa3300.NginxMailingListEnglish@forum.nginx.org>

I am setting up nginx as a POP3 mail proxy to two 3rd party mail servers. Different domains; one of them uses SSL.

Since I do not have that 3rd party's SSL certificate, I use my own company certificate in nginx. That cert is properly signed but obviously belongs to another domain (our domain).

If I connect to the non-ssl server through nginx all works fine (port 110 on nginx and the 3rd party server).

If I connect to the ssl domain through nginx (port 995 on nginx and the 3rd party server) I seem to not get a response from the 3rd party server. The authentication routine on connection establishment is properly called by nginx (correct uname/pw) and it returns that the user is OK (the correct 3rd party IP address is returned as well).

Using the email client without the proxy works fine, meaning: uname/pw are correct.

Questions:
- Is such a configuration possible at all (ssl to a 3rd party server without having that server's certificate installed on nginx)?
- Is nginx in this configuration a man-in-the-middle? Could that be a problem?
- Any idea how to further debug?

Thanks,
Rick

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242473,242473#msg-242473

From agentzh at gmail.com Tue Sep 3 07:37:27 2013
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 3 Sep 2013 00:37:27 -0700
Subject: [ANN] ngx_openresty devel version 1.4.2.3 released
Message-ID: 

Hello folks!
I am glad to announce that the new development version of ngx_openresty, 1.4.2.3, is now released:

http://openresty.org/#Download

Special thanks go to all the contributors for making this happen!

Below is the complete change log for this release, as compared to the last (devel) release, 1.4.2.1:

* upgraded LuaNginxModule to 0.8.7.
  * feature: log_by_lua* now always runs before the standard ngx_http_log_module (for access logging). thanks Calin Don for the suggestion.
  * feature: added new API ngx.config.debug to indicate whether this is a debug build of Nginx (that is, being built by the "./configure" option "--with-debug").
  * bugfix: the global Lua state's "_G" table was cleared when lua_code_cache was off, which could confuse the setup in init_by_lua*. thanks Robert Andrew Ditthardt for the report.
  * bugfix: ngx.flush() triggered response header sending when the header was not sent yet. now it just returns the error string "nothing to flush" for this case. thanks linbo liao for the report.
  * bugfix: when a Lua line comment was used in the last line of the inlined Lua code chunk, a bogus Lua syntax error would be thrown.
  * bugfix: ngx.exit(204) could try to send the response header twice. Nginx 1.5.4 caught this issue.
  * bugfix: the error message for failures in loading inlined Lua code was misleading.
* upgraded EchoNginxModule to 0.47.
  * bugfix: use of C global variables at configuration time could lead to issues when HUP reload failed in the middle.
  * bugfix: we might send the response header twice when an error happens. this issue is exposed by Nginx 1.5.4. thanks Markus Linnala for the report.
* upgraded DrizzleNginxModule to 0.1.6.
  * bugfix: compilation error happened with nginx 1.5.3+ because Nginx changed the "ngx_sock_ntop" API.
  * docs: typo fixes from smallfish.
* upgraded MemcNginxModule to 0.13.
  * bugfix: fixed compatibility issues with the new upstream C API in Nginx 1.5.3+. thanks Markus Linnala for the patch.
  * bugfix: use of C global variables at configuration time could cause issues when HUP reload failed in the middle.
  * docs: now we recommend LuaRestyMemcachedLibrary instead when being used with LuaNginxModule.
* applied the unix_socket_accept_over_read patch to the Nginx core to fix a memory over-read issue when Nginx was accepting a unix domain socket.

The HTML version of the change log with some helpful hyper-links can be browsed here:

http://openresty.org/#ChangeLog1004002

We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here:

http://qa.openresty.org

Have fun!
-agentzh

From mdounin at mdounin.ru Tue Sep 3 12:01:41 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 3 Sep 2013 16:01:41 +0400
Subject: mail proxy to 3rd party using ssl
In-Reply-To: <1faa28d2cf5e5c01bdc173b92caa3300.NginxMailingListEnglish@forum.nginx.org>
References: <1faa28d2cf5e5c01bdc173b92caa3300.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20130903120140.GI65634@mdounin.ru>

Hello!

On Tue, Sep 03, 2013 at 01:22:36AM -0400, rmombassa wrote:

> I am setting up nginx as a POP3 mail proxy to two 3rd party mail servers.
> Different domains; one of them uses SSL.
>
> Since I do not have that 3rd party's SSL certificate, I use my own company
> certificate in nginx. That cert is properly signed but obviously belongs to
> another domain (our domain).
>
> If I connect to the non-ssl server through nginx all works fine (port 110
> on nginx and the 3rd party server).
>
> If I connect to the ssl domain through nginx (port 995 on nginx and the 3rd
> party server) I seem to not get a response from the 3rd party server. The
> authentication routine on connection establishment is properly called by
> nginx (correct uname/pw) and it returns that the user is OK (the correct
> 3rd party IP address is returned as well).
> Using the email client without the proxy works fine, meaning: uname/pw are
> correct.
>
> Questions:
> - Is such a configuration possible at all (ssl to a 3rd party server
> without having that server's certificate installed on nginx)?
> - Is nginx in this configuration a man-in-the-middle? Could that be a
> problem?
> - Any idea how to further debug?

The main problem you are facing right now is that nginx doesn't support SSL mail backends.

And as far as I understand the description of what you are trying to do, it's a MITM, and it's not going to work unless you control the clients and can convince them to accept your certificate. But it's likely not a problem, as you already have everything working with non-ssl backends.

--
Maxim Dounin
http://nginx.org/en/donation.html

From rkearsley at blueyonder.co.uk Tue Sep 3 12:42:14 2013
From: rkearsley at blueyonder.co.uk (Richard Kearsley)
Date: Tue, 03 Sep 2013 13:42:14 +0100
Subject: upstream keepalive debugging
Message-ID: <5225D926.9060508@blueyonder.co.uk>

Hi

I seem to have an issue where the upstream keepalives aren't being re-used:

proxy_http_version 1.1;

upstream dev1 {
    server 10.0.0.11 max_fails=0;
    keepalive 1024;
}

location / {
    proxy_pass http://dev1;
    proxy_set_header Connection "";
}

On a separate server I run 'ab -n 500 -c 500 http://10.0.0.10/test/blah.txt' a few times, waiting say 10 seconds between runs. On the main server I can do netstat between runs; here are the results:

root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
2
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
260
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
758
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
950
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
1308
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
1748
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
1992
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
2316
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
2767
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
3063
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
3392
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
3491
root at dev0:/root # netstat -n | grep "10.0.0.11.80" | grep "ESTAB" | wc -l
3787

It shouldn't ever need more than 500 connections to the upstream, but it keeps making more, and doesn't stick to the 1024 limit... What's going on?

nginx/1.4.1 / FreeBSD 9.1

From nginx-forum at nginx.us Tue Sep 3 12:52:09 2013
From: nginx-forum at nginx.us (ixos)
Date: Tue, 03 Sep 2013 08:52:09 -0400
Subject: Return file when it's in cache/check if file exists in cache
Message-ID: <9a2ab6ec02a3dfcc03969702947e9d74.NginxMailingListEnglish@forum.nginx.org>

I'm using nginx as a proxy server. There is a situation when the backend server is down and I know about it. Thus I'm adding a 'Check-Cache' header to the request, and what I want to do is get the file when it is in the cache, and when it is not, just return an error page. I don't want to pass the request to the backend.
Scenario:

$ curl -s -o /dev/null -w "%{http_code}" 0/images/2.jpg -H'Host: example.com'
200
$ pkill -9 backend_server
$ curl -s -o /dev/null -w "%{http_code}" 0/images/2.jpg -H'Host: example.com' -H'Check-Cache: true'
404 <-- I want a 200 here

nginx.conf:

server {
    listen 80;
    underscores_in_headers on;
    proxy_cache_purge on;

    location / {
        error_page 599 = @jail;
        recursive_error_pages on;
        if ($http_check_cache = "true") {
            return 599;
        }

        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_cache my-cache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_key $uri$is_args$args;
    }

    location @jail {
        # try_files ??
        # proxy_cache_key (If i can get it i can use try_files)
        # what other solution...
        return 404;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242488,242488#msg-242488

From rkearsley at blueyonder.co.uk Tue Sep 3 13:19:32 2013
From: rkearsley at blueyonder.co.uk (Richard Kearsley)
Date: Tue, 03 Sep 2013 14:19:32 +0100
Subject: upstream keepalive debugging
In-Reply-To: <5225D926.9060508@blueyonder.co.uk>
References: <5225D926.9060508@blueyonder.co.uk>
Message-ID: <5225E1E4.40908@blueyonder.co.uk>

Ah, let me guess - is the keepalive number "per worker"?

On 03/09/13 13:42, Richard Kearsley wrote:
> Hi
> I seem to have an issue where the upstream keepalives aren't being
> re-used
>
> It shouldn't ever need more than 500 connections to the upstream, but
> it keeps making more, and doesn't stick to the 1024 limit...
> What's going on?
>

From mdounin at mdounin.ru Tue Sep 3 13:24:45 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 3 Sep 2013 17:24:45 +0400
Subject: upstream keepalive debugging
In-Reply-To: <5225E1E4.40908@blueyonder.co.uk>
References: <5225D926.9060508@blueyonder.co.uk> <5225E1E4.40908@blueyonder.co.uk>
Message-ID: <20130903132445.GL65634@mdounin.ru>

Hello!

On Tue, Sep 03, 2013 at 02:19:32PM +0100, Richard Kearsley wrote:

> Ah, let me guess - is the keepalive number "per worker"?
Sure, and it's what documentation explicitly states, see http://nginx.org/r/keepalive: : The connections parameter sets the maximum number of idle : keepalive connections to upstream servers that are preserved in : the cache of each worker process. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Sep 3 14:39:49 2013 From: nginx-forum at nginx.us (bkosborne) Date: Tue, 03 Sep 2013 10:39:49 -0400 Subject: proxy buffering for media files? Message-ID: I'm working on a configuration for an nginx proxy that splits requests between two upstream servers. The main reason I'm using a proxy is for SSL termination and for redundancy between the two upstream servers. Each upstream server is just a simple nginx server with identical media files stored on each. The largest media file requested is around 2.5 megabytes. There are files larger than that, but they are requested in byte-ranges from our CDN. I'm wondering how I should configure proxy buffering here. I noticed that the default is set to on: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering. But, I'm not sure the default values for the buffer sizes and whatnot are ideal. I've had proxy_buffering turned on for quite a while now, and have noticed that the proxy_temp directory has over 1 GB of data in it. To my understanding, this folder is used when the in-memory buffers cannot hold all of the data from the upstream, so it is written to disk. 1. If I set proxy_buffering to off, does that mean that the proxy streams the data directly to the client without buffering anything? Essentially, that would mean that an nginx worker would be "busy" on both the upstream and proxy server for the entire duration of the request, correct? 2. If I keep it on, does it make sense to change the buffer sizes so that the entire response from the upstream can fit into memory? I assume that would speed up the responses so that nothing is written to disk (slow). 
From my novice perspective, it seems counterintuitive to essentially read a file from upstream disk, write it to proxy disk, and then read it from proxy disk again. What is a common use case for using proxy_buffering? Since it's a default option, I assume it's commonly used and for good reason. I'm just having a hard time applying the thought process to my specific setup. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242495,242495#msg-242495 From nginx-forum at nginx.us Tue Sep 3 14:44:34 2013 From: nginx-forum at nginx.us (bkosborne) Date: Tue, 03 Sep 2013 10:44:34 -0400 Subject: proxy buffering for media files? In-Reply-To: References: Message-ID: <9a7e5c7a1832bd82727bc1a8bb1bad09.NginxMailingListEnglish@forum.nginx.org> One thing I thought of is that proxy_buffering is ideal if you have slow clients - where downloading the media files could take a long time. In this case, the goal would be to free up upstream workers. However, since my upstream is NOT an application server, and just nginx, is that really necessary? The only thing I can think of there is that it could be bad to keep all those "slow" connections open when reading the response from disk. If there are 100 client connections for different media files, and they are all downloading very slowly, maybe it would have a negative performance impact on the storage servers to be reading all that at once. But with proxy_buffering turned on, I assume that the entire response is read from disk RIGHT AWAY, and then stored in the buffer on the proxy. But if the proxy is just writing that response back to disk, that doesn't really matter much, does it? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242495,242498#msg-242498 From mdounin at mdounin.ru Tue Sep 3 14:57:44 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Sep 2013 18:57:44 +0400 Subject: proxy buffering for media files? In-Reply-To: References: Message-ID: <20130903145744.GO65634@mdounin.ru> Hello!
On Tue, Sep 03, 2013 at 10:39:49AM -0400, bkosborne wrote: > I'm working on a configuration for an nginx proxy that splits requests > between two upstream servers. The main reason I'm using a proxy is for SSL > termination and for redundancy between the two upstream servers. Each > upstream server is just a simple nginx server with identical media files > stored on each. > > The largest media file requested is around 2.5 megabytes. There are files > larger than that, but they are requested in byte-ranges from our CDN. > > I'm wondering how I should configure proxy buffering here. I noticed that > the default is set to on: > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering. > But, I'm not sure the default values for the buffer sizes and whatnot are > ideal. I've had proxy_buffering turned on for quite a while now, and have > noticed that the proxy_temp directory has over 1 GB of data in it. To my > understanding, this folder is used when the in-memory buffers cannot hold > all of the data from the upstream, so it is written to disk. > > 1. If I set proxy_buffering to off, does that mean that the proxy streams > the data directly to the client without buffering anything? Essentially, > that would mean that an nginx worker would be "busy" on both the upstream > and proxy server for the entire duration of the request, correct? > > 2. If I keep it on, does it make sense to change the buffer sizes so that > the entire response from the upstream can fit into memory? I assume that > would speed up the responses so that nothing is written to disk (slow). From > my novice perspective, it seems counter intuitive to essentially read a file > from upstream disk, write it to proxy disk, and then read it from proxy disk > again. > > What is a common use case for using proxy_buffering? Since it's a default > option, I assume it's commonly used and for good reason. I'm just having a > hard time applying the thought process to my specific setup. 
As long as your backend servers aren't limited in the number of connections they can handle, the best approach would be to keep proxy_buffering switched on, but switch off disk buffering using proxy_max_temp_file_size. See here for details: http://nginx.org/r/proxy_max_temp_file_size -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Sep 3 16:32:35 2013 From: nginx-forum at nginx.us (lalit.jss) Date: Tue, 03 Sep 2013 12:52:35 -0400 Subject: Nginx + Circus + chaussette Message-ID: <982a2a145783c6bb92d3bc0e3db4f6f4.NginxMailingListEnglish@forum.nginx.org> Hello, I am using Nginx with Circus. Circus is used with chaussette. Very rarely I am seeing a 504 error. The Nginx error log says:

upstream timed out (110: Connection timed out) while reading response header from upstream, client: A.B.C.D, server: servername.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "servername.com"

The processes listening on port 8080 are circusd and chaussette. What could be the cause? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242502,242502#msg-242502 From artemrts at ukr.net Wed Sep 4 04:12:22 2013 From: artemrts at ukr.net (wishmaster) Date: Wed, 04 Sep 2013 07:12:22 +0300 Subject: proxy buffering for media files? In-Reply-To: <20130903145744.GO65634@mdounin.ru> References: <20130903145744.GO65634@mdounin.ru> Message-ID: <1378267875.224444326.6gdv7qhh@zebra-x17.ukr.net> --- Original message --- From: "Maxim Dounin" Date: 3 September 2013, 17:58:00

> Hello!
>
> On Tue, Sep 03, 2013 at 10:39:49AM -0400, bkosborne wrote:
>
> > I'm working on a configuration for an nginx proxy that splits requests
> > between two upstream servers. The main reason I'm using a proxy is for SSL
> > termination and for redundancy between the two upstream servers. Each
> > upstream server is just a simple nginx server with identical media files
> > stored on each.
> >
> > The largest media file requested is around 2.5 megabytes.
> > There are files
> > larger than that, but they are requested in byte-ranges from our CDN.
> >
> > I'm wondering how I should configure proxy buffering here. I noticed that
> > the default is set to on:
> > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering.
> > But, I'm not sure the default values for the buffer sizes and whatnot are
> > ideal. I've had proxy_buffering turned on for quite a while now, and have
> > noticed that the proxy_temp directory has over 1 GB of data in it. To my
> > understanding, this folder is used when the in-memory buffers cannot hold
> > all of the data from the upstream, so it is written to disk.
> >
> > 1. If I set proxy_buffering to off, does that mean that the proxy streams
> > the data directly to the client without buffering anything? Essentially,
> > that would mean that an nginx worker would be "busy" on both the upstream
> > and proxy server for the entire duration of the request, correct?
> >
> > 2. If I keep it on, does it make sense to change the buffer sizes so that
> > the entire response from the upstream can fit into memory? I assume that
> > would speed up the responses so that nothing is written to disk (slow). From
> > my novice perspective, it seems counter intuitive to essentially read a file
> > from upstream disk, write it to proxy disk, and then read it from proxy disk
> > again.
> >
> > What is a common use case for using proxy_buffering? Since it's a default
> > option, I assume it's commonly used and for good reason. I'm just having a
> > hard time applying the thought process to my specific setup.
>
> As long as your backend servers aren't limited on the number of
> connections they can handle, best aproach would be to keep
> proxy_buffering switched on, but switch off disk buffering using
> proxy_max_temp_file_size.

Could you explain why this approach is not suitable for the case when backend servers are limited in the number of connections they can handle?
Thanks, v From nginx-forum at nginx.us Wed Sep 4 07:55:12 2013 From: nginx-forum at nginx.us (anon69) Date: Wed, 04 Sep 2013 03:55:12 -0400 Subject: s-maxage header Message-ID: Hi, What's the easiest way to set the s-maxage header correctly? For example to set s-maxage to 365 days inside a location statement that matches images, JS and CSS files. This is needed so that the proxy will cache the file but the client won't. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242514,242514#msg-242514 From mdounin at mdounin.ru Wed Sep 4 10:45:08 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Sep 2013 14:45:08 +0400 Subject: proxy buffering for media files? In-Reply-To: <1378267875.224444326.6gdv7qhh@zebra-x17.ukr.net> References: <20130903145744.GO65634@mdounin.ru> <1378267875.224444326.6gdv7qhh@zebra-x17.ukr.net> Message-ID: <20130904104508.GZ65634@mdounin.ru> Hello! On Wed, Sep 04, 2013 at 07:12:22AM +0300, wishmaster wrote: [...] > > > What is a common use case for using proxy_buffering? Since it's a default > > > option, I assume it's commonly used and for good reason. I'm just having a > > > hard time applying the thought process to my specific setup. > > > > As long as your backend servers aren't limited on the number of > > connections they can handle, best aproach would be to keep > > proxy_buffering switched on, but switch off disk buffering using > > proxy_max_temp_file_size. > > > Could you explain why this approach is not suitable for case > when backend servers are limited on number of connections. If backends are connection-bound, in many cases it's more effective to buffer responses to disk instead of keeping backend connections busy. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Sep 4 14:45:12 2013 From: nginx-forum at nginx.us (bkosborne) Date: Wed, 04 Sep 2013 10:45:12 -0400 Subject: proxy buffering for media files? 
In-Reply-To: <20130903145744.GO65634@mdounin.ru> References: <20130903145744.GO65634@mdounin.ru> Message-ID: <2973298cffe7c328f83b6be832868721.NginxMailingListEnglish@forum.nginx.org> Hmm okay, so that would essentially buffer as much as it can in RAM (which really wouldn't be much, based on the default buffer sizes). Once that in-memory buffer becomes full, then what happens? Does it start sending the client the data that's in the buffer, as well as anything that isn't? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242495,242526#msg-242526 From mdounin at mdounin.ru Wed Sep 4 15:19:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Sep 2013 19:19:01 +0400 Subject: proxy buffering for media files? In-Reply-To: <2973298cffe7c328f83b6be832868721.NginxMailingListEnglish@forum.nginx.org> References: <20130903145744.GO65634@mdounin.ru> <2973298cffe7c328f83b6be832868721.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130904151901.GE65634@mdounin.ru> Hello!

On Wed, Sep 04, 2013 at 10:45:12AM -0400, bkosborne wrote:

> Hmm okay, so that would essentially buffer as much as it can in RAM (which
> really wouldn't be much based on the default buffer sizes). Once that in
> memory buffer becomes full, then what happens? It starts sending the data to
> the client thats in the buffer as well any anything that isn't?

When all buffers are full, nginx will stop reading data from the upstream server till some buffers are sent to the client.

-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Sep 4 15:40:27 2013 From: nginx-forum at nginx.us (bkosborne) Date: Wed, 04 Sep 2013 11:40:27 -0400 Subject: proxy buffering for media files? In-Reply-To: <20130904151901.GE65634@mdounin.ru> References: <20130904151901.GE65634@mdounin.ru> Message-ID: Why not just turn off buffering completely?
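[Editorial note: the approach recommended earlier in this thread — keep proxy_buffering on but disable only the disk stage — looks like this in configuration form. This is a sketch with an assumed upstream name, not taken from anyone's actual config in the thread:

    location / {
        proxy_pass http://media_backends;  # hypothetical upstream name
        proxy_buffering on;                # the default: buffer in memory
        # Never spool responses to disk (proxy_temp). Once the memory
        # buffers are full, nginx simply stops reading from the upstream
        # until some buffers have been sent to the client.
        proxy_max_temp_file_size 0;
    }

The memory buffers themselves are tuned separately via proxy_buffers and proxy_buffer_size.]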
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242495,242528#msg-242528 From mdounin at mdounin.ru Wed Sep 4 16:40:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 Sep 2013 20:40:59 +0400 Subject: proxy buffering for media files? In-Reply-To: References: <20130904151901.GE65634@mdounin.ru> Message-ID: <20130904164059.GF65634@mdounin.ru> Hello! On Wed, Sep 04, 2013 at 11:40:27AM -0400, bkosborne wrote: > Why not just turn off buffering completely? There are at least three reasons: 1) Turning off buffering will result in more CPU usage (and worse network utilization in some cases). 2) It doesn't work with limit_rate (not even talking about proxy_cache which implies disk buffering). 3) Even small memory buffering saves some backend connections, and you can tune number/size of buffers used based on the available memory. General recommendation is to avoid switching off proxy_buffering unless your application really needs it, e.g. it does some form of HTTP low-bandwidth streaming and needs nginx to send data to a client immediately. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Wed Sep 4 19:30:47 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 04 Sep 2013 15:30:47 -0400 Subject: Transforming nginx for Windows In-Reply-To: <2368f0fcbc55c77ba27d61b5fb2155b6.NginxMailingListEnglish@forum.nginx.org> References: <201309020314.55179.vbart@nginx.com> <2368f0fcbc55c77ba27d61b5fb2155b6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <80701b53b3b2dd2453113cff08b409f2.NginxMailingListEnglish@forum.nginx.org> If you want to test along pushing the max concurrent limit here's my experimental version: nginx 1.4.2 experimental b01.zip http://www.sendspace.com/file/zc4ak8 MD5: 812ea5e77b39a11468291d9cb9b87503 SHA1: a2fb9e89fb272a3b3ee6162667f88e662c591ba6 Got to 20k concurrent connections today, anyone know of a test tool that can go beyond 20k concurrent? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242545#msg-242545 From nginx-forum at nginx.us Wed Sep 4 22:54:06 2013 From: nginx-forum at nginx.us (etrader) Date: Wed, 04 Sep 2013 18:54:06 -0400 Subject: a regex for rewrite Message-ID: <54c6459af5314e0d05abe8f64bc83c4b.NginxMailingListEnglish@forum.nginx.org> I have a set of rewrites as

rewrite ^/(.*)/(.*)/(.*)\.(.*) /script.php?a=$1&b=$2&c=$3&ext=$4 last;
rewrite ^/(.*)/(.*)\.(.*) /script.php?a=$1&b=$2&ext=$3 last;
rewrite ^/(.*)\.(.*) /script.php?a=$1&ext=$2 last;
rewrite ^/(.*)/(.*)/(.*)/(.*) /script.php?a=$1&b=$2&c=$3&d=$4 last;
rewrite ^/(.*)/(.*)/(.*) /script.php?a=$1&b=$2&c=$3 last;
rewrite ^/(.*)/(.*) /script.php?a=$1&b=$2 last;

How can I use one single rewrite rule to match all possible choices? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242547,242547#msg-242547 From francis at daoine.org Thu Sep 5 00:05:17 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 5 Sep 2013 01:05:17 +0100 Subject: a regex for rewrite In-Reply-To: <54c6459af5314e0d05abe8f64bc83c4b.NginxMailingListEnglish@forum.nginx.org> References: <54c6459af5314e0d05abe8f64bc83c4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130905000517.GB19345@craic.sysops.org> On Wed, Sep 04, 2013 at 06:54:06PM -0400, etrader wrote:

Hi there,

> I have a set of rewrites as
>
> rewrite ^/(.*)/(.*)/(.*)\.(.*) /script.php?a=$1&b=$2&c=$3&ext=$4 last;
> rewrite ^/(.*)/(.*)\.(.*) /script.php?a=$1&b=$2&ext=$3 last;
> rewrite ^/(.*)\.(.*) /script.php?a=$1&ext=$2 last;
> rewrite ^/(.*)/(.*)/(.*)/(.*) /script.php?a=$1&b=$2&c=$3&d=$4 last;
> rewrite ^/(.*)/(.*)/(.*) /script.php?a=$1&b=$2&c=$3 last;
> rewrite ^/(.*)/(.*) /script.php?a=$1&b=$2 last;
>
> How can I use one single rewrite rule to match all possible choices?

If you really want to do that, I'd suggest using a programming language to do complicated programming, and using nginx.conf for simpler things.

location ~ ^/.*[/.]
{ rewrite ^ /script.php?E=READ_THE_REQUEST last; }

and then change script.php to process $_SERVER[REQUEST_URI] when it gets the special value E=READ_THE_REQUEST, so that it populates a, b, c, d, and ext as is appropriate. In fact, I suspect that I'd not use rewrite at all: either fastcgi_pass directly in this location; or proxy_pass sending the original $uri as an argument. (There are cases where this "location" is not a drop-in replacement for the initial "rewrite"s. If your system is one of those, you'll need a different plan.) But if your rewrite system works and is clear enough for you, you probably don't need to change it. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Sep 5 00:11:10 2013 From: francis at daoine.org (Francis Daly) Date: Thu, 5 Sep 2013 01:11:10 +0100 Subject: a regex for rewrite In-Reply-To: <20130905000517.GB19345@craic.sysops.org> References: <54c6459af5314e0d05abe8f64bc83c4b.NginxMailingListEnglish@forum.nginx.org> <20130905000517.GB19345@craic.sysops.org> Message-ID: <20130905001110.GC19345@craic.sysops.org> On Thu, Sep 05, 2013 at 01:05:17AM +0100, Francis Daly wrote:

Hmm...

> location ~ ^/.*[/.] {

That's probably more briefly written as

location ~ .[/.] {

f -- Francis Daly francis at daoine.org From lists at ruby-forum.com Thu Sep 5 05:27:40 2013 From: lists at ruby-forum.com (shubham s.) Date: Thu, 05 Sep 2013 07:27:40 +0200 Subject: Nginx : Reverse Proxy + Redis Message-ID: <9237b389232be36d04fb3ee82e714386@ruby-forum.com> Hi, I have an existing Nginx setup up and running to get a reverse proxy working with caching support (Nginx in-memory). I have integrated it with Lua to get some customizations for changing requests as per some biz rules. The system is used for serving XML, the reverse proxy fronting a SOA server. Now what I want is to have the below setup working with Redis Cache.
(The reason for using Redis is to keep the response broken up in a way that subsequent requests can exploit even partial data from earlier responses. I would merge the responses back to send the final XML.)

Request -> Check Redis cache (not found) -> Go to actual upstream server -> Save response back to Redis (async) -> Send the result back to the client

Was wondering how to move ahead with the approach/design, and from an implementation point of view too. I am kind of a newbie here. Regards, Shubham -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Sep 5 07:45:51 2013 From: lists at ruby-forum.com (shubham s.) Date: Thu, 05 Sep 2013 09:45:51 +0200 Subject: Location_Capture_By_Lua and Reverse Proxy Cache Message-ID: <4408906a996ccf92289c3f8c69e0ede1@ruby-forum.com> Hi, I was trying to get hold of the response received through the upstream server or through the nginx cache. However, I found that there's no cache hit when doing it this way. For brevity, I'm just posting the relevant code; it looks as if the cache is being ignored somewhere. Also, the headers out of the Passenger backend are not returned through the content_by_lua stuff.
location / {
    content_by_lua '
        local res = ngx.location.capture("/passenger_backend",
            { method = ngx.HTTP_POST, body = ngx.var.request_body })
        ngx.header["X-My-Header"] = "blah blah"
        ngx.log(ngx.CRIT, "Inside Content : ", "hello")
        ngx.say(res.body)
    ';
}

======

location /passenger_backend {
    proxy_pass http://web_units;
    proxy_cache small;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    client_max_body_size 500M;

    open_file_cache max=200000 inactive=50s;
    open_file_cache_valid 50s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    ################# Gzip Settings ################
    gzip on;
    gzip_comp_level 6;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    #gzip_proxied any;
    #gzip_static on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_disable "MSIE [1-6]\.";
    ###################################################

    proxy_buffering on;
    proxy_buffers 8 32M;
    proxy_buffer_size 64k;
    proxy_cache_valid 200 302 72h;
    # proxy_cache_valid 404 10m;
    fastcgi_read_timeout 120s;
    fastcgi_send_timeout 120s;
    proxy_read_timeout 120s;
    proxy_connect_timeout 120s;
    proxy_cache_use_stale updating;
    add_header X-Cached $upstream_cache_status;
    set_real_ip_from 10.86.102.0/24;
    real_ip_header X-Forwarded-For;
    proxy_ignore_headers Set-Cookie;
    proxy_ignore_headers Cache-Control;
}

-- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Sep 5 13:09:06 2013 From: nginx-forum at nginx.us (Falk) Date: Thu, 05 Sep 2013 09:09:06 -0400 Subject: proxy_ignore_headers Cache-Control + Set-Cookie do not work both Message-ID: Hi, in a reverse-proxy setup I want to ignore "Cache-Control:" and "Set-Cookie:" for .css and some more. Each one works perfectly.
Pages sent with a cookie are being cached:

location ~* \.(css|ico|js) {
    proxy_pass http://Upstream-server;
    proxy_ignore_headers Set-Cookie;
}

Pages sent with a cookie are being cached (just for reference):

location ~* \.(css|ico|js) {
    proxy_pass http://Upstream-server;
    proxy_ignore_headers Set-Cookie Expires;
}

Pages sent with a cookie are NOT being cached:

location ~* \.(css|ico|js) {
    proxy_pass http://Upstream-server;
    proxy_ignore_headers Set-Cookie Cache-Control;
}

nginx version: nginx/1.1.19, Ubuntu 12.04 LTS. What am I doing wrong? Falk Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242558,242558#msg-242558 From nginx-forum at nginx.us Thu Sep 5 15:20:20 2013 From: nginx-forum at nginx.us (etrader) Date: Thu, 05 Sep 2013 11:20:20 -0400 Subject: a regex for rewrite In-Reply-To: <20130905001110.GC19345@craic.sysops.org> References: <20130905001110.GC19345@craic.sysops.org> Message-ID: Thanks for the very subtle suggestion. I do agree with you and will follow this strategy (using a programming language to process the URL arguments). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242547,242565#msg-242565 From mdounin at mdounin.ru Thu Sep 5 17:33:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Sep 2013 21:33:21 +0400 Subject: proxy_ignore_headers Cache-Control + Set-Cookie do not work both In-Reply-To: References: Message-ID: <20130905173321.GQ65634@mdounin.ru> Hello!

On Thu, Sep 05, 2013 at 09:09:06AM -0400, Falk wrote:

> Hi,
> in a reverse-proxy setup I want to ignore "Cache-Control:" and "Set-Cookie:"
> for .css and some more.
>
> Each one works perfectly.
>
> Pages sent with a cookie are being cached:
> location ~* \.(css|ico|js) {
> proxy_pass http://Upstream-server;
> proxy_ignore_headers Set-Cookie; }
>
> Pages sent with a cookie are being cached (just for reference):
> location ~* \.(css|ico|js) {
> proxy_pass http://Upstream-server;
> proxy_ignore_headers Set-Cookie Expires; }
>
> Pages sent with a cookie are NOT being cached:
> location ~* \.(css|ico|js) {
> proxy_pass http://Upstream-server;
> proxy_ignore_headers Set-Cookie Cache-Control; }
>
> nginx version: nginx/1.1.19
> Ubuntu 12.04 LTS.
>
> What am I doing wrong?

Ignoring the Cache-Control header likely results in no cache time information being available. You then have to set proxy_cache_valid for the cache to work; see http://nginx.org/r/proxy_cache_valid.

-- Maxim Dounin http://nginx.org/en/donation.html From agentzh at gmail.com Thu Sep 5 19:28:48 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 5 Sep 2013 12:28:48 -0700 Subject: Location_Capture_By_Lua and Reverse Proxy Cache In-Reply-To: <4408906a996ccf92289c3f8c69e0ede1@ruby-forum.com> References: <4408906a996ccf92289c3f8c69e0ede1@ruby-forum.com> Message-ID: Hello!

On Thu, Sep 5, 2013 at 12:45 AM, shubham s. wrote:

> Was trying to get hold of the response recieved through upstream server
> or through nginx cache . However found that thier's no cache hit when
> doing this way .
>
> for brevity just posting the relevant code : looks like for something
> out there through which cache is ignored : also the headers out of
> Passenger_backed are not returned through the conetnt_by_lua stuff.
Please check out my (detailed) reply on the openresty-en mailing list: https://groups.google.com/d/msg/openresty-en/MbegSFArHqg/Hs_aMT3YmdsJ

I won't repeat it here :)

Regards,
-agentzh
From nginx-forum at nginx.us Fri Sep 6 08:48:25 2013 From: nginx-forum at nginx.us (ixos) Date: Fri, 06 Sep 2013 04:48:25 -0400 Subject: Return file when it's in cache/check if file exists in cache In-Reply-To: <9a2ab6ec02a3dfcc03969702947e9d74.NginxMailingListEnglish@forum.nginx.org> References: <9a2ab6ec02a3dfcc03969702947e9d74.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4a016495817d2d37d30bdc32ef01c57c.NginxMailingListEnglish@forum.nginx.org>

Ok, maybe not the most beautiful solution, but an upstream can be used with one server marked as down.

http {
    upstream backend-jail {
        server 0.0.0.0 down;
    }

    server {
        listen 80;
        underscores_in_headers on;
        recursive_error_pages on;
        error_page 597 = @jail;

        location / {
            if ($http_x_backend_down = "1") {
                return 597;
            }
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 1h;
            proxy_cache_valid 404 1m;
            proxy_cache_key $uri$is_args$args;
        }

        location @jail {
            # don't need to log the error about 'no live upstreams'
            error_log /dev/null crit;
            # backend-jail always returns 502; we want 404 to be returned
            error_page 502 =404 /;
            proxy_pass http://backend-jail;
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242488,242618#msg-242618 From biot023 at gmail.com Fri Sep 6 09:45:32 2013 From: biot023 at gmail.com (doug livesey) Date: Fri, 6 Sep 2013 10:45:32 +0100 Subject: Best way to protect an http service with nginx (possibly encryption?) Message-ID: Hi -- I'm an nginx newbie who is looking to maybe use nginx to provide security for an HTTP service that previously ran in a trusted environment, but that now needs to run on the open web. I was thinking of having nginx listen on an arbitrary port, authenticate requests to the service on that port, then proxy them on to the service. I guess my first question is -- is this a correct use of nginx? My research so far suggests that it is.
And my second question is -- what is the best way to achieve this? I thought maybe encrypting a username and password as part of the request (in a cookie?) and using agentzh might be an approach, but I am rather out of my depth, and would really appreciate any tips or references to docs that might help me work it all out. Or if anyone's read a good book that covers this, that would be a very appreciated recommendation! :) Thanks very much, Doug. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Sep 6 23:42:39 2013 From: nginx-forum at nginx.us (justin) Date: Fri, 06 Sep 2013 19:42:39 -0400 Subject: Using if statements in a location block with set Message-ID: <6554ce814ee4a72270fecf9095941700.NginxMailingListEnglish@forum.nginx.org>

Is the following going to work as expected?

location /v1/users {
    rewrite ^/users/(.*)/accounts$ /v1/users/$1/accounts break;
    if ($server_name = 'js.mydomain.com') {
        set $backend "api.mydomain.com";
    }
    if ($server_name = 'js-s.mydomain.com') {
        set $backend "api-s.mydomain.com";
    }
    proxy_pass https://$backend;
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242643,242643#msg-242643 From nginx-forum at nginx.us Sat Sep 7 04:12:14 2013 From: nginx-forum at nginx.us (anco) Date: Sat, 07 Sep 2013 00:12:14 -0400 Subject: can i run nginx caching and http server on same box?(newbie) Message-ID: <73139cee83cb9d6035cc5d6dcaef68b8.NginxMailingListEnglish@forum.nginx.org> Can I run nginx caching and an http server on the same box? Can I set caching per directory and/or per server block? How can I check what is actually being cached? Logs? Should I place php scripts outside the root location? Disclosure: newbie here aiming to ask the right questions (I have lots of them; the above should get me started).
thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242644,242644#msg-242644 From nginx-forum at nginx.us Sun Sep 8 11:18:06 2013 From: nginx-forum at nginx.us (George) Date: Sun, 08 Sep 2013 07:18:06 -0400 Subject: Turn off Nginx SPDY ? Message-ID: <887b01dad20e635d6e71283bdfa05962.NginxMailingListEnglish@forum.nginx.org> I want to test non-SPDY vs SPDY performance for Nginx and I have Nginx compiled with SPDY support and it's enabled by adding to listen directive the spdy option as per http://nginx.org/en/docs/http/ngx_http_spdy_module.html. I thought that omitting the spdy option would disable SPDY temporarily ? But it seems spdycheck.org still reports the https:// site supports SPDY and browser shows site with SPDY support even with the spdy line removed from listen directive ? Or is only way to disable, is to recompile Nginx without SPDY support ? cheers Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242665,242665#msg-242665 From vbart at nginx.com Sun Sep 8 11:47:51 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 8 Sep 2013 15:47:51 +0400 Subject: Turn off Nginx SPDY ? In-Reply-To: <887b01dad20e635d6e71283bdfa05962.NginxMailingListEnglish@forum.nginx.org> References: <887b01dad20e635d6e71283bdfa05962.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201309081547.51297.vbart@nginx.com> On Sunday 08 September 2013 15:18:06 George wrote: > I want to test non-SPDY vs SPDY performance for Nginx and I have Nginx > compiled with SPDY support and it's enabled by adding to listen directive > the spdy option as per > http://nginx.org/en/docs/http/ngx_http_spdy_module.html. > > I thought that omitting the spdy option would disable SPDY temporarily ? Without the parameter on listen directive SPDY is disabled. > But it seems spdycheck.org still reports the https:// site supports SPDY > and browser shows site with SPDY support even with the spdy line removed > from listen directive ? There are only three possibilities for this: 1. 
You have not reloaded the configuration after it was changed, in which case nginx is working with the old one.
2. You have removed the parameter from only one listen directive, but there is another one. SPDY works per addr:port pair (like most listen parameters), not per server block.
3. Your browser and spdycheck.org are lying about the current status.

wbr, Valentin V. Bartenev

From nginx-forum at nginx.us Sun Sep 8 11:56:37 2013 From: nginx-forum at nginx.us (George) Date: Sun, 08 Sep 2013 07:56:37 -0400 Subject: Turn off Nginx SPDY ? In-Reply-To: <201309081547.51297.vbart@nginx.com> References: <201309081547.51297.vbart@nginx.com> Message-ID: <9d889c0b756a4390ab43b53d1f589648.NginxMailingListEnglish@forum.nginx.org> I see, I believe my problem is #2, as I have another vhost with spdy enabled on the same addr:port pairing! Thanks Valentin :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242665,242668#msg-242668 From nginx-forum at nginx.us Sun Sep 8 15:18:55 2013 From: nginx-forum at nginx.us (mex) Date: Sun, 08 Sep 2013 11:18:55 -0400 Subject: can i run nginx caching and http server on same box?(newbie) In-Reply-To: <73139cee83cb9d6035cc5d6dcaef68b8.NginxMailingListEnglish@forum.nginx.org> References: <73139cee83cb9d6035cc5d6dcaef68b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ee104b092e1d1eeb356adbac6aa007e.NginxMailingListEnglish@forum.nginx.org> the answer is yes: http://wiki.nginx.org Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242644,242670#msg-242670 From nginx-forum at nginx.us Sun Sep 8 17:50:58 2013 From: nginx-forum at nginx.us (mex) Date: Sun, 08 Sep 2013 13:50:58 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY Message-ID: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> hi list, i recently had to dig deeper into nginx + ssl-setup and came up with a short documentation on how to setup and run nginx as SSL-Gateway/Offload, including SPDY.
beside basic configuration this guide covers HSTS-Headers, Perfect Forward Secrecy (PFS) and the latest and greatest SSL-based attacks like CRIME, BEAST, and Lucky Thirteen.

Link: http://www.mare-system.de/blog/page/1378546400/

the reason for this 321st guide to nginx+ssl: i did not find any valid source that covers all aspects, including spdy and hsts, so i made this collection and will keep it updated. comments and criticism appreciated

regards,

mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242672#msg-242672 From nginx-forum at nginx.us Sun Sep 8 18:44:08 2013 From: nginx-forum at nginx.us (adambot) Date: Sun, 08 Sep 2013 14:44:08 -0400 Subject: drupal clean urls Message-ID: <06767de0f7834a764a481ebbd01397b7.NginxMailingListEnglish@forum.nginx.org>

Greetings! I have followed all the instructions in the wiki, and when I set my drupal installation as the root, everything works. However, when I move my drupal into a folder, I am able to get everything to work except clean URLs. Here is my drupal config:

location /blog {
    # This is cool because no php is touched for static content
    try_files $uri @rewrite;
}

location @rewrite {
    # You have 2 options here
    # For D7 and above:
    # Clean URLs are handled in drupal_environment_initialize().
    #rewrite ^ /blog/index.php;
    # For Drupal 6 and below:
    # Some modules enforce no slash (/) at the end of the URL
    # Else this rewrite block wouldn't be needed (GlobalRedirect)
    rewrite ^blog/(.*)$ blog/index.php?q=$1;
}

location ~ blog/.*\.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_intercept_errors on;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
}

# Fighting with Styles? This little gem is amazing.
# This is for D6
#location ~ ^/sites/.*/files/imagecache/ {
# This is for D7 and D8
location ~ ^/blog/sites/.*/files/styles/ {
    try_files $uri @rewrite;
}

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}

Here is the error I am seeing in my log:

[error] 12988#0: *1 open() "/var/www/html/blog/linux" failed (2: No such file or directory), client: 192.168.1.1, server: localhost, request: "GET /blog/linux HTTP/1.1", host: "example.com", referrer: "http://example.com/blog/"

(names have been changed to protect the innocent ;) Any help is appreciated. Thanks! Adam

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242673,242673#msg-242673 From francis at daoine.org Sun Sep 8 21:10:20 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 8 Sep 2013 22:10:20 +0100 Subject: drupal clean urls In-Reply-To: <06767de0f7834a764a481ebbd01397b7.NginxMailingListEnglish@forum.nginx.org> References: <06767de0f7834a764a481ebbd01397b7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130908211020.GE19345@craic.sysops.org> On Sun, Sep 08, 2013 at 02:44:08PM -0400, adambot wrote: Hi there, not tested, and I don't know what exactly drupal expects, but...

> location @rewrite {
> rewrite ^blog/(.*)$ blog/index.php?q=$1;

That "rewrite" line is unlikely to do anything. The uri that "rewrite" tests starts with a /, so "^blog" will never match. Perhaps replace "blog" with "/blog" twice, and see if that does what you want? Separate from that, and this is more "busy work" than "actually broken"...

> location ~ blog/.*\.php$ {
> fastcgi_split_path_info ^(.+\.php)(/.+)$;

That line probably does nothing useful. There aren't many urls that both end in ".php" and contain the string ".php/". Something like /blog/one.php/two.php *would* match -- but does your drupal use urls like that?
> Here is the error i am seeing in my log: > [error] 12988#0: *1 open() "/var/www/html/blog/linux" failed (2: No such > file or directory), client: 192.168.1.1, server: localhost, request: "GET > /blog/linux HTTP/1.1", host: "example.com", referrer: > "http://example.com/blog/" I suspect that if you turned on the debug log, you'd see what rewrites were actually used; and you could match that against what you expect nginx to do. But the blog -> /blog change may be enough to get things going. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Mon Sep 9 00:22:56 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 8 Sep 2013 17:22:56 -0700 Subject: [ANN] ngx_openresty devel version 1.4.2.5 released Message-ID: Hello folks! I am happy to announce that the new development version of ngx_openresty, 1.4.2.5, is now released: http://openresty.org/#Download Special thanks go to all the contributors for making this happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.4.2.3: * upgraded SetMiscNginxModule to 0.22. * bugfix: we did not escape "\0", "\z", "\b", and "\t" properly in set_quote_sql_str according to the MySQL quoting rules. thanks Siddon Tang for the report. * upgraded LuaNginxModule to 0.8.8. * feature: added new option "always_forward_body" to ngx.location.capture() and ngx.location.capture_multi(), which controls whether to always forward the parent request's request body to the subrequest (even when the subrequest is not of the POST or PUT request method). thanks Matthieu Tourne for the request. * feature: now timeout errors in tcpsock:receive() and tcpsock:receiveuntil() no longer automatically close the current cosocket object (for both upstream and downstream connections). thanks Aviram Cohen for the original patch. * bugfix: we did not escape "\0", "\z", "\t", and "\b" properly in ngx.quote_sql_str(). thanks Siddon Tang for the report. 
* bugfix: Lua backtrace dumps upon uncaught Lua exceptions did not work with the standard Lua 5.1 interpreter when the backtrace was deeper than 22 levels.
* change: now we just dump the top 22 levels in the backtrace for uncaught Lua exceptions for the sake of simplicity.
* change: we now limit the number of nested coroutines in the backtrace dump for uncaught Lua exceptions by 5.
* optimize: grouped the Lua string concatenation operations when constructing the backtrace string for uncaught Lua exceptions.

The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004002 We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From andrew.s.martin at gmail.com Mon Sep 9 04:01:34 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Sun, 8 Sep 2013 23:01:34 -0500 Subject: Rewrite URL to only show value of $_GET argument Message-ID: Hello, I have read through the nginx rewrite documentation and looked at various examples, but can't figure out how to create a rewrite rule for the following (if it is possible). I'd like to rewrite the URL of a php file with a $_GET argument and replace it with just the value of the $_GET argument. For example, I'd like to replace /index.php?title=my_example_title with /my_example_title or /article/my_example_title. I've tried several regular expressions to match index.php, as well as the $args and $arg_title nginx variables, but cannot get this working. For example:

rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect;

Can anyone provide insight into how to correctly rewrite this type of URL? Thanks, Andrew -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Sep 9 05:24:46 2013 From: nginx-forum at nginx.us (adambot) Date: Mon, 09 Sep 2013 01:24:46 -0400 Subject: drupal clean urls In-Reply-To: <20130908211020.GE19345@craic.sysops.org> References: <20130908211020.GE19345@craic.sysops.org> Message-ID: <2b43957810f804c8887311938a9a898e.NginxMailingListEnglish@forum.nginx.org> Changing the rewrite from blog to /blog worked perfectly -- thanks for the second set of eyes :) I'm not sure about the busy-work part -- I took most of the stuff from the nginx wiki on drupal (still learning config files; I've been running nginx for about 24 hours now). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242673,242677#msg-242677 From nginx-forum at nginx.us Mon Sep 9 06:58:50 2013 From: nginx-forum at nginx.us (mex) Date: Mon, 09 Sep 2013 02:58:50 -0400 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: Message-ID: > rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect; this doesn't work? what is $1 then in the redirected request? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242676,242678#msg-242678 From me at myconan.net Mon Sep 9 08:57:12 2013 From: me at myconan.net (edogawaconan) Date: Mon, 9 Sep 2013 17:57:12 +0900 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: Message-ID: On Mon, Sep 9, 2013 at 3:58 PM, mex wrote: >> rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect; > > this doesnt work? what is $1 then in the redirected request? > of course this won't work. The query string isn't part of the rewrite matching string. Use $arg_title instead. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From stadtpirat11 at ymail.com Mon Sep 9 09:22:50 2013 From: stadtpirat11 at ymail.com (- -) Date: Mon, 9 Sep 2013 02:22:50 -0700 (PDT) Subject: Secure permission structure for server blocks?
Message-ID: <1378718570.69789.YahooMailNeo@web140505.mail.bf1.yahoo.com> Hello everybody, I have been trying to wrap my head around this for weeks now. What is the most secure way to organise the permissions of the web root directories (WRD) for several server blocks, especially when you have PHP applications like Wordpress that download and create files in the WRD? The latter makes it difficult to control the files' owner, group and permissions. My understanding of "secure" here is the following: hijacked websites (e.g. injections in Wordpress) must not be able to read or write to any directory outside their own WRD! I am open to more security tips, but the main topic is the directory permission structure. I haven't found any solution to my problem on the web yet. Thank you Stadtpirat From reallfqq-nginx at yahoo.fr Mon Sep 9 09:29:29 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 9 Sep 2013 05:29:29 -0400 Subject: Secure permission structure for server blocks? In-Reply-To: <1378718570.69789.YahooMailNeo@web140505.mail.bf1.yahoo.com> References: <1378718570.69789.YahooMailNeo@web140505.mail.bf1.yahoo.com> Message-ID: Since the problem comes from the dynamic language PHP, you can create several pools using different user/group pairs. You could use 644 (or 640) permissions with user = PHP user on a specific directory and group = Web server group with read-only permissions. That's a raw idea of the big picture; there must be some details to check (such as verifying PHP isolation/jail/chroot inside pools). My (quick) 2 cents, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Sep 9 09:51:14 2013 From: nginx-forum at nginx.us (litux) Date: Mon, 09 Sep 2013 05:51:14 -0400 Subject: Nginx update problem Message-ID: <01380226982f16605c808797b403c43f.NginxMailingListEnglish@forum.nginx.org>

When trying to update the nginx package on debian 8.0 jessie, this is the error message:

Preparing to replace nginx 1.4.1-3 (using .../nginx_1.4.2-1~squeeze_amd64.deb) ...
Unpacking replacement nginx ...
dpkg: error processing /var/cache/apt/archives/nginx_1.4.2-1~squeeze_amd64.deb (--unpack):
 trying to overwrite '/etc/logrotate.d/nginx', which is also in package nginx-common 1.4.1-3
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/nginx_1.4.2-1~squeeze_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

These nginx packages are installed:

ii nginx 1.4.1-3 all small, powerful, scalable web/proxy server
ii nginx-common 1.4.1-3 all small, powerful, scalable web/proxy server - common files
ii nginx-full 1.4.1-3 amd64 nginx web/proxy server (standard version)

These are the apt sources for nginx:

deb http://nginx.org/packages/debian/ squeeze nginx
deb-src http://nginx.org/packages/debian/ squeeze nginx

Any ideas how to solve this problem? As I understand it, I can't update the apt sources for nginx, as these are not working.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242684,242684#msg-242684 From sb at nginx.com Mon Sep 9 12:41:53 2013 From: sb at nginx.com (Sergey Budnevitch) Date: Mon, 9 Sep 2013 16:41:53 +0400 Subject: Nginx update problem In-Reply-To: <01380226982f16605c808797b403c43f.NginxMailingListEnglish@forum.nginx.org> References: <01380226982f16605c808797b403c43f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <780F075E-6715-4B51-A92B-B372A448E766@nginx.com> On 9 Sep 2013, at 13:51 , litux wrote: > When trying to update the nginx package on debian 8.0 jessie > > This is the error message > > `Preparing to
replace nginx 1.4.1-3 (using > .../nginx_1.4.2-1~squeeze_amd64.deb) ... > Unpacking replacement nginx ... > dpkg: error processing > /var/cache/apt/archives/nginx_1.4.2-1~squeeze_amd64.deb (--unpack): > trying to overwrite '/etc/logrotate.d/nginx', which is also in package > nginx-common 1.4.1-3 > dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) > Errors were encountered while processing: > /var/cache/apt/archives/nginx_1.4.2-1~squeeze_amd64.deb > E: Sub-process /usr/bin/dpkg returned an error code (1)` > > These nginx packages are installed > > ii nginx 1.4.1-3 > all small, powerful, scalable web/proxy server > ii nginx-common 1.4.1-3 > all small, powerful, scalable web/proxy server - common files > ii nginx-full 1.4.1-3 > amd64 nginx web/proxy server (standard version) > > These are the apt sources for nginx > > deb http://nginx.org/packages/debian/ squeeze nginx > deb-src http://nginx.org/packages/debian/ squeeze nginx > > Any ideas how to solve this problem, as I understand can't upgrade the apt > sources for nginx as these are not working Debian nginx packages (nginx, nginx-common and nginx-full) conflict with our nginx package. a) nginx_1.4.2-1~squeeze_amd64.deb was not tested on debian 8.0 jessie, use it at your own risk. b) remove nginx, nginx-common and nginx-full and try to reinstall. From andrew.s.martin at gmail.com Mon Sep 9 12:51:59 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Mon, 9 Sep 2013 07:51:59 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: Message-ID: Thanks for the suggestions. I was not able to get $arg_title to work. 
Here is the relevant section of my nginx config: server_name mysite.com; try_files $uri $uri/ index.php; location / { rewrite ^/index\.php?title=(.*)$ http://mysite.com/$arg_title redirect; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } I also tried the rewrite rule inside of the "location ~ \.php$" block, but it didn't work there either. Visiting mysite.com/index.php?title=my_test_page just loads that URL, it does not redirect to mysite.com/my_test_page. Moreover, visiting mysite.com/my_test_page results in a 404. What else should I try to make this rewrite rule work? Thanks, Andrew On Mon, Sep 9, 2013 at 3:57 AM, edogawaconan wrote: > On Mon, Sep 9, 2013 at 3:58 PM, mex wrote: > >> rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect; > > > > this doesnt work? what is $1 then in the redirected request? > > > > of course this won't work. Query string isn't part of rewrite matching > string. > > Use $arg_title instead. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Mon Sep 9 13:05:18 2013 From: artemrts at ukr.net (wishmaster) Date: Mon, 09 Sep 2013 16:05:18 +0300 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: Message-ID: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> --- Original message --- From: "Andrew Martin" Date: 9 September 2013, 15:53:01 > Thanks for the suggestions. I was not able to get $arg_title to work. Here is the relevant section of my nginx config:? ? ? ? 
server_name mysite.com; > > > > > try_files $uri $uri/ index.php; > > > location / { > rewrite ^/index\.php?title=(.*)$ http://mysite.com/$arg_title redirect; > Maybe something like this: rewrite ^/index\.php.* http://mysite.com/$arg_title redirect; I think nginx "knows" about all the arguments in your request, so simply specify the needed argument's name in the second part of the rewrite rule. From andrew.s.martin at gmail.com Mon Sep 9 13:23:36 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Mon, 9 Sep 2013 08:23:36 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> Message-ID: If I use this line: rewrite ^/index\.php(.*)$ http://mysite.com/$arg_title? redirect; /index.php?title=my_test_page redirects to /my_test_page
> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aweber at comcast.net Mon Sep 9 13:53:54 2013 From: aweber at comcast.net (AJ Weber) Date: Mon, 09 Sep 2013 09:53:54 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <522DD2F2.4080904@comcast.net> This is a nice write-up. Thank you. Does anyone know why SPDY is not enabled for the default builds yet, if it's in the "stable branch"? I just tried downloading 1.4.2 (CentOS 6 x64) and it's not configured. Thanks, AJ On 9/8/2013 1:50 PM, mex wrote: > hi list, > > i recently had to dig deeper into nginx + ssl-setup and came up with a > short documentation on how to setup and run nginx as SSL-Gateway/Offload, > including SPDY. beside basic configuration this guide covers HSTS-Headers, > Perfect Forward Secrecy(PFS) and the latest and greatest ssl-based attacks > like > CRIME, BEAST, and Lucky Thirteen. > > Link: http://www.mare-system.de/blog/page/1378546400/ > > the reason for this 321th guide to nginx+ssl: i did not found any valid > source that covers all aspects, including spdy and hsts, so i made this > collection and will keep it updated. 
> > comments and critics appreciated > > > > regards, > > > mex > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242672#msg-242672 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-list at puzzled.xs4all.nl Mon Sep 9 15:10:00 2013 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Mon, 09 Sep 2013 17:10:00 +0200 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <522DD2F2.4080904@comcast.net> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> <522DD2F2.4080904@comcast.net> Message-ID: <522DE4C8.9040905@puzzled.xs4all.nl> On 09/09/2013 03:53 PM, AJ Weber wrote: > This is a nice write-up. Thank you. > > Does anyone know why SPDY is not enabled for the default builds yet, if > it's in the "stable branch"? I just tried downloading 1.4.2 (CentOS 6 > x64) and it's not configured. My guess is that's because CentOS 6 does not have the newer openssl version 1.0.1 which is required for SPDY. Regards, Patrick From vbart at nginx.com Mon Sep 9 15:10:15 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 9 Sep 2013 19:10:15 +0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <522DD2F2.4080904@comcast.net> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> <522DD2F2.4080904@comcast.net> Message-ID: <201309091910.15143.vbart@nginx.com> On Monday 09 September 2013 17:53:54 AJ Weber wrote: > This is a nice write-up. Thank you. > > Does anyone know why SPDY is not enabled for the default builds yet, if > it's in the "stable branch"? I just tried downloading 1.4.2 (CentOS 6 > x64) and it's not configured. > It requires OpenSSL 1.0.1, while CentOS 6.4 only has 1.0.0. wbr, Valentin V. 
Bartenev From aweber at comcast.net Mon Sep 9 15:18:57 2013 From: aweber at comcast.net (AJ Weber) Date: Mon, 09 Sep 2013 11:18:57 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <201309091910.15143.vbart@nginx.com> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> <522DD2F2.4080904@comcast.net> <201309091910.15143.vbart@nginx.com> Message-ID: <522DE6E1.5040900@comcast.net> Ugh. Thanks. I missed that. -AJ On 9/9/2013 11:10 AM, Valentin V. Bartenev wrote: > On Monday 09 September 2013 17:53:54 AJ Weber wrote: >> This is a nice write-up. Thank you. >> >> Does anyone know why SPDY is not enabled for the default builds yet, if >> it's in the "stable branch"? I just tried downloading 1.4.2 (CentOS 6 >> x64) and it's not configured. >> > It requires OpenSSL 1.0.1, while CentOS 6.4 only has 1.0.0. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From paulnpace at gmail.com Mon Sep 9 19:38:01 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Mon, 9 Sep 2013 12:38:01 -0700 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> Message-ID: We had a discussion on this list recently about using gzip in the SSL block. On Aug 17 Igor Sysoev wrote: >You have to split the dual mode server section into two server server sections and set "gzip off" >SSL-enabled on. There is no way to disable gzip in dual mode server section, but if you really >worry about security in general the server sections should be different. On Sun, Sep 8, 2013 at 10:50 AM, mex wrote: > hi list, > > i recently had to dig deeper into nginx + ssl-setup and came up with a > short documentation on how to setup and run nginx as SSL-Gateway/Offload, > including SPDY. 
beside basic configuration this guide covers HSTS-Headers, > Perfect Forward Secrecy(PFS) and the latest and greatest ssl-based attacks > like > CRIME, BEAST, and Lucky Thirteen. > > Link: http://www.mare-system.de/blog/page/1378546400/ > > the reason for this 321th guide to nginx+ssl: i did not found any valid > source that covers all aspects, including spdy and hsts, so i made this > collection and will keep it updated. > > comments and critics appreciated > > > > regards, > > > mex > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242672#msg-242672 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx
The figures selected to be cosplayed may be procured from any films, TV sequence, guides, comics, games or music groups, but the exercise of cosplay is often associated with copying cartoons and manga figures. ------------------------------------- There are large varieties of cosplay costumes waiting for you:http://www.cosplaydeal.com/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242706,242706#msg-242706 From lists at ruby-forum.com Tue Sep 10 07:28:09 2013 From: lists at ruby-forum.com (helluvanag ..) Date: Tue, 10 Sep 2013 09:28:09 +0200 Subject: How to solve the problem of "405 not allowed"? In-Reply-To: References: Message-ID: <8c54a720c75d6b651692e1cb731773b2@ruby-forum.com> Hi, In all the above posts a code snippet has been given to rectify the 405 error. But i wondering where exactly that code snippet has to be added(i mean in which file of the server box). If anyone can explain a bit elaborately, i would be grateful. Thanks and Regards, Nagender -- Posted via http://www.ruby-forum.com/. From Matthias.Sidler at gibb.ch Tue Sep 10 09:34:01 2013 From: Matthias.Sidler at gibb.ch (Matthias Sidler) Date: Tue, 10 Sep 2013 11:34:01 +0200 Subject: LDAP Auth only for external addresses Message-ID: <391001B1116D5D479774E418A871DEF40E1C38816D@spieu0446.gibb.int> Hi, I configured a nginx 1.5.4 server with the ngnix-auth-ldap module. It works all fine, but im looking for an option to distinguish the client networks. My goal is, that users from the network 10.*.*.* and 172.*.*.* don't have to authenticate as all the others. Is that possible ? Thanks in advance. -- Mat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Sep 10 11:21:11 2013 From: nginx-forum at nginx.us (cobain86) Date: Tue, 10 Sep 2013 07:21:11 -0400 Subject: upstart after nginx update not working Message-ID: hi i've created the upstart job with the following config

# nginx
description "nginx http daemon"
author "George Shammas "
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
env DAEMON=/usr/sbin/nginx
env PID=/var/run/nginx.pid
expect fork
respawn
respawn limit 10 5
#oom never
pre-start script
    $DAEMON -t
    if [ $? -ne 0 ]
    then exit $?
    fi
end script
exec $DAEMON

And that's how i made the nginx update:

#Basically it boils down to the following:
#Get PID of old master process by inspecting output of ps:
ps auxwww | grep nginx
#Now, install the new binary:
wget http://nginx.org/download/nginx-.tar.gz
tar -xzf nginx-.tar.gz
cd nginx-
./configure \
    --sbin-path=/usr/sbin/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --pid-path=/var/run/nginx.pid \
    --lock-path=/var/lock/subsys/nginx \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_ssl_module \
    --with-http_stub_status_module \
    --with-http_geoip_module \
    --http-client-body-temp-path=/var/cache/nginx/client_body_temp \
    --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
    --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
make
sudo make install
#Advise old master process to start a new master process using the updated binary:
sudo kill -s USR2 2941
#Gracefully shut down old worker processes:
sudo kill -s WINCH 2941
#Gracefully exit old master process:
sudo kill -s QUIT 2941

So after the update, upstart isn't working anymore for nginx. I have to stop the service manually and start it again.
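(A likely explanation, not from the original mail: with "expect fork", upstart keeps following the pid of the master process it originally started. The USR2/WINCH/QUIT sequence replaces that master with a brand-new process, so once the old master exits on QUIT, upstart considers the job dead even though the new nginx is still serving. nginx's pid-file bookkeeping during the upgrade can be simulated without a real nginx; the pids and the directory below are made up:)

```shell
# Simulate nginx's pid-file handling during a USR2 binary upgrade.
demo=/tmp/nginx-upgrade-demo
mkdir -p "$demo"

echo 2941 > "$demo/nginx.pid"      # old master: the pid upstart tracks

# On USR2, the old master renames its pid file to nginx.pid.oldbin
# and the new master writes a fresh nginx.pid:
mv "$demo/nginx.pid" "$demo/nginx.pid.oldbin"
echo 3105 > "$demo/nginx.pid"      # new master, unknown to upstart

# After WINCH/QUIT, pid 2941 exits; upstart sees its tracked process
# disappear and marks the job as stopped.
cat "$demo/nginx.pid"              # prints 3105
```

In other words, restarting the upstart job (rather than upgrading the binary on the fly) is effectively what you end up doing by hand.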
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242719,242719#msg-242719 From nginx-forum at nginx.us Tue Sep 10 11:32:31 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 10 Sep 2013 07:32:31 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: References: Message-ID: hi, thanx everybody for comments. a guide on howto nginx + authorization via client certs will be included in the next version of this document. i'll investigate that gzip-comment, but from what i read so far: http-compression even in https is ok, while ssl/tls-compression is not; i'll include any findings and solutions, but i'm not finished with that yet. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242721#msg-242721 From nginx-forum at nginx.us Tue Sep 10 15:59:27 2013 From: nginx-forum at nginx.us (mirceapreotu) Date: Tue, 10 Sep 2013 11:59:27 -0400 Subject: How to process Facebook signed request to determine proxy target? In-Reply-To: <86587b038728bf0418d9dd1f7b9917f6.NginxMailingListEnglish@forum.nginx.org> References: <86587b038728bf0418d9dd1f7b9917f6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Ralf, I'm currently working on similar functionality. Have you managed to find a solution? Thanks, Mircea Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222397,242728#msg-242728 From francis at daoine.org Tue Sep 10 16:46:57 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Sep 2013 17:46:57 +0100 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> Message-ID: <20130910164657.GF19345@craic.sysops.org> On Mon, Sep 09, 2013 at 08:23:36AM -0500, Andrew Martin wrote: Hi there, > If I use this line: > rewrite ^/index\.php(.*)$ http://mysite.com/$arg_title?
redirect; > > /index.php?title=my_test_page redirects to /my_test_page That's what you asked for initially; I'd probably spell it as location = /index.php { return 302 http://mysite.com/$arg_title; } to make it clear what exactly is happening, and which might point out the parts you didn't specify: what should happen if I ask for any of /index.php?something=else /index.php?title=my_test_page&something=other /index.php ? With the above code, the second will possibly redirect the way you want, and the others probably won't. Also, what should happen when I ask for /my_test_page ? I will do that immediately after you redirect me there. If you can describe the complete behaviour you want, the nginx configuration needed to achieve it may become clear. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 10 16:54:51 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Sep 2013 17:54:51 +0100 Subject: LDAP Auth only for external addresses In-Reply-To: <391001B1116D5D479774E418A871DEF40E1C38816D@spieu0446.gibb.int> References: <391001B1116D5D479774E418A871DEF40E1C38816D@spieu0446.gibb.int> Message-ID: <20130910165451.GG19345@craic.sysops.org> On Tue, Sep 10, 2013 at 11:34:01AM +0200, Matthias Sidler wrote: Hi there, Untested, but... > I configured a nginx 1.5.4 server with the ngnix-auth-ldap module. It works all fine, but im looking for an option to distinguish the client networks. My goal is, that users from the network 10.*.*.* and 172.*.*.* don't have to authenticate as all the others. > Is that possible ? I'd expect "satisfy any" to be involved in the solution -- http://nginx.org/r/satisfy Do the notes at https://github.com/kvspb/nginx-auth-ldap/issues/7 help at all? I don't know which specific versions of code you're using; possibly newer versions don't need any workarounds. 
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 10 17:12:32 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 10 Sep 2013 18:12:32 +0100 Subject: Secure permission structure for server blocks? In-Reply-To: <1378718570.69789.YahooMailNeo@web140505.mail.bf1.yahoo.com> References: <1378718570.69789.YahooMailNeo@web140505.mail.bf1.yahoo.com> Message-ID: <20130910171232.GH19345@craic.sysops.org> On Mon, Sep 09, 2013 at 02:22:50AM -0700, - - wrote: Hi there, > I am trying to wrap my head around this for weeks now. What is the most secure way to organise the permissions of the web root directories (WRD) for several server blocks. Especially when you have PHP applications like Wordpress that download and create files in the WRD? Latter makes it difficult to control the file's owner, group and permissions. > nginx doesn't "do" php. Once you accept that, then the model you have for securing things may become clearer. The "nginx" user and the "php" user are completely separate, unless you choose to make them not be. One nginx runs as one user, and accesses the files it is told to. It needs to be able to read files in the web root directory that it serves directly. It does not (in general) know or care what php is, or where the files that the php server (typically, a fastcgi server) reads are. So your php-running user (configured in the fastcgi server) should be able to read the files it needs; and if you want it to write anything, it needs to be able to write those things. How you configure that is not nginx's concern. If your fastcgi server writes files that nginx must later serve, then nginx will need to be able to read them. If it doesn't, then nginx doesn't need to care. > For as "secure" is the following in my understanding: Hijacked websites (e.g. injections in Wordpress) must not be able to read or write do any other directory outside it's own WRD! 
I am open for more security tips, but the main topic is about directory permission structure. > So - let the php user not be able to write any file that php will read; but do let it be able to write files that nginx will serve. That should stop any php-side injections. If you also want to stop any javascript injections, you'll want to prevent the php user from writing any files that nginx will serve (and even that will probably not be enough). If you care about multiple web root directories or multiple name-based servers, then use multiple php users, and let each one only write to the appropriate places which nginx will read. (You can also use multiple nginx instances, each running as a different user, if you want to use that to also restrict what files can be served directly.) > I haven't found any solution to my problem in the web, yet. If you can write down what exactly you are trying to prevent, and what you are not trying to prevent, then the solution may become clear -- or it may become clear that it is not possible. But any uncertainty or lack of clarity in the requirements will make it hard to confirm that the proposed solution is adequate. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Sep 10 19:25:55 2013 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 10 Sep 2013 15:25:55 -0400 Subject: Transforming nginx for Windows In-Reply-To: <80701b53b3b2dd2453113cff08b409f2.NginxMailingListEnglish@forum.nginx.org> References: <201309020314.55179.vbart@nginx.com> <2368f0fcbc55c77ba27d61b5fb2155b6.NginxMailingListEnglish@forum.nginx.org> <80701b53b3b2dd2453113cff08b409f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4286164b770487944013346101e4b263.NginxMailingListEnglish@forum.nginx.org> 10:27 10-9-2013: B02 build Based on nginx 1.4.2 with; pcre-8.32 zlib-1.2.8 openssl-1.0.1e + Compiled with: FD_SETSIZE = 16384 (original Windows source files modified) + Now capable to handle C250K ! 
(with optimization registry file) + Added Windows optimization registry file, check your current values BEFORE setting the new ones + Added debug symbols file (let us know where it went wrong when you have a crash) + Added adjusted nginx(-win).conf for Windows + Added SPDY * Runs on Windows XP SP3 or higher, both 32 and 64 bit * Set priority to High for both nginx.exe processes * When nginx is running as a service: My computer -> Properties -> Advanced -> Performance -> Advanced -> Processor scheduling, Adjust for best performance set to background services * Website created for easy download: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242735#msg-242735 From andrew.s.martin at gmail.com Wed Sep 11 02:07:46 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Tue, 10 Sep 2013 21:07:46 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: <20130910164657.GF19345@craic.sysops.org> References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> <20130910164657.GF19345@craic.sysops.org> Message-ID: Francis, On Tue, Sep 10, 2013 at 11:46 AM, Francis Daly wrote: > On Mon, Sep 09, 2013 at 08:23:36AM -0500, Andrew Martin wrote: > > Hi there, > > > If I use this line: > > rewrite ^/index\.php(.*)$ http://mysite.com/$arg_title? redirect; > > > > /index.php?title=my_test_page redirects to /my_test_page > > That's what you asked for initially; I'd probably spell it as > > location = /index.php { > return 302 http://mysite.com/$arg_title; > } > > to make it clear what exactly is happening, and which might point out > the parts you didn't specify: > > what should happen if I ask for any of > > /index.php?something=else > /index.php?title=my_test_page&something=other > /index.php > > ? With the above code, the second will possibly redirect the way you want, > and the others probably won't. > Would it be possible to only redirect if the title $_GET variable is present? 
> > Also, what should happen when I ask for > > /my_test_page > > ? I will do that immediately after you redirect me there. > > If you can describe the complete behaviour you want, the nginx > configuration needed to achieve it may become clear. > Thanks for clarifying this. The complete behavior I'm looking for is just to create SEF URLs for pages on the site by hiding the index.php?title= part of the URL. Thus, visiting /my_test_page in your browser would internally load the index.php?title=my_test_page URL but display the SEF URL to the user. How can I achieve this behavior? Thanks, Andrew > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From etienne.champetier at free.fr Wed Sep 11 05:22:12 2013 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Wed, 11 Sep 2013 07:22:12 +0200 (CEST) Subject: goto @loc; / return @loc; In-Reply-To: <1395058318.505971583.1378875304237.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <489921999.506042684.1378876932565.JavaMail.root@zimbra65-e11.priv.proxad.net> Hi Here is a simplified example of my nginx config:

location /loc1 {
    location ~ \.(css|js)$ {
        try_files $uri @loc1;
    }
    return 599;
    error_page 599 = @loc1;
}

location @loc1 {
    fastcgi_index index.aspx; # default page
    fastcgi_pass unix:/tmp/loc1.sock;
}

The idea is that I have over 20 locations for 20 fastcgi applications. I want the .css & .js files to be served by nginx if they exist, else let the fastcgi app handle the request. The named location can be more complicated (i want to avoid copy/paste). First, is my config ok (it's working, but is it "best practice")? Second, what do you think of a new "goto @loc1" or "return @loc1" directive? The return / error_page combo is a bit of a hack.
I'm willing to write a patch if devs think it's a good/valid use case. Thanks in advance From Matthias.Sidler at gibb.ch Wed Sep 11 07:20:58 2013 From: Matthias.Sidler at gibb.ch (Matthias Sidler) Date: Wed, 11 Sep 2013 09:20:58 +0200 Subject: AW: LDAP Auth only for external addresses In-Reply-To: <20130910165451.GG19345@craic.sysops.org> References: <391001B1116D5D479774E418A871DEF40E1C38816D@spieu0446.gibb.int> <20130910165451.GG19345@craic.sysops.org> Message-ID: <391001B1116D5D479774E418A871DEF40E1C3882EB@spieu0446.gibb.int> Thanks a lot! That works for me:

[...]
location / {
    satisfy any;

    auth_ldap "Forbidden";
    auth_ldap_servers myldap;
    auth_basic "Forbidden";

    allow 10.0.0.0/8;
    allow 172.0.0.0/8;
    deny all;
}
[...]

--- Mat From francis at daoine.org Wed Sep 11 12:43:11 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Sep 2013 13:43:11 +0100 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> <20130910164657.GF19345@craic.sysops.org> Message-ID: <20130911124311.GI19345@craic.sysops.org> On Tue, Sep 10, 2013 at 09:07:46PM -0500, Andrew Martin wrote: Hi there, > Would it be possible to only redirect if the title $_GET variable is > present? Yes. Use something like

    if ($arg_title != "") {
        return 302 http://mysite.com/$arg_title;
    }

inside the "location = /index.php" block, and then continue with whatever should happen if $arg_title is empty. > Thanks for clarifying this. The complete behavior I'm looking for is just to > create SEF URLs for pages on the site by hiding the index.php?title= part > of the URL. Thus, visiting /my_test_page in your browser would internally > load the index.php?title=my_test_page URL but display the SEF URL to > the user. How can I achieve this behavior?
I suspect that "try_files $uri /$uri /index.php;" might be enough for what you ask for here; if it isn't, then a description of what you do, what you see, and what you expect to see, will probably make it easier to understand where the problem is. (Your fastcgi-related configuration, and the php code itself, will determine whether it is enough.) If you search for nginx + your-php-application, do you see any documentation on how to create SEF URLs? It may be easier than me guessing here. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Sep 11 12:46:01 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Sep 2013 13:46:01 +0100 Subject: LDAP Auth only for external addresses In-Reply-To: <391001B1116D5D479774E418A871DEF40E1C3882EB@spieu0446.gibb.int> References: <391001B1116D5D479774E418A871DEF40E1C38816D@spieu0446.gibb.int> <20130910165451.GG19345@craic.sysops.org> <391001B1116D5D479774E418A871DEF40E1C3882EB@spieu0446.gibb.int> Message-ID: <20130911124601.GJ19345@craic.sysops.org> On Wed, Sep 11, 2013 at 09:20:58AM +0200, Matthias Sidler wrote: Hi there, > That works for me: Good to hear. Having the answer included on the list like this should also help the next person with the same issue. 
Cheers, f -- Francis Daly francis at daoine.org From andrew.s.martin at gmail.com Wed Sep 11 13:32:09 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Wed, 11 Sep 2013 08:32:09 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: <20130911124311.GI19345@craic.sysops.org> References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> <20130910164657.GF19345@craic.sysops.org> <20130911124311.GI19345@craic.sysops.org> Message-ID: Francis, Using a similar statement, "try_files $uri $uri/ /index.php;", if I visit this URL: http://mysite.com/index.php?title=my_test_page then the URL is rewritten to this, but it just loads the contents of index.php (without the title variable): http://mysite.com/my_test_page What it shows would be equivalent to going to this page: http://mysite.com/index.php The part of my nginx configuration which communicates with php is:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}

The php code is a custom page, not a pre-built CMS. It is doing an ajax call to load the content, but should be functionally-equivalent to this:

<?php
if (isset($_GET['title'])) {
    include($_GET['title'] . ".html");
} else {
    include("home.html");
}
?>

If I go to this page: http://mysite.com/index.php?title=my_test_page I would like the client's browser to instead show this URL: http://mysite.com/my_test_page Does this help clarify what I am looking for? Thanks, Andrew On Wed, Sep 11, 2013 at 7:43 AM, Francis Daly wrote: > On Tue, Sep 10, 2013 at 09:07:46PM -0500, Andrew Martin wrote: > Hi there, > > Would it be possible to only redirect if the title $_GET variable is > present? > > Yes. > > Use something like > > if ($arg_title != "") { > return 302 http://mysite.com/$arg_title; > } > > inside the "location = /index.php" block, and then continue with whatever > should happen if $arg_title is empty. > > > Thanks for clarifying this.
The complete behavior I'm looking for is > just to > > create SEF URLs for pages on the site by hiding the index.php?title= part > > of the URL. Thus, visiting /my_test_page in your browser would internally > > load the index.php?title=my_test_page URL but display the SEF URL to > > the user. How can I achieve this behavior? > > I suspect that "try_files $uri /$uri /index.php;" might be enough for > what you ask for here; if it isn't, then a description of what you do, > what you see, and what you expect to see, will probably make it easier > to understand where the problem is. (Your fastcgi-related configuration, > and the php code itself, will determine whether it is enough.) > > If you search for nginx + your-php-application, do you see any > documentation on how to create SEF URLs? It may be easier than me > guessing here. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Wed Sep 11 13:42:46 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Wed, 11 Sep 2013 17:42:46 +0400 Subject: Transforming nginx for Windows In-Reply-To: <4286164b770487944013346101e4b263.NginxMailingListEnglish@forum.nginx.org> References: <201309020314.55179.vbart@nginx.com> <2368f0fcbc55c77ba27d61b5fb2155b6.NginxMailingListEnglish@forum.nginx.org> <80701b53b3b2dd2453113cff08b409f2.NginxMailingListEnglish@forum.nginx.org> <4286164b770487944013346101e4b263.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> On Sep 10, 2013, at 11:25 PM, itpp2012 wrote: > 10:27 10-9-2013: B02 build > > Based on nginx 1.4.2 with; > pcre-8.32 > zlib-1.2.8 > openssl-1.0.1e > + Compiled with: FD_SETSIZE = 16384 (original Windows source files > modified) > + Now capable to handle C250K ! 
(with optimization registry file) > + Added Windows optimization registry file, check your current values BEFORE > setting the new ones > + Added debug symbols file (let us know where it went wrong when you have a > crash) > + Added adjusted nginx(-win).conf for Windows > + Added SPDY > * Runs on Windows XP SP3 or higher, both 32 and 64 bit > * Set priority to High for both nginx.exe processes > * When nginx is running as a service: My computer -> Properties -> Advanced > -> Performance -> > Advanced -> Processor scheduling, Adjust for best performance set to > background services > * Website created for easy download: http://nginx-win.ecsds.eu/ Just checking if you have any patches against nginx-1.4 or nginx-1.5 to share? Thanks! > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242735#msg-242735 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Wed Sep 11 14:31:17 2013 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Sep 2013 15:31:17 +0100 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> <20130910164657.GF19345@craic.sysops.org> <20130911124311.GI19345@craic.sysops.org> Message-ID: <20130911143117.GL19345@craic.sysops.org> On Wed, Sep 11, 2013 at 08:32:09AM -0500, Andrew Martin wrote: Hi there, > Using the similar statement "try_files $uri $uri/ /index.php;", if I visit > this URL: > http://mysite.com/index.php?title=my_test_page > then the URL is rewritten to this, but it just loads the contents of > index.php (without the title variable): When you say "the contents of", you mean "the unprocessed php", yes? In nginx, one request is handled in one location. You must put all of the configuration that you wish to apply to a request, in the one location that handles that request. 
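(To make the point concrete with a pair of blocks — an exact-match location always wins over a regex one for its URI, so the regex block's settings never apply to /index.php. Illustrative sketch only, reusing the addresses from this thread:)

```nginx
location = /index.php {
    # exact match: this block alone handles requests for /index.php,
    # so any fastcgi handling it needs must be configured right here
    return 302 http://mysite.com/$arg_title;
}

location ~ \.php$ {
    # never consulted for /index.php while the exact match above exists
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```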
This means that if you have a "location = /index.php", then if the request is for /index.php, your "location ~ php" will not be used. In this case, can I suggest that you use a slightly different approach, to keep separate things separate? Something like (untested):

try_files $uri $uri/ @fallback;

location = /index.php {
    # the browser requested /index.php
    if ($arg_title != "") {
        return 302 http://mysite.com/$arg_title;
    }
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}

location @fallback {
    # the browser requested something that is not on the nginx filesystem
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_param QUERY_STRING title=$uri;
    include fastcgi_params;
}

but there will be rough edges there -- you may want the "include" line at the start of the @fallback stanza rather than the end, and you probably will need to tweak the QUERY_STRING param passed (to remove the leading / on $uri, most likely). You can test to find what exactly is needed. Enabling the debug log until you are certain that you understand what nginx is doing will probably be helpful.

> The php code is a custom page, not a pre-built CMS. It is doing an ajax
> call to load the content, but should be functionally-equivalent to this:
>
> <?php
> if (isset($_GET['title'])) {
>     include($_GET['title'] . ".html");
> } else {
>     include("home.html");
> }
> ?>

Can I suggest that you test initially with that exact code, rather than something that should be functionally equivalent? Keep it simple, so that you can see at what point in the sequence things first fail. (Even better: just do something like "print $_GET['title']", so you can see exactly what you received. After that works right, add the complexity.) If "ajax" means "makes further http requests of nginx", then you'll need to make sure that the first one works before trying the subsequent ones.
> If I go to this page: > http://mysite.com/index.php?title=my_test_page > I would like the client's browser to instead show this URL: > http://mysite.com/my_test_page The "return" should do that. What you now also want is that, when you go to http://mysite.com/my_test_page, nginx knows to tell the fastcgi server to process the index.php page with certain arguments. For that, you must configure nginx correctly. Keep a very clear picture of how the browser, nginx, and the fastcgi/php server communicate, and you'll be able to work out where things are not doing what you expect them to do; and then you may be able to see what to change, to get them to do what you want. > Does this help clarify what I am looking for? Building your own php framework, and making it work with nginx. If you search for how every other framework does this, you may find useful hints as to how yours will work. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Sep 11 17:22:37 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Sep 2013 13:22:37 -0400 Subject: Transforming nginx for Windows In-Reply-To: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> References: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> Message-ID: <020a1514ec8b1fe773557616b307c2a2.NginxMailingListEnglish@forum.nginx.org> nginxorg Wrote: ------------------------------------------------------- > Just checking if you have any patches against nginx-1.4 or nginx-1.5 > to share? > > Thanks! When the outstanding issues have been resolved all code will flow back into the community; my target is a Windows nginx version that can compile, perform and scale the same as the linux version. I've just set up a budget for some quality c++ programmers since my c++ knowledge is not enough to tackle everything. As asked before, I would appreciate a technical explanation of why the windows issues are what they are, as this would speed up working out a solution.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,242761#msg-242761 From nginx-forum at nginx.us Thu Sep 12 10:07:05 2013 From: nginx-forum at nginx.us (dfumagalli) Date: Thu, 12 Sep 2013 06:07:05 -0400 Subject: Compressed page streams detection and optimization Message-ID: Hello, I have a website serving a number of different PHP-based applications. Some of them natively serve their own gzipped pages, others only serve compressed HTML but uncompressed .css and .js files, others don't compress anything. So far I have a per /location/ Nginx gzip configuration. Gzip is globally disabled and is explicitly enabled per location. The apps I need or want to be compressed by Nginx simply have this: location /blah/ { include my_gzip_custom.conf; } That conf file works great for all but one web app that serves a weird mix of compressed and uncompressed pages / related files. What happens if Nginx gets served by PHP-FPM a compressed page and maybe 1 compressed .css file + 5 uncompressed .css files and 6 .js? Does it "spend time and resources" re-compressing the already compressed streams? Does it wrap the gzipped compressed streams inside its own compressed streams? (I admit my ignorance about whether nested compressed streams are even supported by web standards). Or does it - hopefully - detect an incoming stream "magic numbers" or headers and transparently skips re-compression / wrapping? That is, may I get an efficient output where Nginx only compresses the specific page-included files (.css etc) that require it and passes through the already compressed contents? If not, is it possible to implement such a mechanism by means of configuration, and how? Thanks in advance.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242802,242802#msg-242802 From nginx-forum at nginx.us Thu Sep 12 11:07:29 2013 From: nginx-forum at nginx.us (ochronus) Date: Thu, 12 Sep 2013 07:07:29 -0400 Subject: Reverse proxy deleting ETag header from response Message-ID: <8d58da074a90218a434ccf2c9a7dee44.NginxMailingListEnglish@forum.nginx.org> I have a simple config that proxies to/from a django app:

upstream django_app {
    server 127.0.0.1:4567;
}

server {
    listen 80;
    server_name xxxxxxxxx;
    location / {
        proxy_pass_header Set-Cookie;
        proxy_pass_header ETag;
        proxy_pass http://django_app;
    }
}

My problem is that nginx deletes the ETag header from the response even if I specify proxy_pass_header ETag. The upstream server does return the correct headers:

curl -v http://127.0.0.1:4567/
* About to connect() to 127.0.0.1 port 4567 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:4567
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn/0.17.2
< Date: Thu, 12 Sep 2013 11:06:18 GMT
< Connection: close
< Transfer-Encoding: chunked
< ETag: 495e0dc4-923c-4a99-8957-e6bbbc89cf5a
< Content-Type: text/html; charset=utf-8
< Cache-Control: Cache-Control:private, must-revalidate, proxy-revalidate

Any ideas? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242807,242807#msg-242807 From mdounin at mdounin.ru Thu Sep 12 11:54:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Sep 2013 15:54:13 +0400 Subject: Compressed page streams detection and optimization In-Reply-To: References: Message-ID: <20130912115413.GS20921@mdounin.ru> Hello! On Thu, Sep 12, 2013 at 06:07:05AM -0400, dfumagalli wrote: [...] > What happens if Nginx gets served by PHP-FPM a compressed page and maybe 1 > compressed .css file + 5 uncompressed .css files and 6 .js? [...]
> Or does it - hopefully - detect an incoming stream "magic numbers" or > headers and transparently skips re-compression / wrapping? Yes, it checks if a response is already compressed based on the Content-Encoding header, and only compresses the response if it's not. It's usually enough to just configure/enable gzip at http{} or server{} level. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Sep 12 11:58:59 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Sep 2013 15:58:59 +0400 Subject: Reverse proxy deleting ETag header from response In-Reply-To: <8d58da074a90218a434ccf2c9a7dee44.NginxMailingListEnglish@forum.nginx.org> References: <8d58da074a90218a434ccf2c9a7dee44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130912115859.GT20921@mdounin.ru> Hello! On Thu, Sep 12, 2013 at 07:07:29AM -0400, ochronus wrote: [...] > My problem is that nginx deletes the ETag header from the response even if I > specify proxy_pass_header ETag. > The upstream server does return the correct headers: The ETag header is removed if nginx changes a response returned. That is, if you have gzip, gunzip, sub, addition, ssi or xslt filters applied to responses returned. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Sep 12 12:07:00 2013 From: nginx-forum at nginx.us (ochronus) Date: Thu, 12 Sep 2013 08:07:00 -0400 Subject: Reverse proxy deleting ETag header from response In-Reply-To: <20130912115859.GT20921@mdounin.ru> References: <20130912115859.GT20921@mdounin.ru> Message-ID: <312c932ca6ea752ed35073b8d8aae23c.NginxMailingListEnglish@forum.nginx.org> Thanks a lot for the quick response! Indeed, I had gzip turned on globally. This is still strange though, is there no way to pass the Etag header from the upstream in this case? 
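(An illustrative workaround, not from the original thread: since it is the gzip filter modifying the response body that makes nginx drop the header, disabling gzip for the proxied location lets the body pass through unmodified — at the cost of compression. Untested sketch, reusing the upstream name from the config above:)

```nginx
location / {
    gzip off;                      # body passes through unmodified,
    proxy_pass http://django_app;  # so the upstream ETag is kept
}
```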
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242807,242810#msg-242810 From mdounin at mdounin.ru Thu Sep 12 12:46:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Sep 2013 16:46:01 +0400 Subject: Reverse proxy deleting ETag header from response In-Reply-To: <312c932ca6ea752ed35073b8d8aae23c.NginxMailingListEnglish@forum.nginx.org> References: <20130912115859.GT20921@mdounin.ru> <312c932ca6ea752ed35073b8d8aae23c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130912124601.GU20921@mdounin.ru> Hello! On Thu, Sep 12, 2013 at 08:07:00AM -0400, ochronus wrote: > Thanks a lot for the quick response! > Indeed, I had gzip turned on globally. > This is still strange though, is there no way to pass the Etag header from > the upstream in this case? The ETag header used in its strong form must be changed whenever the bits of an entity change. This basically means that ETag headers have to be removed by filters which change the response content. Weak ETags may be preserved, though they are not supported by nginx yet, mostly because Last-Modified is a good enough alternative to weak ETags. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Sep 12 13:06:03 2013 From: nginx-forum at nginx.us (ochronus) Date: Thu, 12 Sep 2013 09:06:03 -0400 Subject: Reverse proxy deleting ETag header from response In-Reply-To: <20130912124601.GU20921@mdounin.ru> References: <20130912124601.GU20921@mdounin.ru> Message-ID: <6d65acea6ec91fe61a710526152c8acb.NginxMailingListEnglish@forum.nginx.org> Thank you again for your exhaustive answer.
I think I won't drop HAProxy so soon ;) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242807,242812#msg-242812 From nginx-forum at nginx.us Thu Sep 12 14:35:37 2013 From: nginx-forum at nginx.us (dfumagalli) Date: Thu, 12 Sep 2013 10:35:37 -0400 Subject: Compressed page streams detection and optimization In-Reply-To: <20130912115413.GS20921@mdounin.ru> References: <20130912115413.GS20921@mdounin.ru> Message-ID: Thank you very much! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242802,242813#msg-242813 From contact at jpluscplusm.com Thu Sep 12 15:23:48 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 12 Sep 2013 16:23:48 +0100 Subject: Reverse proxy deleting ETag header from response In-Reply-To: <6d65acea6ec91fe61a710526152c8acb.NginxMailingListEnglish@forum.nginx.org> References: <20130912124601.GU20921@mdounin.ru> <6d65acea6ec91fe61a710526152c8acb.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 12 Sep 2013 14:06, "ochronus" wrote: > > Thank you again for your exhaustive answer. > I think I won't drop HAProxy so soon ;) As an aside, I'm pretty sure HAP doesn't do /anything/ meaningful with etags as it doesn't examine response bodies' content, nor does it cache. Very happy to be told otherwise, but you may merely be seeing unintended consequences if HAP seems to be doing something more appropriate with your upstream's replies. HTH, J -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Sep 12 16:24:38 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 12 Sep 2013 12:24:38 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <36a80d4442eaf2387ac139c25396d951.NginxMailingListEnglish@forum.nginx.org> Updates: - SSL Client Authentication - BREACH - incorporated suggestions from the list http://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/ regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242815#msg-242815 From vbart at nginx.com Thu Sep 12 16:58:31 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Sep 2013 20:58:31 +0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <36a80d4442eaf2387ac139c25396d951.NginxMailingListEnglish@forum.nginx.org> References: <174e4940ddd543bf94dea50a97a7a1df.NginxMailingListEnglish@forum.nginx.org> <36a80d4442eaf2387ac139c25396d951.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201309122058.31530.vbart@nginx.com> On Thursday 12 September 2013 20:24:38 mex wrote: > Updates: > > - SSL Client Authentication > - BREACH > - incorporated suggestions from the list > > > http://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/ > In your section about BREACH requirements, the item "- User-Data transfered via GET/POST-parameters" is actually a wrong statement. The right one is: "- Reflect user-input in HTTP response bodies" (from breachattack.com) wbr, Valentin V.
Bartenev From nginx-forum at nginx.us Thu Sep 12 18:36:55 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 12 Sep 2013 14:36:55 -0400 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: <201309122058.31530.vbart@nginx.com> References: <201309122058.31530.vbart@nginx.com> Message-ID: Hi Valentin, > > In your section about BREACH requirements: > correct(ed) thanx mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242818#msg-242818 From nginx-forum at nginx.us Thu Sep 12 20:53:36 2013 From: nginx-forum at nginx.us (mpnally) Date: Thu, 12 Sep 2013 16:53:36 -0400 Subject: Problem with double slashes in urls with uwsgi and sockets Message-ID: I have python app running under uwsgi with nginx. I'm using unix sockets to connect nginx to uwsgi. My python app is receiving single slashes in urls that originally had a double slash. For example, if the url is http://localhost:3000/a//b , my program gets passed a path_info with a/b instead of a//b. My location definition looks like this: location / { include uwsgi_params; uwsgi_pass unix:///tmp/siteserver.socket; } Your help greatly appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242823,242823#msg-242823 From francis at daoine.org Thu Sep 12 23:23:15 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Sep 2013 00:23:15 +0100 Subject: Problem with double slashes in urls with uwsgi and sockets In-Reply-To: References: Message-ID: <20130912232315.GO19345@craic.sysops.org> On Thu, Sep 12, 2013 at 04:53:36PM -0400, mpnally wrote: Hi there, > I have python app running under uwsgi with nginx. I'm using unix sockets to > connect nginx to uwsgi. My python app is receiving single slashes in urls > that originally had a double slash. For example, if the url is > http://localhost:3000/a//b , my program gets passed a path_info with a/b > instead of a//b. 
uwsgi_params (probably) contains the line uwsgi_param PATH_INFO $document_uri; $document_uri is described at http://nginx.org/en/docs/http/ngx_http_core_module.html#variables Following descriptions and links from there will bring you to the merge_slashes directive. http://nginx.org/r/merge_slashes Set that to "off" if you want contiguous slashes in the request not to become a single slash in this server{} block. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Sep 13 12:48:55 2013 From: nginx-forum at nginx.us (aldem) Date: Fri, 13 Sep 2013 08:48:55 -0400 Subject: query part included in location match after rewrite (bug or feature?) Message-ID: <2d47d3048b079871bb12fe1e4e96dc9e.NginxMailingListEnglish@forum.nginx.org> Hi, According to the documentation, "location" matches only against the URI path, ignoring the query string. However, after "rewrite", when using variables containing the "?" character (like $request_uri for illustration), the query becomes part of $uri:

location /src/ {
    rewrite ^ /dst$request_uri;
}

location /dst/ {
    # At this point, $uri contains the query part from /src, like /dst/src/?arg=val
    add_header Content-Type text/plain;
    return 200 "$request_uri $uri $args";
}

For a request like "/src/?arg=val" the output will be: /src/?arg=val /dst/src/?arg=val arg=val Thus, $uri (and the matched part) contains the query, and $args *also* contains the query (inherited from the original request). Altogether, this may lead to quite unexpected results in some configurations. So, my question is - is this expected behavior (just undocumented) or a bug? To me it looks like a bug - allowing anything past "?" to be matched in location and to become part of $uri. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242829,242829#msg-242829 From francis at daoine.org Fri Sep 13 13:19:36 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Sep 2013 14:19:36 +0100 Subject: query part included in location match after rewrite (bug or feature?)
In-Reply-To: <2d47d3048b079871bb12fe1e4e96dc9e.NginxMailingListEnglish@forum.nginx.org> References: <2d47d3048b079871bb12fe1e4e96dc9e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130913131936.GQ19345@craic.sysops.org> On Fri, Sep 13, 2013 at 08:48:55AM -0400, aldem wrote: Hi there, > According to documentation, "location" matches only against URI path, > ignoring query string. However, after "rewrite", when using variables > containing "?" character (like $request_uri for illustration), query becomes > part of $uri: My reading of this is that you must give rewrite an explicit ? to mark the query string, so a ? within a variable *is* part of $uri if it comes before a ?. If you want to have your own query string, or to lose the original query string during the rewrite, you must use an explicit ?. > So, my question is - is this expected behavior (just undocumented) or is a > bug? To me it looks like a bug - allowing matching anything past "?" in > location and making it part of $uri. ? is a valid character in a normalised uri. ? is special in an escaped uri, since it marks the query string. "rewrite" creates a normalised uri (at least, when given variables to expand). I'd say "expected", unless there's an example where "rewrite" does not create a normalised uri. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Sep 13 13:48:05 2013 From: nginx-forum at nginx.us (aldem) Date: Fri, 13 Sep 2013 09:48:05 -0400 Subject: query part included in location match after rewrite (bug or feature?) In-Reply-To: <20130913131936.GQ19345@craic.sysops.org> References: <20130913131936.GQ19345@craic.sysops.org> Message-ID: Well, this is where I am lost a bit - documentation only says "replacement string", and from my understanding this includes possible expansion of variables (like everywhere else), and it doesn't mention (or I couldn't find, at least) that rewrite target is normalized URI (or that variables could be processed differently). 
Probably, this is the only case where expansion is processed differently from literal values. On the other hand, if rewrite target expects a normalized URI, then mangling with arguments there should not be possible at all, as there could be mixture of normalized and escaped values in single string. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242829,242831#msg-242831 From francis at daoine.org Fri Sep 13 14:24:57 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Sep 2013 15:24:57 +0100 Subject: query part included in location match after rewrite (bug or feature?) In-Reply-To: References: <20130913131936.GQ19345@craic.sysops.org> Message-ID: <20130913142457.GR19345@craic.sysops.org> On Fri, Sep 13, 2013 at 09:48:05AM -0400, aldem wrote: Hi there, reading through ngx_http_rewrite_module.c and thinking about it some more, I believe I was wrong in my previous mail. I tried to simplify too much. > Well, this is where I am lost a bit - documentation only says "replacement > string", and from my understanding this includes possible expansion of > variables (like everywhere else), and it doesn't mention (or I couldn't > find, at least) that rewrite target is normalized URI (or that variables > could be processed differently). Probably, this is the only case where > expansion is processed differently from literal values. The only true documentation for what the current version does is in the directory called "src". Anything else is someone's interpretation. (In many cases, the interpretation is correct; but there are often reasons why some edge cases are not documented outside of the source.) My understanding is that rewrite will populate $uri with its "replacement" argument up to the first ?, and will populate $args after that if appropriate. While populating, it expands $variables. For internal-to-nginx use, $uri and $args now have their values. For external-to-nginx use, $uri and $args are concatenated with a separating ? 
if appropriate, and it is up to whatever reads the complete string to decide what to do with it. > On the other hand, if rewrite target expects a normalized URI, then mangling > with arguments there should not be possible at all, as there could be > mixture of normalized and escaped values in single string. The general philosophy of the nginx configuration seems to be that common things are straightforward, and uncommon things are no more than a module or a local patch away. And also, the administrator is trusted to know what they are doing, so it is their responsibility to properly handle unescaped and escaped values. In the specific case of escaping, nginx in general can't guess what the user wants. It still looks like NOTABUG to me; but I'm not the one who wrote it. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Sep 13 14:26:24 2013 From: nginx-forum at nginx.us (mpnally) Date: Fri, 13 Sep 2013 10:26:24 -0400 Subject: Problem with double slashes in urls with uwsgi and sockets In-Reply-To: <20130912232315.GO19345@craic.sysops.org> References: <20130912232315.GO19345@craic.sysops.org> Message-ID: Thanks for excellent reply, Francis. I have not had time yet to try it, but this looks like it will solve my problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242823,242833#msg-242833 From nginx-forum at nginx.us Fri Sep 13 14:51:09 2013 From: nginx-forum at nginx.us (aldem) Date: Fri, 13 Sep 2013 10:51:09 -0400 Subject: query part included in location match after rewrite (bug or feature?) 
In-Reply-To: <20130913142457.GR19345@craic.sysops.org> References: <20130913142457.GR19345@craic.sysops.org> Message-ID: <4e073cd8cfe5017ee717eba468dd7a48.NginxMailingListEnglish@forum.nginx.org> Francis, thank you for your time and looking through the source :) Though I still consider this issue a "bug" (either in documentation or in consistency), what really matters is that understanding how it works is very helpful, and you did a perfect job explaining this. Hopefully someone will someday describe this behavior in the documentation - some things in nginx are not exactly intuitive... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242829,242835#msg-242835 From jeroenooms at gmail.com Sat Sep 14 00:56:51 2013 From: jeroenooms at gmail.com (Jeroen Ooms) Date: Fri, 13 Sep 2013 17:56:51 -0700 Subject: request body and client_body_buffer_size Message-ID: Is it correct that when $content_length > client_body_buffer_size, then $request_body == "" ? If so this would be worth documenting at request_body. I am using: proxy_cache_methods POST; proxy_cache_key "$request_method$request_uri$request_body"; Which works for small requests, but for large requests clients got very strange results due to $request_body being empty and hence getting false cache hits for completely different form posts. Is there something available like $body_hash that can be used as a caching key even for large request bodies? Or alternatively, how would I configure nginx to not cache requests when content_length is larger than client_body_buffer_size? From nginx-forum at nginx.us Sat Sep 14 05:42:49 2013 From: nginx-forum at nginx.us (etrader) Date: Sat, 14 Sep 2013 01:42:49 -0400 Subject: How to serve subdomain from subfolder of the domain root?
Message-ID: <7d1cf59e0a474e8d41af3d7897757620.NginxMailingListEnglish@forum.nginx.org> In a server such as server { server_name domain.com *.domain.com; root /var/www/$server_name; } is it possible to set locations for subdomains based on subfolders of the $server_name ? location matching sub1.domain.com { serving from /var/www/$server_name/sub1 } Currently, I am using a server for each subdomain, but when the number of subdomains increases, maintaining numerous servers becomes messy. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242841,242841#msg-242841 From francis at daoine.org Sat Sep 14 08:57:56 2013 From: francis at daoine.org (Francis Daly) Date: Sat, 14 Sep 2013 09:57:56 +0100 Subject: How to serve subdomain from subfolder of the domain root? In-Reply-To: <7d1cf59e0a474e8d41af3d7897757620.NginxMailingListEnglish@forum.nginx.org> References: <7d1cf59e0a474e8d41af3d7897757620.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130914085756.GS19345@craic.sysops.org> On Sat, Sep 14, 2013 at 01:42:49AM -0400, etrader wrote: Hi there, > server { > server_name domain.com *.domain.com; > root /var/www/$server_name; > } http://nginx.org/r/server_name Look for "Named captures". > is it possible to set locations for subdomains based on subfolders of the > $server_name ? > > location matching sub1.domain.com { > serving from /var/www/$server_name/sub1 > } "location" doesn't match the hostname used. You could use a "map" to enumerate the hostnames you consider valid, with a default value for anything else; and use that in "root". > Currently, I am using a server for each subdomain, but when the number of > subdomains increases, maintaining numerous servers becomes messy. If *every* valid hostname is handled equivalently, you could get away with it being all in one server block.
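A sketch of the two approaches just mentioned; the hostnames and paths are illustrative, and the named-capture variant assumes every subdomain maps to a same-named subfolder:

```nginx
# Approach 1: a named capture in server_name, reused in root.
server {
    server_name ~^(?<sub>[^.]+)\.domain\.com$;
    root /var/www/domain.com/$sub;
}

# Approach 2: a map (at http{} level) enumerating the hostnames
# considered valid, with a default value for anything else.
map $host $site_root {
    default         /var/www/domain.com;
    sub1.domain.com /var/www/domain.com/sub1;
    sub2.domain.com /var/www/domain.com/sub2;
}

server {
    server_name domain.com *.domain.com;
    root $site_root;
}
```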
But you may want to look into "include", or using an external-to-nginx template system to turn your starting system into an nginx.conf with multiple server blocks, if different hostnames are handled differently. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Sat Sep 14 13:49:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Sep 2013 17:49:22 +0400 Subject: request body and client_body_buffer_size In-Reply-To: References: Message-ID: <20130914134922.GC29076@mdounin.ru> Hello! On Fri, Sep 13, 2013 at 05:56:51PM -0700, Jeroen Ooms wrote: > Is it correct that when $content_length > client_body_buffer_size, > then $request_body == "" ? If so this would be worth documenting at > request_body. Yes, it's intended behaviour. If a request body is larger than client_body_buffer_size, it's written to disk and not available in memory, hence no $request_body. The limitation is more or less obvious, and it's also explicitly documented here in the $r->request_body() method documentation: http://nginx.org/en/docs/http/ngx_http_perl_module.html#methods It might be worth adding a short reference to the $request_body variable description though. > I am using: > > proxy_cache_methods POST; > proxy_cache_key "$request_method$request_uri$request_body"; > > Which works for small requests, but for large requests clients got > very strange results due to $request_body being empty and hence > getting false cache hits for completely different form posts. > > Is there something available like $body_hash that can be used as a > caching key even for large request bodies? Or alternatively, how > would I configure nginx to not cache requests when content_length > is larger than client_body_buffer_size? Using proxy_no_cache $request_body_file; should do the trick, see http://nginx.org/r/proxy_no_cache.
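Combined with the proxy_cache_key from the question, the suggestion could look like this sketch (the proxy_cache_bypass line is an extra assumption, not part of the advice above; it additionally stops such requests from being answered out of the cache):

```nginx
proxy_cache_methods POST;
proxy_cache_key     "$request_method$request_uri$request_body";

# $request_body_file is non-empty exactly when the body was spooled to
# disk (it exceeded client_body_buffer_size); $request_body is then
# empty and the key above would collide, so skip caching the response.
proxy_no_cache      $request_body_file;
proxy_cache_bypass  $request_body_file;
```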
-- Maxim Dounin http://nginx.org/en/donation.html From jeroenooms at gmail.com Sat Sep 14 18:15:39 2013 From: jeroenooms at gmail.com (Jeroen Ooms) Date: Sat, 14 Sep 2013 11:15:39 -0700 Subject: request body and client_body_buffer_size In-Reply-To: References: Message-ID: @ Maxim Dounin Thanks! This is very helpful. I have also set: client_body_buffer_size 1m; Could this setting have any side effects? I am not expecting too many large POST request. From what I read, client_body_buffer_size is actually the maximum amount of memory allocated. Does this mean that for small requests (e.g. without a body) there is no additional overhead introduced by raising this value? From mdounin at mdounin.ru Sat Sep 14 19:23:22 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 14 Sep 2013 23:23:22 +0400 Subject: request body and client_body_buffer_size In-Reply-To: References: Message-ID: <20130914192322.GF29076@mdounin.ru> Hello! On Sat, Sep 14, 2013 at 11:15:39AM -0700, Jeroen Ooms wrote: > @ Maxim Dounin > > Thanks! This is very helpful. I have also set: > > client_body_buffer_size 1m; > > Could this setting have any side effects? I am not expecting too many > large POST request. From what I read, client_body_buffer_size is > actually the maximum amount of memory allocated. Does this mean that > for small requests (e.g. without a body) there is no additional > overhead introduced by raising this value? Yes, it's not allocated if there is no request body, and only needed buffer is allocated if a request body is known to be smaller. On the other hand, it can be used as a DoS vector if an attacker is allowed to open many connections but you can't afford them all to allocate client_body_buffer_size buffer. Additionally, using such a big $request_body in proxy_cache_key implies various overheads. In particular, proxy_buffer_size should be set big enough to be able to contain cache header with a key. Not even talking about reading/writing cache files with such keys. 
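The trade-offs described above, as an illustrative sketch; the values are examples only, not recommendations:

```nginx
# Per-request body buffer: with N concurrent uploads the worst case
# is roughly N * 1m of memory, so cap the total body size as well.
client_body_buffer_size 1m;
client_max_body_size    2m;

# If $request_body is part of proxy_cache_key, the buffer that holds
# the cache header (which contains the key) must be large enough too.
proxy_buffer_size       1m;
```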
-- Maxim Dounin http://nginx.org/en/donation.html From paulnpace at gmail.com Sat Sep 14 21:06:25 2013 From: paulnpace at gmail.com (Paul N. Pace) Date: Sat, 14 Sep 2013 14:06:25 -0700 Subject: [DOC] Guide to Nginx + SSL + SPDY In-Reply-To: References: <201309122058.31530.vbart@nginx.com> Message-ID: Dear Mr. or Ms. mex, Could you please contact me paulnpace at gmail.com regarding this very useful guide you have created? I have some specific questions and I would also like to help out, if I can. Thanks! Paul On Thu, Sep 12, 2013 at 11:36 AM, mex wrote: > Hi Valentin, > >> >> In your section about BREACH requirements: >> > > correct(ed) > > > thanx > > mex > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242672,242818#msg-242818 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From agentzh at gmail.com Sun Sep 15 22:16:21 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 15 Sep 2013 15:16:21 -0700 Subject: [ANN] ngx_openresty mainline version 1.4.2.7 released Message-ID: Hello folks! I am happy to announce that the new mainline version of ngx_openresty, 1.4.2.7, is now released: http://openresty.org/#Download Special thanks go to all the contributors for making this happen! Below is the complete change log for this release, as compared to the last (mainline) release, 1.4.2.5: * upgraded LuaNginxModule to 0.8.9. * bugfix: the Nginx core does not send a default status line for the 101 status code. now we construct one ourselves in this case. * bugfix: nil "pool" option values led to errors in tcpsock:connect(). * bugfix: tcpsock:receive(0) could hang until new data arrived or the timeout error happened; now it always returns an empty string immediately. this new behaviour diverges from the LuaSocket library though. 
* bugfix: for SPDY requests, we (temporarily) disable the Lua API functions ngx.location.capture, ngx.location.capture_multi, and ngx.req.socket, which are known to have problems in SPDY mode. The SPDY compatibility issue will eventually get fixed in the near future. * refactor: removed our own "ctx->headers_sent" field because we should use Nginx core's "r->header_sent" instead. * upgraded EchoNginxModule to 0.48. * refactor: removed our own "ctx->headers_sent" field because we should use Nginx core's "r->header_sent" instead. * bugfix: "./configure" now always removes existing Makefile before trying to generate a new one. The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004002 Have fun! -agentzh From andrew.s.martin at gmail.com Mon Sep 16 05:32:57 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Mon, 16 Sep 2013 00:32:57 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: <20130911143117.GL19345@craic.sysops.org> References: <1378731640.9145735.kgt69elq@zebra-x17.ukr.net> <20130910164657.GF19345@craic.sysops.org> <20130911124311.GI19345@craic.sysops.org> <20130911143117.GL19345@craic.sysops.org> Message-ID: Francis, I ended up coming up with this solution:

map $request_uri $request_basename {
    ~/(?<captured_request_basename>[^/?]*)(?:\?|$) $captured_request_basename;
}

server {
    ....
    try_files $uri $uri/ @rewrite;

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_pass 127.0.0.1:9004;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location @rewrite {
        if (!-e $request_filename) {
            rewrite ^/article/[^/]+$ /index.php?title=$request_basename last;
        }
    }
    ....
} This allows me to visit this URL in the browser: http://mysite.com/article/my_test_page and have nginx internally load this page: http://mysite.com/index.php?title=my_test_page Thanks again for all of your help, Andrew On Wed, Sep 11, 2013 at 9:31 AM, Francis Daly wrote: > On Wed, Sep 11, 2013 at 08:32:09AM -0500, Andrew Martin wrote: > > Hi there, > > > Using the similar statement "try_files $uri $uri/ /index.php;", if I > visit > > this URL: > > http://mysite.com/index.php?title=my_test_page > > then the URL is rewritten to this, but it just loads the contents of > > index.php (without the title variable): > > When you say "the contents of", you mean "the unprocessed php", yes? > > In nginx, one request is handled in one location. > > You must put all of the configuration that you wish to apply to a request, > in the one location that handles that request. > > This means that if you have a "location = /index.php", then if the > request is for /index.php, your "location ~ php" will not be used. > > In this case, can I suggest that you use a slightly different approach, > to keep separate things separate? > > Something like (untested): > > try_files $uri $uri/ @fallback; > > location = /index.php { > # the browser requested /index.php > if ($arg_title != "") { > return 302 http://mysite.com/$arg_title; > } > fastcgi_pass 127.0.0.1:9000; > include fastcgi_params; > } > > location @fallback { > # the browser requested something that is not on the nginx filesystem > fastcgi_pass 127.0.0.1:9000; > fastcgi_param SCRIPT_FILENAME $document_root/index.php; > fastcgi_param QUERY_STRING title=$uri; > include fastcgi_params; > } > > but there will be rough edges there -- you may want the "include" line at > the start of the @fallback stanza rather than the end, and you probably > will need to tweak the QUERY_STRING param passed (to remove the leading / > on $uri, most likely). > > You can test to find what exactly is needed. 
> > Probably enabling the debug log until you are certain that you understand > what nginx is doing, will be helpful. > > > The php code is a custom page, not a pre-built CMS. It is doing an ajax > > call to load the content, but should be functionally-equivalent to this: > > > > > > <?php > > if (isset($_GET['title'])) { > > include($_GET['title'] . ".html"); > > } else { > > include("home.html"); > > } > > ?> > > > > > > Can I suggest that you test initially with that exact code, rather > than something that should be functionally equivalent? Keep it simple, > so that you can see at what point in the sequence things first fail. > > (Even better: just do something like "print $_GET['title']", so you can > see exactly what you received. After that works right, add the complexity.) > > If "ajax" means "makes further http requests of nginx", then you'll need > to make sure that the first one works before trying the subsequent ones. > > > If I go to this page: > > http://mysite.com/index.php?title=my_test_page > > I would like the client's browser to instead show this URL: > > http://mysite.com/my_test_page > > The "return" should do that. > > What you now also want is for, when you go to > http://mysite.com/my_test_page, that nginx knows to tell the fastcgi > server to process the index.php page with certain arguments. For that, > you must configure nginx correctly. > > Keep a very clear picture of how the browser, nginx, and the fastcgi/php > server communicate, and you'll be able to work out where things are not > doing what you expect them to do; and then you may be able to see what > to change, to get them to do what you want. > > > Does this help clarify what I am looking for? > > Building your own php framework, and making it work with nginx. > > If you search for how every other framework does this, you may find > useful hints as to how yours will work.
> Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ar at xlrs.de Mon Sep 16 12:11:05 2013 From: ar at xlrs.de (Axel) Date: Mon, 16 Sep 2013 14:11:05 +0200 Subject: multiple listen directives Message-ID: Hi all, I want to build an active/active cluster and therefore configure nginx to listen on multiple IP addresses. I read http://nginx.org/en/docs/http/ngx_http_core_module.html#listen but found no information if I can use multiple listen directives for ssl without activating the interface. Can I configure it like this? And are there any problems I will need to think of? Server A: 192.168.178.20 Server B: 192.168.178.30 Server C: 192.168.178.40 server { listen 192.168.178.20:443 ssl; listen 192.168.178.30:443 ssl; listen 192.168.178.40:443 ssl; server_name my.example.com; ... ... } Regards, Axel From igor at sysoev.ru Mon Sep 16 12:17:16 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 16 Sep 2013 16:17:16 +0400 Subject: multiple listen directives In-Reply-To: References: Message-ID: <076AB4E2-FE01-4653-B08C-6B2B04511883@sysoev.ru> On Sep 16, 2013, at 16:11 , Axel wrote: > Hi all, > > I want to build an active/active cluster and therefore configure nginx to listen on multiple IP addresses. > > I read http://nginx.org/en/docs/http/ngx_http_core_module.html#listen > but found no information if I can use multiple listen directives for ssl without activating the interface. > > Can I configure it like this? And are there any problems I will need to think of? > > Server A: 192.168.178.20 > Server B: 192.168.178.30 > Server C: 192.168.178.40 > > > server { > listen 192.168.178.20:443 ssl; > listen 192.168.178.30:443 ssl; > listen 192.168.178.40:443 ssl; > server_name my.example.com; > ... > ...
> } > > Regards, Axel > Add wildcard: listen *:443 ssl; listen 192.168.178.20:443 ssl; listen 192.168.178.30:443 ssl; listen 192.168.178.40:443 ssl; -- Igor Sysoev http://nginx.com From ar at xlrs.de Mon Sep 16 13:14:41 2013 From: ar at xlrs.de (Axel) Date: Mon, 16 Sep 2013 15:14:41 +0200 Subject: multiple listen directives In-Reply-To: <076AB4E2-FE01-4653-B08C-6B2B04511883@sysoev.ru> References: <076AB4E2-FE01-4653-B08C-6B2B04511883@sysoev.ru> Message-ID: Hi Igor, Am 16.09.2013 14:17, schrieb Igor Sysoev: >> >> server { >> listen 192.168.178.20:443 ssl; >> listen 192.168.178.30:443 ssl; >> listen 192.168.178.40:443 ssl; >> server_name my.example.com; >> ... >> ... >> } >> >> Regards, Axel >> > > Add wildcard: > > listen *:443 ssl; > listen 192.168.178.20:443 ssl; > listen 192.168.178.30:443 ssl; > listen 192.168.178.40:443 ssl; thanks for your reply. What happens when I add a wildcard this way? I found http://trac.nginx.org/nginx/ticket/187 As far as I understand this wildcard enables nginx to bind on one of the given interfaces?
URL: From mdounin at mdounin.ru Mon Sep 16 13:29:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Sep 2013 17:29:31 +0400 Subject: PHP, AJAX: large data being truncated on nginx In-Reply-To: References: Message-ID: <20130916132931.GE57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 06:48:48PM +0530, eric eric wrote: > Hello, > > My PHP installation is configured to accept 8M of post data and use to 128M > of physical memory. > > I have also configured following in nginx conf > > client_body_buffer_size 100k; > client_header_buffer_size 100k; > client_max_body_size 16m; > large_client_header_buffers 200 100k; > > When i try to post large data, some data from the last last textboxes is > truncated > There are over 200 textboxes on the page. > And i am getting the data in post only for 185 text boxes. I would suppose it's a php configuration problem, see here: http://www.php.net/manual/en/info.configuration.php#ini.max-input-vars -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Mon Sep 16 13:45:43 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Sep 2013 17:45:43 +0400 Subject: multiple listen directives In-Reply-To: References: <076AB4E2-FE01-4653-B08C-6B2B04511883@sysoev.ru> Message-ID: <20130916134543.GG57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 03:14:41PM +0200, Axel wrote: > Hi Igor, > > Am 16.09.2013 14:17, schrieb Igor Sysoev: > >> > >>server { > >> listen 192.168.178.20:443 ssl; > >> listen 192.168.178.30:443 ssl; > >> listen 192.168.178.40:443 ssl; > >> server_name my.example.com; > >>... > >>... > >>} > >> > >>Regards, Axel > >> > > > >Add wildcard: > > > > listen *:443 ssl; > > listen 192.168.178.20:443 ssl; > > listen 192.168.178.30:443 ssl; > > listen 192.168.178.40:443 ssl; > > thanks for your reply. > > What happens when i add a wildcard this way? I found > http://trac.nginx.org/nginx/ticket/187 > As far as I understand this wildcard enables nginx to bind on one of > the given interfaces? 
Or is this a "catch-all" for the server block? If a wildcard listen on a port is used anywhere in configuration, nginx will listen on a wildcard address and won't try to bind to individual addresses. In particular, this allows to configure listen directives with addresses not currently present on a host. Some details can be found at http://nginx.org/r/listen, see "bind" parameter description. > Do I need to add this wildcard to any enabled vHost? No, you don't. It's enough to add it anywhere in the configuration. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Sep 16 16:22:17 2013 From: nginx-forum at nginx.us (monkeybrain) Date: Mon, 16 Sep 2013 12:22:17 -0400 Subject: Status code 000 in the logs Message-ID: <5f8c6888ef50b3481b0b3b1ee4921d22.NginxMailingListEnglish@forum.nginx.org> What does status code 000 mean in the Nginx logs? I have the following in the config: location = /get-img.pl { limit_req zone=slow burst=10; proxy_pass http://service_perl; } get-img.pl generates an image, writes it to disk and then returns "X-Accel-Redirect" to the image. Everything seems to be working just fine, I see delayed responses because of limit_req in the logs (with code 200). Sometimes there are responses with code 503 for "greedy" clients that exceed the burst=10 parameter. All as expected. However, occasionally I see a bunch of requests (around 5 to 20 within a few seconds of each other) with status code 000 and 0 for the body size in the logs. They are always from the same IP address for the entire bunch, so I'm guessing it's requests that already went through internal "X-Accel-Redirect" redirect and then something happened. Connection aborted? Something else? Why not status code 503? That's where you come in with a helpful explanation. 
:) In case it is relevant: nginx version: nginx/1.4.2 built by gcc 4.1.2 20080704 (Red Hat 4.1.2-54) TLS SNI support disabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242893,242893#msg-242893 From mdounin at mdounin.ru Mon Sep 16 16:46:58 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Sep 2013 20:46:58 +0400 Subject: Status code 000 in the logs In-Reply-To: <5f8c6888ef50b3481b0b3b1ee4921d22.NginxMailingListEnglish@forum.nginx.org> References: <5f8c6888ef50b3481b0b3b1ee4921d22.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130916164658.GJ57081@mdounin.ru> Hello! On Mon, Sep 16, 2013 at 12:22:17PM -0400, monkeybrain wrote: > What does status code 000 mean in the Nginx logs? 
I have the following in > the config: > > location = /get-img.pl > { > limit_req zone=slow burst=10; > proxy_pass http://service_perl; > } > > get-img.pl generates an image, writes it to disk and then returns > "X-Accel-Redirect" to the image. Everything seems to be working just fine, I > see delayed responses because of limit_req in the logs (with code 200). > Sometimes there are responses with code 503 for "greedy" clients that exceed > the burst=10 parameter. All as expected. > > However, occasionally I see a bunch of requests (around 5 to 20 within a few > seconds of each other) with status code 000 and 0 for the body size in the > logs. They are always from the same IP address for the entire bunch, so I'm > guessing it's requests that already went through internal "X-Accel-Redirect" > redirect and then something happened. Connection aborted? Something else? > Why not status code 503? That's where you come in with a helpful > explanation. :) It happens if the client closes the connection while waiting for the limit_req delay, and the correct code to log is 499 (client closed request). This is already fixed in 1.5.3 with the following commit: http://hg.nginx.org/nginx/rev/aadfadd5af2b -- Maxim Dounin http://nginx.org/en/donation.html From ben at indietorrent.org Mon Sep 16 17:19:05 2013 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 16 Sep 2013 13:19:05 -0400 Subject: How to disable output buffering with PHP and nginx Message-ID: <52373D89.2000906@indietorrent.org> Hello, In an effort to resolve a different issue, I am trying to confirm that my stack is capable of servicing at least two simultaneous requests for a given PHP script. To confirm this, I have written a simple PHP script that runs for a specified period of time and outputs the number of seconds elapsed since the script was started. ----------------------------------------------- References: <076AB4E2-FE01-4653-B08C-6B2B04511883@sysoev.ru> <20130916134543.GG57081@mdounin.ru> Message-ID: Thanks.
This helps a lot. Regards, Axel Am 16.09.2013 15:45, schrieb Maxim Dounin: > Hello! > > On Mon, Sep 16, 2013 at 03:14:41PM +0200, Axel wrote: > >> Hi Igor, >> >> Am 16.09.2013 14:17, schrieb Igor Sysoev: >> >> >> >>server { >> >> listen 192.168.178.20:443 ssl; >> >> listen 192.168.178.30:443 ssl; >> >> listen 192.168.178.40:443 ssl; >> >> server_name my.example.com; >> >>... >> >>... >> >>} >> >> >> >>Regards, Axel >> >> >> > >> >Add wildcard: >> > >> > listen *:443 ssl; >> > listen 192.168.178.20:443 ssl; >> > listen 192.168.178.30:443 ssl; >> > listen 192.168.178.40:443 ssl; >> >> thanks for your reply. >> >> What happens when i add a wildcard this way? I found >> http://trac.nginx.org/nginx/ticket/187 >> As far as I understand this wildcard enables nginx to bind on one of >> the given interfaces? Or is this a "catch-all" for the server block? > > If a wildcard listen on a port is used anywhere in configuration, > nginx will listen on a wildcard address and won't try to bind to > individual addresses. In particular, this allows to configure > listen directives with addresses not currently present on a host. > > Some details can be found at http://nginx.org/r/listen, see "bind" > parameter description. > >> Do I need to add this wildcard to any enabled vHost? > > No, you don't. It's enough to add it anywhere in the > configuration. From nginx-forum at nginx.us Tue Sep 17 07:13:56 2013 From: nginx-forum at nginx.us (Aleus Essentia) Date: Tue, 17 Sep 2013 03:13:56 -0400 Subject: How can I use get arguments of POST-request in own module? Message-ID: <86a395f67240d0fa7e4443c865f84420.NginxMailingListEnglish@forum.nginx.org> Hello! I develop some module and I need recieve POST-request's arguments. GET-request and HTTP-header easy to use but how may POST-request were used I don't know... 
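For module developers with the same question: unlike GET arguments and headers, the request body is not yet available when a content handler first runs; it has to be requested asynchronously via ngx_http_read_client_request_body(). A minimal sketch of the usual pattern (the my_* names are hypothetical, and error handling is trimmed):

```c
/* Sketch: reading a POST body inside an nginx module.
 * The my_* identifiers are made-up names, not real nginx symbols. */

static void
my_body_handler(ngx_http_request_t *r)
{
    /* Called once the whole body has been read; it is available in
     * r->request_body->bufs -- a buffer chain that may be partly
     * backed by a temporary file for large bodies. */
    ngx_http_finalize_request(r, NGX_HTTP_OK);
}

static ngx_int_t
my_content_handler(ngx_http_request_t *r)
{
    ngx_int_t  rc;

    /* Ask nginx to read the body; my_body_handler fires when done. */
    rc = ngx_http_read_client_request_body(r, my_body_handler);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        return rc;
    }

    return NGX_DONE;  /* request completes asynchronously */
}
```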
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242908,242908#msg-242908 From nginx-forum at nginx.us Tue Sep 17 07:15:47 2013 From: nginx-forum at nginx.us (Aleus Essentia) Date: Tue, 17 Sep 2013 03:15:47 -0400 Subject: How can I use get arguments of POST-request in own module? In-Reply-To: <86a395f67240d0fa7e4443c865f84420.NginxMailingListEnglish@forum.nginx.org> References: <86a395f67240d0fa7e4443c865f84420.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0c2654d96ca637a4e8ed957e0d6f408f.NginxMailingListEnglish@forum.nginx.org> Some mistake in subject: *"use arguments" without "get". Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242908,242909#msg-242909 From mdounin at mdounin.ru Tue Sep 17 13:43:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 17:43:26 +0400 Subject: nginx-1.5.5 Message-ID: <20130917134326.GU57081@mdounin.ru> Changes with nginx 1.5.5 17 Sep 2013 *) Change: now nginx assumes HTTP/1.0 by default if it is not able to detect protocol reliably. *) Feature: the "disable_symlinks" directive now uses O_PATH on Linux. *) Feature: now nginx uses EPOLLRDHUP events to detect premature connection close by clients if the "epoll" method is used. *) Bugfix: in the "valid_referers" directive if the "server_names" parameter was used. *) Bugfix: the $request_time variable did not work in nginx/Windows. *) Bugfix: in the "image_filter" directive. Thanks to Lanshun Zhou. *) Bugfix: OpenSSL 1.0.1f compatibility. Thanks to Piotr Sikora. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Tue Sep 17 15:11:05 2013 From: nginx-forum at nginx.us (Claudio) Date: Tue, 17 Sep 2013 11:11:05 -0400 Subject: Broken pipe while sending request to upstream Message-ID: Hi. I've set up nginx as a proxy for a jetty service. Works nicely, most of the time, except ... when issuing a (somewhat) larger POST request to some entity which is protected by HTTP Basic access authentication. 
The web app responds with a 401 immediately, probably closing the connection right away: 127.0.0.1 - - [17/Sep/2013:14:17:38 +0000] "POST /scm/blub?cmd=unbundle HTTP/1.0" 401 1412 But nginx gratuitously insists on sending all the data, which fails eventually: 2013/09/17 16:17:38 [error] 22873#0: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 192.168.2.8, server: test.int, request: "POST /scm/blub?cmd=unbundle HTTP/1.1", upstream: "http://127.0.0.1:8082/scm/blub?cmd=unbundle", host: "test.int" I also tried different config options like enabling sendfile, increasing buffer and timeout sizes, but it didn't help. Is there some way to make this work? Is this a bug? I'm using Ubuntu 12.04 LTS on linux with nginx 1.1.19-1ubuntu0.2. Thanks for any help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242919,242919#msg-242919 From mdounin at mdounin.ru Tue Sep 17 15:39:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 Sep 2013 19:39:31 +0400 Subject: Broken pipe while sending request to upstream In-Reply-To: References: Message-ID: <20130917153931.GA57081@mdounin.ru> Hello! On Tue, Sep 17, 2013 at 11:11:05AM -0400, Claudio wrote: > Hi. > > I've set up nginx as a proxy for a jetty service. Works nicely, most of the > time, except > > ... when issuing a (somewhat) larger POST request to some entity which is > protected by HTTP Basic access authentication. 
> > The web app responds with a 401 immediately, probably closing the connection > right away: > > 127.0.0.1 - - [17/Sep/2013:14:17:38 +0000] "POST /scm/blub?cmd=unbundle > HTTP/1.0" 401 1412 > > But nginx gratuitously insists on sending all the data, which fails > eventually: > > 2013/09/17 16:17:38 [error] 22873#0: *1 writev() failed (32: Broken pipe) > while sending request to upstream, client: 192.168.2.8, server: test.int, > request: "POST /scm/blub?cmd=unbundle HTTP/1.1", upstream: > "http://127.0.0.1:8082/scm/blub?cmd=unbundle", host: "test.int" > > I also tried different config options like enabling sendfile, increasing > buffer and timeout sizes, but it didn't help. > > Is there some way to make this work? Is this a bug? As long as a connection is closed before nginx is able to get a response - it looks like a problem in your backend. Normally such connections need lingering close to make sure a client has a chance to read a response. -- Maxim Dounin http://nginx.org/en/donation.html From kworthington at gmail.com Wed Sep 18 02:12:57 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 17 Sep 2013 22:12:57 -0400 Subject: nginx-1.5.5 In-Reply-To: <20130917134326.GU57081@mdounin.ru> References: <20130917134326.GU57081@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.5 for Windows http://goo.gl/TglvA0 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. 
Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Sep 17, 2013 at 9:43 AM, Maxim Dounin wrote: > Changes with nginx 1.5.5 17 Sep > 2013 > > *) Change: now nginx assumes HTTP/1.0 by default if it is not able to > detect protocol reliably. > > *) Feature: the "disable_symlinks" directive now uses O_PATH on Linux. > > *) Feature: now nginx uses EPOLLRDHUP events to detect premature > connection close by clients if the "epoll" method is used. > > *) Bugfix: in the "valid_referers" directive if the "server_names" > parameter was used. > > *) Bugfix: the $request_time variable did not work in nginx/Windows. > > *) Bugfix: in the "image_filter" directive. > Thanks to Lanshun Zhou. > > *) Bugfix: OpenSSL 1.0.1f compatibility. > Thanks to Piotr Sikora. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Sep 18 02:34:38 2013 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 17 Sep 2013 22:34:38 -0400 Subject: nginx-1.5.5 (corrected URL) Message-ID: Hello Nginx users, (corrected link below) Now available: Nginx 1.5.5 for Windows http://goo.gl/mHIAeL (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. 
Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Sep 17, 2013 at 10:12 PM, Kevin Worthington wrote: > Hello Nginx users, > > Now available: Nginx 1.5.5 for Windows http://goo.gl/TglvA0 (32-bit and > 64-bit versions) > > These versions are to support legacy users who are already using Cygwin > based builds of Nginx. Officially supported native Windows binaries are at > nginx.org. > > Announcements are also available via my Twitter stream ( > http://twitter.com/kworthington), if you prefer to receive updates that > way. > > Thank you, > Kevin > -- > Kevin Worthington > kworthington *@* (gmail] [dot} {com) > http://kevinworthington.com/ > http://twitter.com/kworthington > > > > On Tue, Sep 17, 2013 at 9:43 AM, Maxim Dounin wrote: > >> Changes with nginx 1.5.5 17 Sep >> 2013 >> >> *) Change: now nginx assumes HTTP/1.0 by default if it is not able to >> detect protocol reliably. >> >> *) Feature: the "disable_symlinks" directive now uses O_PATH on Linux. >> >> *) Feature: now nginx uses EPOLLRDHUP events to detect premature >> connection close by clients if the "epoll" method is used. >> >> *) Bugfix: in the "valid_referers" directive if the "server_names" >> parameter was used. >> >> *) Bugfix: the $request_time variable did not work in nginx/Windows. >> >> *) Bugfix: in the "image_filter" directive. >> Thanks to Lanshun Zhou. >> >> *) Bugfix: OpenSSL 1.0.1f compatibility. >> Thanks to Piotr Sikora. >> >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Sep 18 06:52:39 2013 From: nginx-forum at nginx.us (Claudio) Date: Wed, 18 Sep 2013 02:52:39 -0400 Subject: Broken pipe while sending request to upstream In-Reply-To: <20130917153931.GA57081@mdounin.ru> References: <20130917153931.GA57081@mdounin.ru> Message-ID: Hi Maxim. Maxim Dounin Wrote: ------------------------------------------------------- > As long as a connection is closed before nginx is able to get a > response - it looks like a problem in your backend. Normally such > connections need lingering close to make sure a client has a chance > to read a response. Thanks for your prompt response! I read an illustrative description about the lingering close here (https://mail-archives.apache.org/mod_mbox/httpd-dev/199701.mbox/%3CPine.BSF.3.95.970121215226.12598N-100000 at alive.ampr.ab.ca%3E) and now better understand the problem per se. What I'm not getting straight is why nginx does not see the response (assuming it really was sent off by the server). Does nginx try to read data from the connection while sending or when an error occurs during send? (Sorry for those dumb questions, but obviously I don't have the slightest idea how nginx works...) According to jetty's documentation, "Jetty attempts to gently close all TCP/IP connections with proper half close semantics, so a linger timeout should not be required and thus the default is -1." Would this actually enable nginx to see the response from the server? Or is it really necessary to fully read the body before sending a response, as indicated by this (http://kudzia.eu/b/2012/01/switching-from-apache2-to-nginx-as-reverse-proxy/) post I found? I don't know for sure about the client, but nginx is talking via HTTP/1.1 to the web app. Is it possible to enable the Expect: 100-continue method for this connection so that nginx sees the early response? Alternatively, is it possible to work around this problem? 
Could I define some rules to the extent that say, if it is a POST request to that specific location _without_ an "Authorization" header present, strip the request body, set the content-length to 0 and then forward this request? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242919,242937#msg-242937 From shahzaib.cb at gmail.com Wed Sep 18 09:43:42 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 18 Sep 2013 14:43:42 +0500 Subject: How to get rid of 204 Intercepted requests of IDM ? Message-ID: Hello, We're running a video streaming website with nginx for streaming videos. The issue we're having is, whenever a user tries to stream any mp4 or flv file from our site using Chrome, IDM (Internet Download Manager) comes between the browser and the webserver, directly downloads that file, and doesn't let the player stream the file. We've found the following line in the Google Chrome inspect-element tool: 204 Intercepted by the IDM Advanced Integration Can someone guide me on how to block those 204 intercept requests in order to prevent IDM from bothering the player? There are so many users complaining about it. :( You can also check the attached sample file regarding the issue. Help would be highly appreciated. Thanks Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IDM-issue.png Type: image/png Size: 419117 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Sep 18 10:17:05 2013 From: nginx-forum at nginx.us (michelem) Date: Wed, 18 Sep 2013 06:17:05 -0400 Subject: sending POSTs to backend Message-ID: <3edbd7f9c9b71dd6cb7fe56005e1e651.NginxMailingListEnglish@forum.nginx.org> Hello folks, I use nginx in front of a django/fastcgi application, and we serve a subset of the URLs of this webapp statically. I use try_files to determine what's served statically and what goes to the backend.
Some of these URLs, however, contain forms and are at the same time sources and targets for them. The former can be served statically, while the latter should hit the backend. A simple "if ($request_method) { directive; }" would work, but I find no effective "directive" to send the request to the backend. Our basic setup looks like this: server { root /site/static_files/; try_files $uri $uri/ $uri/index.html @appsrv; if ($request_method = POST) { # solution 1: try_files /var/emtpy/.foobar @appsrv # solution 2: fastcgi_pass 1.2.3.4:1234; # all of the above fail: directive not allowed here # solution 3: return 321; (+ define error_page 321 in server {}) # I can't get the above to handle good and bad responses from the appsrv correctly } location @appsrv { include /usr/local/etc/nginx/fastcgi_params_django; fastcgi_pass 1.2.3.4:1234; } } Any suggestion to divert POST requests to the given named location? cheers michele Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242944,242944#msg-242944 From pigmej at gmail.com Wed Sep 18 13:09:35 2013 From: pigmej at gmail.com (Jedrzej Nowak) Date: Wed, 18 Sep 2013 15:09:35 +0200 Subject: ngx_lua + proxy_next_upstream Message-ID: Hello, I have configured: 1. ngx_lua as rewrite_by_lua_file 2. lua returns the upstream 3. Nginx connects to the upstream Works perfectly. The question is how I can achieve proxy_next_upstream. Preferably I would like to return to lua with an error reason. If the only way is to return several servers in the upstream from lua, how do I do so? I'm currently setting ngx.var.upstream and then proxy_pass http://$upstream. I suppose the simplest method would be to set $upstream in the correct format. But what about my preferred method? Cheers, Jędrzej Nowak -------------- next part -------------- An HTML attachment was scrubbed...
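On the ngx_lua question just above: one common workaround (a sketch under assumptions, not an answer from this thread) is to have the Lua code choose between named upstream *groups* rather than individual servers, since a variable used in proxy_pass is resolved against upstream blocks; nginx's own proxy_next_upstream then handles failover within the chosen group. All names and addresses below are placeholders:

```nginx
upstream pool_a {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;   # retried automatically on failure
}

server {
    listen 80;

    location / {
        set $upstream "";
        # hypothetical script that ends with: ngx.var.upstream = "pool_a"
        rewrite_by_lua_file /etc/nginx/pick_upstream.lua;

        proxy_next_upstream error timeout http_502;
        proxy_pass http://$upstream;
    }
}
```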
URL: From mdounin at mdounin.ru Wed Sep 18 13:22:37 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Sep 2013 17:22:37 +0400 Subject: Broken pipe while sending request to upstream In-Reply-To: References: <20130917153931.GA57081@mdounin.ru> Message-ID: <20130918132237.GF57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 02:52:39AM -0400, Claudio wrote: > Hi Maxim. > > Maxim Dounin Wrote: > ------------------------------------------------------- > > As long as a connection is closed before nginx is able to get a > > response - it looks like a problem in your backend. Normally such > > connections need lingering close to make sure a client has a chance > > to read a response. > > Thanks for your prompt response! > > I read an illustrative description about the lingering close here > (https://mail-archives.apache.org/mod_mbox/httpd-dev/199701.mbox/%3CPine.BSF.3.95.970121215226.12598N-100000 at alive.ampr.ab.ca%3E) > and now better understand the problem per se. > > What I'm not getting straight is why nginx does not see the response > (assuming it really was sent off by the server). Does nginx try to read data > from the connection while sending or when an error occurs during send? > (Sorry for those dumb questions, but obviously I don't have the slightest > idea how nginx works...) > > According to jetty's documentation, "Jetty attempts to gently close all > TCP/IP connections with proper half close semantics, so a linger timeout > should not be required and thus the default is -1." Would this actually > enable nginx to see the response from the server? Or is it really necessary > to fully read the body before sending a response, as indicated by this > (http://kudzia.eu/b/2012/01/switching-from-apache2-to-nginx-as-reverse-proxy/) > post I found? 
While sending a request nginx monitors the connection to see if there are any data available from the upstream (using the event method configured), and if there are, it reads the data (and handles it as a normal HTTP response). It doesn't try to read anything if it got a write error though, and an error will be reported if a backend closes the connection before nginx was able to see there are data available for reading. Playing with settings like sendfile, sendfile_max_chunk, as well as the TCP buffers configured in your OS might be helpful if your backend closes the connection too early. The idea is to make sure nginx won't be blocked for a long time in sendfile or so, and will be able to detect data available for reading before an error occurs during writing. > I don't know for sure about the client, but nginx is talking via HTTP/1.1 to > the web app. Is it possible to enable the Expect: 100-continue method for > this connection so that nginx sees the early response? No, "Expect: 100-continue" isn't something nginx is able to use while talking to backends. > Alternatively, is it possible to work around this problem? Could I define > some rules to the extent that say, if it is a POST request to that specific > location _without_ an "Authorization" header present, strip the request > body, set the content-length to 0 and then forward this request? You can, but I would rather recommend digging deeper into what goes on and fixing the root cause. -- Maxim Dounin http://nginx.org/en/donation.html From mat999 at gmail.com Wed Sep 18 13:36:32 2013 From: mat999 at gmail.com (SplitIce) Date: Wed, 18 Sep 2013 23:06:32 +0930 Subject: Limit IP req/s excl bots Message-ID: Does anyone know if there is any truth to this blog post: http://gadelkareem.com/2012/03/25/limit-requests-per-ip-on-nginx-using-httplimitzonemodule-and-httplimitreqmodule-except-whitelist/ And if so, whereabouts in the code is it implemented?
I was trying to find out if it's possible to use a map to control limit req/s limits, but I still can't find the code that does what that article says is possible. Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From mat999 at gmail.com Wed Sep 18 13:45:54 2013 From: mat999 at gmail.com (SplitIce) Date: Wed, 18 Sep 2013 23:15:54 +0930 Subject: Limit IP req/s excl bots In-Reply-To: References: Message-ID: I also think I found that zero-length keys won't be stored. This isn't in the documentation, but if I understand the code correctly they aren't; is this correct? I could use a map then and get considerable flexibility. On Wed, Sep 18, 2013 at 11:06 PM, SplitIce wrote: > Does anyone know if there is any truth to this blog post: > http://gadelkareem.com/2012/03/25/limit-requests-per-ip-on-nginx-using-httplimitzonemodule-and-httplimitreqmodule-except-whitelist/ > > And if so where about in the code its implemented? I was trying to find > out if its possible to use a map to control limit req/s limits but I still > cant find the code that does what that article says is possible. > > Regards, > Mathew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Sep 18 13:55:34 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Sep 2013 17:55:34 +0400 Subject: Limit IP req/s excl bots In-Reply-To: References: Message-ID: <201309181755.34355.vbart@nginx.com> On Wednesday 18 September 2013 17:45:54 SplitIce wrote: > I also think I found that 0 length keys wont be stored. This isn't in the > documentation but if I understand the code correctly they aren't, is this > correct? [...] Just a quote from the documentation (http://nginx.org/r/limit_req_zone): | The key is any non-empty value of the specified variable (empty values | are not accounted). wbr, Valentin V.
Bartenev From mdounin at mdounin.ru Wed Sep 18 13:56:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Sep 2013 17:56:00 +0400 Subject: Limit IP req/s excl bots In-Reply-To: References: Message-ID: <20130918135600.GI57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 11:15:54PM +0930, SplitIce wrote: > I also think I found that 0 length keys wont be stored. This isn't in the > documentation but if I understand the code correctly they aren't, is this > correct? It's in the documentation, see http://nginx.org/r/limit_req_zone: : ... The key is any non-empty value of the specified variable (empty : values are not accounted). ... > On Wed, Sep 18, 2013 at 11:06 PM, SplitIce wrote: > > > Does anyone know if there is any truth to this blog post: > > http://gadelkareem.com/2012/03/25/limit-requests-per-ip-on-nginx-using-httplimitzonemodule-and-httplimitreqmodule-except-whitelist/ > > > > And if so where about in the code its implemented? I was trying to find > > out if its possible to use a map to control limit req/s limits but I still > > cant find the code that does what that article says is possible. The configuration provided in the post linked doesn't do any whitelisting. Though it's mostly trivial to introduce one using a map{}. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Sep 18 14:22:21 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Sep 2013 18:22:21 +0400 Subject: sending POSTs to backend In-Reply-To: <3edbd7f9c9b71dd6cb7fe56005e1e651.NginxMailingListEnglish@forum.nginx.org> References: <3edbd7f9c9b71dd6cb7fe56005e1e651.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130918142221.GK57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 06:17:05AM -0400, michelem wrote: > Hello folks, > > I use nginx in front of a django/fastcgi application, and we serve a subset > of the URLs of this webapp statically. > > I use try_files to determine what's served statically and what goes to the > backend. 
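Maxim's remark in the limit_req thread above, that whitelisting is "mostly trivial to introduce using a map{}", is usually realized with a geo block feeding a map: whitelisted clients get an empty key, and per the limit_req_zone documentation empty values are not accounted. A sketch (the CIDR and zone parameters are placeholders):

```nginx
# http{} context
geo $whitelisted {
    default         0;
    192.168.0.0/16  1;   # placeholder: trusted/bot range to exempt
}

map $whitelisted $limit_key {
    0  $binary_remote_addr;  # normal clients: limited per address
    1  "";                   # empty key => request is not accounted
}

limit_req_zone $limit_key zone=perip:10m rate=5r/s;
```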
> > Some of these URLs, however, contain forms and are at the same time source > and targets for them. The former can be served statically, while the latter > should hit the backend. > > A simple "if ($request_method) { directive; }" would work, but I find no > effective "directive" to send the request to the backend. > > Our basic setup looks like this: > > server { > root /site/static_files/; > try_files $uri $uri/ $uri/index.html @appsrv; > > if ($request_method = POST) { > # solution 1: try_files /var/emtpy/.foobar @appsrv This isn't going to work. > # solution 2: fastcgi_pass 1.2.3.4:1234; This should work as long as you'll move the check into some location. It also might result in various problems in case of more ifs added, see http://wiki.nginx.org/IfIsEvil. Adding a "location /" is a good idea anyway. > # all of the above fail: directive not allowed here > # solution 3: return 321; (+ define error_page 321 in server {}) > # I can't get the above handle correctly good and bad responses from > the appsrv You mean - other error_pages doesn't work for you then? Try recursive_error_pages, see here: http://nginx.org/r/recursive_error_pages > } > > location @appsrv { > include /usr/local/etc/nginx/fastcgi_params_django; > fastcgi_pass 1.2.3.4:1234; > } > } > > any suggestion to divert POST requests to the given named location? See above. I would recommend "return + error_page" variant. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Sep 18 14:38:05 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Sep 2013 18:38:05 +0400 Subject: How to get rid of 204 Intercepted requests of IDM ? In-Reply-To: References: Message-ID: <20130918143805.GL57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 02:43:42PM +0500, shahzaib shahzaib wrote: > Hello, > > We're running a video streaming website with nginx for streaming > videos. 
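The "return + error_page" variant Maxim recommends in the sending-POSTs thread can be sketched as follows (418 is just an otherwise-unused status code; the backend address and paths are the ones from michelem's example):

```nginx
server {
    root /site/static_files/;
    recursive_error_pages on;

    location / {
        # re-dispatch code 418 to the named location
        error_page 418 = @appsrv;

        if ($request_method = POST) {
            return 418;   # POSTs always go to the backend
        }

        try_files $uri $uri/ $uri/index.html @appsrv;
    }

    location @appsrv {
        include /usr/local/etc/nginx/fastcgi_params_django;
        fastcgi_pass 1.2.3.4:1234;
    }
}
```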
The issue we're having is, whenever user try to stream any mp4 or > flv file from our site using chrome, IDM(Internet Download Manager) comes > between the browser and webserver and directly downloads that file and > doesn't player let stream that file. > > We've found the following line in google inspect element tool: > > 204 Intercepted by the IDM Advanced Integration > > Can someone guide me that how can i block those 204 intercept requests in > order to prevent IDM for bothering the player ? There are so many users > complaining for it. :( > > > You can also check the sample attach file regarding the issue. > > Help would be highly appreciated. Have you tried asking these guys directly? http://www.internetdownloadmanager.com/register/new_faq/bi19.html The "tell us" link seems to be a nop, but almost all FAQ pages have contact forms. -- Maxim Dounin http://nginx.org/en/donation.html From fry.kun at gmail.com Wed Sep 18 23:47:01 2013 From: fry.kun at gmail.com (Konstantin Svist) Date: Wed, 18 Sep 2013 16:47:01 -0700 Subject: Fedora 19 performs poorly Message-ID: <523A3B75.4080805@gmail.com> Note: same message also posted in -ru list I'm trying to migrate from Fedora 14 (2.6.35.14-97.fc14.x86_64) to Fedora 19 (3.10.11-200.fc19.x86_64)

worker_processes 40;

events {
    worker_connections 8000;
    use epoll;
}

http {
    proxy_headers_hash_max_size 8096;    # default was: 512
    proxy_headers_hash_bucket_size 128;  # default was: 64
    variables_hash_max_size 1024;
    variables_hash_bucket_size 128;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    charset utf-8;
    resolver 127.0.0.1;  # necessary for dynamic upstream resolution
    limit_req_log_level warn;
    proxy_intercept_errors on;

    server {
        listen 80;
        location = /service_check_nginx {
            echo "nginx";
        }
    }
}

Symptoms: * ab -n1000000 -c1000 'http://localhost/service_check_nginx' (parallel 4x, i.e. 4000 simultaneous connections) says that some requests take >3sec and/or fails to complete * netstat -s: ...
1269313 times the listen queue of a socket overflowed
1282868 SYNs to LISTEN sockets dropped
...
grows with speed of 2000/sec, sometimes more
* CPU load is not spread out evenly between workers:

  PID USER   PR NI   VIRT    RES  SHR S %CPU %MEM   TIME+ COMMAND
27671 nobody 20  0 418.6m 149.2m 1.1m S 51.1  0.1 0:00.77 nginx: worker process
27685 nobody 20  0 418.6m 149.2m 1.1m S 39.7  0.1 0:01.76 nginx: worker process
27661 nobody 20  0 418.6m 149.2m 1.1m S 22.7  0.1 0:01.63 nginx: worker process
27688 nobody 20  0 418.6m 149.2m 1.2m S 22.7  0.1 0:01.90 nginx: worker process
27697 nobody 20  0 418.6m 149.2m 1.1m S 17.0  0.1 0:00.95 nginx: worker process
27666 nobody 20  0 422.0m 152.3m 1.1m R  7.6  0.1 0:01.50 nginx: worker process
27701 nobody 20  0 419.3m 149.7m 1.1m S  1.9  0.1 0:00.01 nginx: worker process
27650 nobody 20  0 418.6m 149.9m 1.8m S  0.0  0.1 0:03.52 nginx: worker process
27658 nobody 20  0 418.6m 149.2m 1.1m S  0.0  0.1 0:01.30 nginx: worker process
27664 nobody 20  0 419.0m 149.5m 1.1m S  0.0  0.1 0:01.86 nginx: worker process
27669 nobody 20  0 418.6m 149.2m 1.1m S  0.0  0.1 0:00.35 nginx: worker process
27672 nobody 20  0 418.6m 149.2m 1.1m S  0.0  0.1 0:00.23 nginx: worker process

meanwhile, on F14:

  PID USER   PR NI  VIRT  RES SHR S %CPU %MEM     TIME+ COMMAND
30042 nobody 20  0 1224m 955m 17m R 41.2  0.4 523:24.45 nginx: worker process
30038 nobody 20  0 1224m 955m 17m S 39.4  0.4 522:24.30 nginx: worker process
30047 nobody 20  0 1224m 955m 17m R 39.4  0.4 520:35.36 nginx: worker process
30053 nobody 20  0 1224m 955m 17m R 39.4  0.4 520:42.77 nginx: worker process
30027 nobody 20  0 1224m 955m 17m S 37.6  0.4 520:55.20 nginx: worker process
30036 nobody 20  0 1224m 955m 18m R 37.6  0.4 525:26.07 nginx: worker process
30037 nobody 20  0 1224m 955m 17m S 37.6  0.4 523:59.09 nginx: worker process
30041 nobody 20  0 1224m 955m 17m R 37.6  0.4 529:31.88 nginx: worker process
30049 nobody 20  0 1224m 954m 17m R 37.6  0.4 519:58.73 nginx: worker process

By the way, if I set worker_connections 800 (worker_processes
40) and start up "ab -c1000 ..." -- then ab fails with an error (on F19 only; fine on F14). What should I do next? From nginx-forum at nginx.us Wed Sep 18 23:50:13 2013 From: nginx-forum at nginx.us (scianos) Date: Wed, 18 Sep 2013 19:50:13 -0400 Subject: HTTP_X_FORWARDED_FOR being truncated/prefixed with a comma and no IP for some requests Message-ID: <033118ce63395afd059df65a1a6d23d7.NginxMailingListEnglish@forum.nginx.org> Hi - I have confirmed an unusual situation in which it appears the leading address is being stripped from x-forwarded-for headers passed on to downstream hosts (running Apache in this case) on very specific requests. I haven't been able to determine a pattern that triggers the event. Has anyone else experienced this issue/seen anything similar? I've been managing nginx-based services for some time and this is the first event in which I've seen this behavior; I am at a loss. Kind regards, Stu Technical info: Example: HTTP_X_FORWARDED_FOR=, 10.2.8.141 SERVER_ADDR=10.5.7.112 REMOTE_ADDR=10.4.7.114 - note the leading "," on the x_forwarded_for header and the missing leading IP. 
Configuration example:

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://backend1/;
}
}

Version info: nginx version: nginx/1.2.6 (Ubuntu) TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.2.6/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.2.6/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.2.6/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.2.6/debian/modules/nginx-dav-ext-module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242970,242970#msg-242970 From mat999 at gmail.com Thu Sep 19 00:00:36 2013 From: mat999 at gmail.com (SplitIce) Date: Thu, 19 Sep 2013 09:30:36 +0930 Subject: Limit IP req/s excl bots In-Reply-To: <20130918135600.GI57081@mdounin.ru> References: <20130918135600.GI57081@mdounin.ru> Message-ID: My bad, I missed that sentence in the documentation. Thanks, I'll be using map. That blog led me down the wrong track.
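[Editor's note: the map{}-based whitelist Maxim refers to is usually built by pairing geo{} with map{}, since geo{} values cannot contain variables but map{} values can. Whitelisted clients produce an empty key, which limit_req_zone does not account. An untested sketch; the addresses are placeholders.]

```nginx
http {
    # Mark whitelisted addresses (placeholders) with 0, everyone else with 1.
    geo $limit {
        default        1;
        10.0.0.0/8     0;
        192.168.1.100  0;
    }

    # An empty key is not accounted by limit_req_zone, so whitelisted
    # clients are effectively unlimited.
    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }

    limit_req_zone $limit_key zone=perip:10m rate=5r/s;

    server {
        location / {
            limit_req zone=perip burst=10 nodelay;
        }
    }
}
```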
On Wed, Sep 18, 2013 at 11:26 PM, Maxim Dounin wrote: > Hello! > > On Wed, Sep 18, 2013 at 11:15:54PM +0930, SplitIce wrote: > > > I also think I found that 0 length keys wont be stored. This isn't in the > > documentation but if I understand the code correctly they aren't, is this > > correct? > > It's in the documentation, see http://nginx.org/r/limit_req_zone: > > : ... The key is any non-empty value of the specified variable (empty > : values are not accounted). ... > > > On Wed, Sep 18, 2013 at 11:06 PM, SplitIce wrote: > > > > > Does anyone know if there is any truth to this blog post: > > > > http://gadelkareem.com/2012/03/25/limit-requests-per-ip-on-nginx-using-httplimitzonemodule-and-httplimitreqmodule-except-whitelist/ > > > > > > And if so where about in the code its implemented? I was trying to find > > > out if its possible to use a map to control limit req/s limits but I > still > > > cant find the code that does what that article says is possible. > > The configuration provided in the post linked doesn't do any > whitelisting. Though it's mostly trivial to introduce one using a > map{}. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 19 00:58:41 2013 From: nginx-forum at nginx.us (bryndole) Date: Wed, 18 Sep 2013 20:58:41 -0400 Subject: Access to live limit_conn values? Message-ID: <88ab5623db244ef8b1d01c61d26ebd97.NginxMailingListEnglish@forum.nginx.org> I'm looking for a way to get the value of the current number of connections in a given limit_conn zone. There is no variable access that I can find. I'm considering hacking this into the stub_status module, so that all of the active connection zones are displayed with the current number of active connections.
Is this value accessible via the lua or perl interface? Having a way to just add this value to the log line might be sufficient. Bryn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242973,242973#msg-242973 From shahzaib.cb at gmail.com Thu Sep 19 06:50:16 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Sep 2013 11:50:16 +0500 Subject: How to get rid of 204 Intercepted requests of IDM ? In-Reply-To: <20130918143805.GL57081@mdounin.ru> References: <20130918143805.GL57081@mdounin.ru> Message-ID: Hello Maxim, We didn't ask yet. I'll get back to you with details. Thanks Shahzaib On Wed, Sep 18, 2013 at 7:38 PM, Maxim Dounin wrote: > Hello! > > On Wed, Sep 18, 2013 at 02:43:42PM +0500, shahzaib shahzaib wrote: > > > Hello, > > > > We're running a video streaming website with nginx for streaming > > videos. The issue we're having is, whenever user try to stream any mp4 or > > flv file from our site using chrome, IDM(Internet Download Manager) comes > > between the browser and webserver and directly downloads that file and > > doesn't player let stream that file. > > > > We've found the following line in google inspect element tool: > > > > 204 Intercepted by the IDM Advanced Integration > > > > Can someone guide me that how can i block those 204 intercept requests in > > order to prevent IDM for bothering the player ? There are so many users > > complaining for it. :( > > > > > > You can also check the sample attach file regarding the issue. > > > > Help would be highly appreciated. > > Have you tried asking these guys directly? > http://www.internetdownloadmanager.com/register/new_faq/bi19.html > > The "tell us" link seems to be a nop, but almost all FAQ pages > have contact forms.
> > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Sep 19 07:18:04 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Sep 2013 12:18:04 +0500 Subject: How to get rid of 204 Intercepted requests of IDM ? In-Reply-To: References: <20130918143805.GL57081@mdounin.ru> Message-ID: Isn't there anything i can do on server side ? On Thu, Sep 19, 2013 at 11:50 AM, shahzaib shahzaib wrote: > Hello Maxim, > > We didn't ask yet. I'll back to you with details. > > Thanks > Shahzaib > > > On Wed, Sep 18, 2013 at 7:38 PM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Sep 18, 2013 at 02:43:42PM +0500, shahzaib shahzaib wrote: >> >> > Hello, >> > >> > We're running a video streaming website with nginx for >> streaming >> > videos. The issue we're having is, whenever user try to stream any mp4 >> or >> > flv file from our site using chrome, IDM(Internet Download Manager) >> comes >> > between the browser and webserver and directly downloads that file and >> > doesn't player let stream that file. >> > >> > We've found the following line in google inspect element tool: >> > >> > 204 Intercepted by the IDM Advanced Integration >> > >> > Can someone guide me that how can i block those 204 intercept requests >> in >> > order to prevent IDM for bothering the player ? There are so many users >> > complaining for it. :( >> > >> > >> > You can also check the sample attach file regarding the issue. >> > >> > Help would be highly appreciated. >> >> Have you tried asking these guys directly? >> http://www.internetdownloadmanager.com/register/new_faq/bi19.html >> >> The "tell us" link seems to be a nop, but almost all FAQ pages >> have contact forms. 
>> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javi at lavandeira.net Thu Sep 19 08:54:58 2013 From: javi at lavandeira.net (Javier Lavandeira) Date: Thu, 19 Sep 2013 17:54:58 +0900 Subject: How to get rid of 204 Intercepted requests of IDM ? In-Reply-To: References: <20130918143805.GL57081@mdounin.ru> Message-ID: <740ABD24-E7D4-4A47-A43B-4E489E74488B@lavandeira.net> Hi Shahzaib, I don't think there's much to do on the server side. This is a third party client-side application that is taking over downloads. Maybe you could somewhat mitigate the problem by adding/changing some MIME types, but probably that wouldn't help much in this case. I would tell your users to uninstall IDM. Regards, -- Javi Lavandeira Twitter: @javilm Blog: http://www.lavandeira.net/blog/ On Sep 19, 2013, at 4:18 PM, shahzaib shahzaib wrote: > Isn't there anything i can do on server side ? > > > On Thu, Sep 19, 2013 at 11:50 AM, shahzaib shahzaib wrote: > Hello Maxim, > > We didn't ask yet. I'll back to you with details. > > Thanks > Shahzaib > > > On Wed, Sep 18, 2013 at 7:38 PM, Maxim Dounin wrote: > Hello! > > On Wed, Sep 18, 2013 at 02:43:42PM +0500, shahzaib shahzaib wrote: > > > Hello, > > > > We're running a video streaming website with nginx for streaming > > videos. The issue we're having is, whenever user try to stream any mp4 or > > flv file from our site using chrome, IDM(Internet Download Manager) comes > > between the browser and webserver and directly downloads that file and > > doesn't player let stream that file. 
> > > > We've found the following line in google inspect element tool: > > > > 204 Intercepted by the IDM Advanced Integration > > > > Can someone guide me that how can i block those 204 intercept requests in > > order to prevent IDM for bothering the player ? There are so many users > > complaining for it. :( > > > > > > You can also check the sample attach file regarding the issue. > > > > Help would be highly appreciated. > > Have you tried asking these guys directly? > http://www.internetdownloadmanager.com/register/new_faq/bi19.html > > The "tell us" link seems to be a nop, but almost all FAQ pages > have contact forms. > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Thu Sep 19 09:46:23 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Sep 2013 14:46:23 +0500 Subject: How to get rid of 204 Intercepted requests of IDM ? In-Reply-To: <740ABD24-E7D4-4A47-A43B-4E489E74488B@lavandeira.net> References: <20130918143805.GL57081@mdounin.ru> <740ABD24-E7D4-4A47-A43B-4E489E74488B@lavandeira.net> Message-ID: @Javi, Thanks for explaining things. @Maxim i've also contacted IDM through the link you provided me. I'll let you know about their reply. Regards. Shahzaib On Thu, Sep 19, 2013 at 1:54 PM, Javier Lavandeira wrote: > Hi Shahzaib, > > I don't think there's much to do on the server side. This is a third party > client-side application that is taking over downloads. Maybe you could > somewhat mitigate the problem by adding/changing some MIME types, but > probably that wouldn't help much in this case. > > I would tell your users to uninstall IDM. 
> > Regards, > > -- > Javi Lavandeira > Twitter: @javilm > * > * > *Blog:* http://www.lavandeira.net/blog/ > > > > On Sep 19, 2013, at 4:18 PM, shahzaib shahzaib > wrote: > > Isn't there anything i can do on server side ? > > > On Thu, Sep 19, 2013 at 11:50 AM, shahzaib shahzaib > wrote: > >> Hello Maxim, >> >> We didn't ask yet. I'll back to you with details. >> >> Thanks >> Shahzaib >> >> >> On Wed, Sep 18, 2013 at 7:38 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Sep 18, 2013 at 02:43:42PM +0500, shahzaib shahzaib wrote: >>> >>> > Hello, >>> > >>> > We're running a video streaming website with nginx for >>> streaming >>> > videos. The issue we're having is, whenever user try to stream any mp4 >>> or >>> > flv file from our site using chrome, IDM(Internet Download Manager) >>> comes >>> > between the browser and webserver and directly downloads that file and >>> > doesn't player let stream that file. >>> > >>> > We've found the following line in google inspect element tool: >>> > >>> > 204 Intercepted by the IDM Advanced Integration >>> > >>> > Can someone guide me that how can i block those 204 intercept requests >>> in >>> > order to prevent IDM for bothering the player ? There are so many users >>> > complaining for it. :( >>> > >>> > >>> > You can also check the sample attach file regarding the issue. >>> > >>> > Help would be highly appreciated. >>> >>> Have you tried asking these guys directly? >>> http://www.internetdownloadmanager.com/register/new_faq/bi19.html >>> >>> The "tell us" link seems to be a nop, but almost all FAQ pages >>> have contact forms. 
>>> >>> -- >>> Maxim Dounin >>> http://nginx.org/en/donation.html >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Sep 19 09:53:42 2013 From: nginx-forum at nginx.us (FooBarWidget) Date: Thu, 19 Sep 2013 05:53:42 -0400 Subject: Timeout serving large requests In-Reply-To: <6c6fa17c3f94b05445fdc41283c701d1.NginxMailingListEnglish@forum.nginx.org> References: <0FDAE934-545D-40ED-9521-41E225FC44A0@sonru.com> <6c6fa17c3f94b05445fdc41283c701d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3c9cd848f3e6431bb2344c00a2bf8797.NginxMailingListEnglish@forum.nginx.org> Hi BrindleFly. My name is Hongli Lai, and I am the CTO of Phusion and main developer behind Phusion Passenger. We noticed this topic moments ago. I apologize for the inconvenience that this problem has caused you, but I'd like to point out that your problem has got absolutely nothing to do with us selling a commercial version. For one, we take pride in writing stable, robust, well-performing software, whether we charge for it or not. The open source version of Phusion Passenger is a fully-featured, mature product that stands on its own, even without Enterprise enhancements. Deliberately making software fail with obscure errors, such as what *appears* to be the case for you, goes against every part of our philosophy and morals. Second, the *only* technical changes in the Enterprise version are as advertised on https://www.phusionpassenger.com/enterprise and as documented in the manual. 
There are absolutely no technical changes between the open source and the Enterprise version which would cause a difference in behavior like this. We only sell on features, not on core stability. In other words: the problem you're experiencing is most likely just a bug. If that's the case then the bug would exist in both the open source as well as the Enterprise version. Your problem description is most peculiar: you mentioned that compiling out the Phusion Passenger module in Nginx makes things work. However the Phusion Passenger module doesn't do *anything* unless you explicitly set `passenger_enabled` on for that context, and in no way changes Nginx's core behavior. Having said that, I do not exclude the possibility that there are subtle interactions which could still introduce incompatibilities. I would like to invite you, or anybody who experiences similar problems, to contact us through the Phusion Passenger community discussion forum: https://groups.google.com/forum/#!forum/phusion-passenger and to provide more information. If this really is a bug, we'll do our best to fix it, Enterprise user or not. With kind regards, Hongli Lai Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,242983#msg-242983 From nginx-forum at nginx.us Thu Sep 19 09:59:01 2013 From: nginx-forum at nginx.us (FooBarWidget) Date: Thu, 19 Sep 2013 05:59:01 -0400 Subject: Timeout serving large requests In-Reply-To: References: Message-ID: Hi Anatoly. In case you haven't seen my other reply: my name is Hongli Lai, and I am the CTO of Phusion and main developer behind Phusion Passenger. We noticed this topic moments ago. As I've mentioned in my other reply, we take pride in writing stable, robust, well-performing software, whether we charge for it or not. The open source version of Phusion Passenger is a fully-featured, mature product that stands on its own, even without Enterprise enhancements. 
Deliberately making software fail with obscure errors, such as what appears to be the case for BrindleFly, goes against every part of our philosophy and morals. I can assure you that the problems described in this topic have got absolutely nothing to do with us selling a commercial version. You mentioned "a lot of the same issues on Passenger bugtracker", and "the bug is years old". Can you point out which ones exactly? The described issue bears similarity to some old issues in Phusion Passenger 3 (and indeed, BrindleFly is using version 3). Those issues have been fixed in Phusion Passenger 4, released in May 2013. Even in the open source version. Version 4 is a major improvement and closed many issues on our bug tracker. In the past half year alone we've managed to reduce the number of outstanding issues in our issue tracker by 50%. You see, in Phusion Passenger 3, the core I/O engine was multithreaded and there were certain I/O patterns (which involve file uploads) that it didn't handle correctly. In version 4, the core I/O engine has been entirely rewritten in an evented manner, not unlike how Nginx itself works. All I/O patterns are now handled correctly. I would like to invite you, or anybody who experiences similar problems, to contact us through the Phusion Passenger community discussion forum: https://groups.google.com/forum/#!forum/phusion-passenger and to provide more information. If there is a bug, we'll do our best to fix it, Enterprise user or not. 
With kind regards, Hongli Lai Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236201,242984#msg-242984 From mdounin at mdounin.ru Thu Sep 19 10:36:23 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 14:36:23 +0400 Subject: HTTP_X_FORWARDED_FOR being truncated/prefixed with a comma and no IP for some requests In-Reply-To: <033118ce63395afd059df65a1a6d23d7.NginxMailingListEnglish@forum.nginx.org> References: <033118ce63395afd059df65a1a6d23d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130919103623.GP57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 07:50:13PM -0400, scianos wrote: > Hi - > > I have confirmed an unusual situation in which it appears the leading > address is being stripped from x-forwarded-for headers passed on to > downstream hosts (running Apache in this case) on very specific requests. I > haven't been able to determine a pattern that triggers the event. > > Has anyone else experienced this issue/seen anything similar? I've been > managing nginx-based services for some time and this is the first event in > which I've seen this behavior; I am at a loss. > > Kind regards, > Stu > > Technical info: > Example: > HTTP_X_FORWARDED_FOR=, 10.2.8.141 SERVER_ADDR=10.5.7.112 > REMOTE_ADDR=10.4.7.114 > - note the leading "," on the x_forwarded_for header and the missing leading > IP. This can easily happen if an original request contains an empty X-Forwarded-For header. See no problem here. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Sep 19 10:57:35 2013 From: nginx-forum at nginx.us (chrisrob) Date: Thu, 19 Sep 2013 06:57:35 -0400 Subject: =?UTF-8?Q?Root_inside_Location_Block=3F_Pitfalls_says_NO=2C_Beginner?= =?UTF-8?Q?=E2=80=99s_Guide_says_YES?= Message-ID: Practically the first two pages I read when starting with nginx were: http://wiki.nginx.org/Pitfalls which says "putting Root inside Location Block is BAD" - don't do it. 
and http://nginx.org/en/docs/beginners_guide.html which gives this as its example of a config file: The resulting configuration of the server block should look like this:

server {
    location / {
        root /data/www;
    }

    location /images/ {
        root /data;
    }
}

So I'm wondering which is right? Cheers, chrisrob Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242986,242986#msg-242986 From mdounin at mdounin.ru Thu Sep 19 11:08:31 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 15:08:31 +0400 Subject: Fedora 19 performs poorly In-Reply-To: <523A3B75.4080805@gmail.com> References: <523A3B75.4080805@gmail.com> Message-ID: <20130919110830.GR57081@mdounin.ru> Hello! On Wed, Sep 18, 2013 at 04:47:01PM -0700, Konstantin Svist wrote: > Note: same message also posted in -ru list Please see detailed response in nginx-ru@ list. It's actually not a good idea to cross-post. Short version for others here: from the information provided it looks more like a testing problem than a real problem. [...] -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Sep 19 11:28:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Sep 2013 15:28:13 +0400 Subject: =?UTF-8?Q?Re=3A_Root_inside_Location_Block=3F_Pitfalls_says_NO=2C_Beginner?= =?UTF-8?Q?=E2=80=99s_Guide_says_YES?= In-Reply-To: References: Message-ID: <20130919112813.GS57081@mdounin.ru> Hello! On Thu, Sep 19, 2013 at 06:57:35AM -0400, chrisrob wrote: > Practically the first two pages I read when starting with nginx were: > > http://wiki.nginx.org/Pitfalls > > which says "putting Root inside Location Block is BAD" - don't do it. > > and > > http://nginx.org/en/docs/beginners_guide.html > > which gives this as its example of a config file: > > The resulting configuration of the server block should look like this: > > server { > location / { > root /data/www; > } > location /images/ { > root /data; > } > } > > So I'm wondering which is right?
As you can see, the example in the beginner's guide uses _different_ roots for the locations configured, and hence it's very different from the example provided on the Pitfalls wiki page. Using the "root" directive inside a location block isn't bad per se. It's bad if you repeat it needlessly instead of using a single root inside a server{} block. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Sep 19 12:20:07 2013 From: nginx-forum at nginx.us (chrisrob) Date: Thu, 19 Sep 2013 08:20:07 -0400 Subject: =?UTF-8?Q?Re=3A_Root_inside_Location_Block=3F_Pitfalls_says_NO=2C_Beginner?= =?UTF-8?Q?=E2=80=99s_Guide_says_YES?= In-Reply-To: <20130919112813.GS57081@mdounin.ru> References: <20130919112813.GS57081@mdounin.ru> Message-ID: <1c5ad2669fe04c9fe42fb828430a3776.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim, That's pretty obvious I suppose - provided that a root at server context level is always the default in the absence of root in a location block. I had only been reading the docs for 30 mins, after downloading nginx, and that just jumped out at me. I'd better make sure I read all the docs before posting here again - but I still think "Pitfalls" could be more specific, moan, moan... Chris Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242986,242990#msg-242990 From shahzaib.cb at gmail.com Thu Sep 19 12:27:12 2013 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 19 Sep 2013 17:27:12 +0500 Subject: Send data in fragments with nginx !! Message-ID: Hello, We're running a video streaming site with nginx, and most of our users download videos via IDM (Internet Download Manager). We want to prevent this downloading by sending videos in fragments, so IDM will not be able to download the full video; instead it will cut the video into fragments of a few KB, and that way the user will not be able to play the video after downloading it in fragments, just like Dailymotion does.
We've also checked the connections-per-IP module of nginx, but that is not suitable for our environment. Is there any module nginx supports to fulfill our requirements? Pardon me for bad English. Regards Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon Sep 9 04:31:37 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 09 Sep 2013 16:31:37 +1200 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: References: Message-ID: <1378701097.2753.308.camel@steve-new> I think you need to do some regexp on the args

if ( $args ~ title=([^&]+) ) {
    rewrite ^(.*)title=([^&]+).*$ /article/$2? last;
}

Note... totally untested. Steve On Sun, 2013-09-08 at 23:01 -0500, Andrew Martin wrote: > Hello, > > > I have read through the nginx rewrite documentation and looked at > various examples, but can't figure out how to create a rewrite rule > for the following (if it is possible). I'd like to rewrite the URL of > a php file with a $_GET argument and replace it with just the value of > the $_GET argument. For example, I'd like to > replace /index.php?title=my_example_title with /my_example_title > or /article/my_example_title. I've tried several regular expressions > to match index.php, as well as the $args and $arg_title nginx > variables, but cannot get this working. For example: > rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect; > > > > Can anyone provide inside into how to correctly rewrite this type of > URL?
> > > Thanks, > > > Andrew > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From agentzh at gmail.com Fri Sep 20 00:34:01 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 19 Sep 2013 17:34:01 -0700 Subject: ngx_lua + proxy_next_upstream In-Reply-To: References: Message-ID: Hello! On Wed, Sep 18, 2013 at 6:09 AM, Jedrzej Nowak wrote: > The question is how can I archive proxy_next_upstream. > Preferably I would like to return to lua with a error reason. > If the only way is to return several servers in upstream from lua, how to do > so ? > If you want to return the control back to Lua and let your Lua code do the upstream retries or something, then you should use the ngx.location.capture() API instead to initiate an Nginx subrequest to ngx_proxy: http://wiki.nginx.org/HttpLuaModule#ngx.location.capture Regards, -agentzh From andrew.s.martin at gmail.com Fri Sep 20 04:33:28 2013 From: andrew.s.martin at gmail.com (Andrew Martin) Date: Thu, 19 Sep 2013 23:33:28 -0500 Subject: Rewrite URL to only show value of $_GET argument In-Reply-To: <1378701097.2753.308.camel@steve-new> References: <1378701097.2753.308.camel@steve-new> Message-ID: Steve, Thanks for the suggestion. How would this additional check change the solution I proposed on 9/16? It looks like it would prevent the rewrite from occurring if other arguments (instead of title) were present? Thanks again, Andrew On Sun, Sep 8, 2013 at 11:31 PM, Steve Holdoway wrote: > I think you need to do some regexp on the args > > if ( $args ~ title=([^&]+) { > rewrite ^(.*)title=([^&]+).*$ /article/$2? last; > } > > Note... totally untested. 
> > Steve > > On Sun, 2013-09-08 at 23:01 -0500, Andrew Martin wrote: > > Hello, > > > > > > I have read through the nginx rewrite documentation and looked at > > various examples, but can't figure out how to create a rewrite rule > > for the following (if it is possible). I'd like to rewrite the URL of > > a php file with a $_GET argument and replace it with just the value of > > the $_GET argument. For example, I'd like to > > replace /index.php?title=my_example_title with /my_example_title > > or /article/my_example_title. I've tried several regular expressions > > to match index.php, as well as the $args and $arg_title nginx > > variables, but cannot get this working. For example: > > rewrite ^/index\.php?title=(.*)$ http://www.mysite.com/$1 redirect; > > > > > > > > Can anyone provide insight into how to correctly rewrite this type of > > URL? > > > > > > Thanks, > > > > > > Andrew > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Sep 20 07:27:20 2013 From: nginx-forum at nginx.us (Claudio) Date: Fri, 20 Sep 2013 03:27:20 -0400 Subject: Broken pipe while sending request to upstream In-Reply-To: <20130918132237.GF57081@mdounin.ru> References: <20130918132237.GF57081@mdounin.ru> Message-ID: Hello Maxim, thanks a lot for your explanation. I've (kind of) solved the problem for now. I was testing with another proxy in between nginx and the Jetty server to see whether that would behave differently.
I just used Twitter's finagle, which is based on Netty and got a few error messages like this: 18.09.2013 11:59:58 com.twitter.finagle.builder.SourceTrackingMonitor handle FATAL: A server service unspecified threw an exception com.twitter.finagle.ChannelClosedException: ChannelException at remote address: localhost/127.0.0.1:8082 at com.twitter.finagle.NoStacktrace(Unknown Source) So I tried to dig deeper on the Jetty side of things. In the end, I just upgraded the web application running inside of Jetty and this solved the problem. Maybe I should make this a reflex: first update everything to the latest version before even trying to understand the problem, but that's not so easy to do in general... Thanks again! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242919,243005#msg-243005 From jen142 at promessage.com Sun Sep 22 17:11:50 2013 From: jen142 at promessage.com (jen142 at promessage.com) Date: Sun, 22 Sep 2013 10:11:50 -0700 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # Message-ID: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> I have a mail server on my lan. It exposes a WebUI over SSL on port:443. It currently only has 1-step, password authentication. 
I want to add a 2nd layer of authentication, and put that mail server behind an nginx server that: (1) adds BASIC authentication, and (2) after OK auth, transparently passes traffic to/from the mail server. Here's the nginx config I use to do this:

------------------------------------
upstream mail-secure {
    server mail.mydomain.com:443;
}

server {
    server_name passthru.mydomain.com;
    more_set_headers "Server: Secure WebMail";
    listen 1.2.3.4:12345 ssl spdy default_server;
    root /svr/data/passthru.mydomain.com;

    access_log /var/log/nginx/passthru.mydomain.com.12345.access.log main;
    error_log  /var/log/nginx/passthru.mydomain.com.12345.error.log error;
    rewrite_log on;

    ssl on;
    include includes/ssl_protocol.conf;
    ssl_verify_client off;
    ssl_certificate     "/svr/sec/ssl/ComodoCert/mydomain.crt";
    ssl_certificate_key "/svr/sec/ssl/ComodoCert/mydomain.key";
    add_header Strict-Transport-Security "max-age=315360000; includeSubdomains";

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable "MSIE [1-6].(?!.*SV1)";
    gzip_vary on;
    add_header Vary "Accept-Encoding";

    location / {
        auth_basic "Restricted Remote";
        auth_basic_user_file /svr/sec/auth/passwd.basic;
        proxy_pass https://mail-secure;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
------------------------------------

This works -- mostly. If I visit https://passthru.mydomain.com:12345, I get the Nginx BASIC auth dialog, like you'd expect. If I enter OK credentials, I'm passed thru to the mail server. Except that the 1st redirection from the server I get is to https://passthru.mydomain.com/h/search?mesg=welcome&init=true which fails because it's at the wrong port. NOTE that there's no ":12345" in the URL.
If I simply modify that URL from https://passthru.mydomain.com/h/search?mesg=welcome&init=true to https://passthru.mydomain.com:12345/h/search?mesg=welcome&init=true, adding the port, everything works after that. I can interact with & use the mail server's UI no problem. I suspect I need to pass an additional header, proxy parameter, etc -- but have no clue yet what/which. Any ideas/suggestions what's missing or wrong here? Thanks, Jen From francis at daoine.org Sun Sep 22 20:13:44 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Sep 2013 21:13:44 +0100 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # In-Reply-To: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> References: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> Message-ID: <20130922201344.GC19345@craic.sysops.org> On Sun, Sep 22, 2013 at 10:11:50AM -0700, jen142 at promessage.com wrote: Hi there, untested; and it may depend on exactly who is doing the redirecting, but does replacing this line: > proxy_set_header Host $host; with proxy_set_header Host $host:12345; change how it responds?
f -- Francis Daly francis at daoine.org From jen142 at promessage.com Sun Sep 22 20:28:02 2013 From: jen142 at promessage.com (jen142 at promessage.com) Date: Sun, 22 Sep 2013 13:28:02 -0700 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # In-Reply-To: <20130922201344.GC19345@craic.sysops.org> References: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> <20130922201344.GC19345@craic.sysops.org> Message-ID: <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> Hi Francis, On Sun, Sep 22, 2013, at 01:13 PM, Francis Daly wrote: > untested; and it may depend on exactly who is doing the redirecting, > but does replacing this line: > > > proxy_set_header Host $host; > > with > > proxy_set_header Host $host:12345; > > change how it responds? That sounded promising, but, unfortunately ... no. Same behavior -- initial response is without the port number; add it manually, and all's well. Jen From jen142 at promessage.com Sun Sep 22 20:32:27 2013 From: jen142 at promessage.com (jen142 at promessage.com) Date: Sun, 22 Sep 2013 13:32:27 -0700 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # In-Reply-To: <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> References: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> <20130922201344.GC19345@craic.sysops.org> <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> Message-ID: <1379881947.27959.25096605.1A745F8E@webmail.messagingengine.com> I lied! Sort of ... After making your suggested change, and restarting nginx, no change. BUT, after a machine reboot -- it now works as expected. Acts like something got stuck in some cache ... thanks a lot!
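For cases like this, where an upstream's redirects drop a non-standard port, nginx can also rewrite the upstream's Location headers itself via proxy_redirect. A minimal, untested sketch, reusing the hostnames and port from the configuration earlier in this thread:

```nginx
location / {
    auth_basic           "Restricted Remote";
    auth_basic_user_file /svr/sec/auth/passwd.basic;

    proxy_pass https://mail-secure;

    # $http_host carries both the host and the port the client connected
    # on, so the backend sees "passthru.mydomain.com:12345" rather than
    # the bare hostname.
    proxy_set_header Host $http_host;

    # If the backend still redirects to the bare hostname, rewrite the
    # Location header on the way back to the client.
    proxy_redirect https://passthru.mydomain.com/
                   https://passthru.mydomain.com:12345/;
}
```

Whether the Host header, the proxy_redirect rewrite, or both are needed depends on which component actually builds the redirect URL.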
From francis at daoine.org Sun Sep 22 20:36:02 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Sep 2013 21:36:02 +0100 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # In-Reply-To: <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> References: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> <20130922201344.GC19345@craic.sysops.org> <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> Message-ID: <20130922203602.GD19345@craic.sysops.org> On Sun, Sep 22, 2013 at 01:28:02PM -0700, jen142 at promessage.com wrote: > On Sun, Sep 22, 2013, at 01:13 PM, Francis Daly wrote: Hi there, > > proxy_set_header Host $host:12345; > That sounded promising, but, unfortunately ... no. > > Same beahvior -- initial reponse is without the portnum; add it > manually, and all's well. Fair enough. Can you learn which part of the system creates the initial response? And from what does it create it? With that information, you may be able to learn what needs changing to get the result you want. What is the output of curl -i https://passthru.mydomain.com:12345/ (possibly with a "-k" in there, if the cert is a problem)? f -- Francis Daly francis at daoine.org From jen142 at promessage.com Sun Sep 22 21:14:55 2013 From: jen142 at promessage.com (jen142 at promessage.com) Date: Sun, 22 Sep 2013 14:14:55 -0700 Subject: How to redirect only if/after a FAILED basic authentication? Message-ID: <1379884495.14867.25105377.0454D5F4@webmail.messagingengine.com> I'm setting up an auth-before-proxy_pass config. 
The following works now:

location / {
    root /dev/null;
    auth_basic "Restricted Remote";
    auth_basic_user_file /data/etc/security/auth/passwd.basic;
    proxy_pass https://mail-secure;
    proxy_set_header Host $host:12345;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

Now, if a visitor: (1) enters bad (or no) credentials (2) clicks "Cancel" on the BASIC auth dialog box the site displays a "401 Authorization Required" page. Instead, I want to add a rewrite on failed authorization. If I try:

location / {
    root /dev/null;
    auth_basic "Restricted Remote";
    auth_basic_user_file /data/etc/security/auth/passwd.basic;
+   error_page 401 = @redirect;
    proxy_pass https://mail-secure;
    proxy_set_header Host $host:12345;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

+ location @redirect {
+     rewrite ^(.*)$ http://someothersite.com permanent;
+ }

I get the redirect on EVERY visit -- never even getting the chance to enter credentials; i.e., the rewrite happens apparently BEFORE the auth step. I think this may be because: @ http://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error "401 Unauthorized: Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or **HAS NOT YET BEEN PROVIDED**.[2] The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication." and that I may have to do the @redirect only if some header says "failed". How do I redirect ONLY if there's been a failed AUTH?
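One mechanical detail behind this behaviour: with Basic authentication the browser's very first request carries no Authorization header at all, and nginx answers it with 401 to make the login dialog appear, so an unconditional error_page 401 fires before credentials are ever entered. A hedged, untested sketch that redirects only when credentials were actually submitted and rejected (the map variable and named-location names here are made up for illustration):

```nginx
# $http_authorization is empty on the browser's first request, before
# the user has typed anything, and non-empty once credentials are sent.
map $http_authorization $auth_attempted {
    ""      0;
    default 1;
}

server {
    # ... listen/ssl directives as in the working config above ...

    location / {
        auth_basic           "Restricted Remote";
        auth_basic_user_file /data/etc/security/auth/passwd.basic;
        error_page 401 = @auth_failed;
        proxy_pass https://mail-secure;
    }

    location @auth_failed {
        # Credentials were supplied but rejected: send the visitor away.
        if ($auth_attempted) {
            return 301 http://someothersite.com;
        }
        # No credentials yet: re-issue the challenge so the login
        # dialog can appear.
        auth_basic           "Restricted Remote";
        auth_basic_user_file /data/etc/security/auth/passwd.basic;
        proxy_pass https://mail-secure;
    }
}
```

This is only a sketch of the idea; intercepting 401 this way steps outside normal HTTP auth semantics, so clients that cache or pre-send credentials may behave unexpectedly.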
From jen142 at promessage.com Sun Sep 22 21:15:50 2013 From: jen142 at promessage.com (jen142 at promessage.com) Date: Sun, 22 Sep 2013 14:15:50 -0700 Subject: Nginx as an AUTH + proxy_pass in front of a mail server on the LAN; I'm missing something about passing the port # In-Reply-To: <20130922203602.GD19345@craic.sysops.org> References: <1379869910.1868.25046833.7812188C@webmail.messagingengine.com> <20130922201344.GC19345@craic.sysops.org> <1379881682.26178.25095817.32F4678C@webmail.messagingengine.com> <20130922203602.GD19345@craic.sysops.org> Message-ID: <1379884550.15554.25106993.2956DDF3@webmail.messagingengine.com> > Fair enough. Our responses "crossed in the mail"! :-) Thanks, Jen From francis at daoine.org Sun Sep 22 22:04:48 2013 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Sep 2013 23:04:48 +0100 Subject: How to redirect only if/after a FAILED basic authentication? In-Reply-To: <1379884495.14867.25105377.0454D5F4@webmail.messagingengine.com> References: <1379884495.14867.25105377.0454D5F4@webmail.messagingengine.com> Message-ID: <20130922220448.GE19345@craic.sysops.org> On Sun, Sep 22, 2013 at 02:14:55PM -0700, jen142 at promessage.com wrote: Hi there, > Now, if a visitor: > > (1) enters bad (or no) crendentials > (2) clicks "Cancel" on the BASIC auth dialog box > > the site displays a > > "401 Authorization Required" > > page. For accuracy: at point (1), the server sends the 401 response. At point (2), the browser chooses to display the 401 response that the server had previously sent. > Instead, I want to add a rewrite on failed authorization. Doing that will break http on your server. Probably not a good idea. But if you really want to, you can probably configure nginx to do it for you. > + error_page 401 = @redirect; > I get the redirect on EVERY visit -- never even getting the chance to > enter credentials; i.e., the rewrite happens apparently BEFORE the auth > step. Not quite. 
Think about the different outputs from curl -v http://your-site/ and curl -v -u user:pass http://your-site/ and why they happen. > and that I may have to do the @redirect only if some header says "failed". > > How do I redirect ONLY if there's been a failed AUTH? You get to define what you mean by "failed AUTH", since you don't want the "no valid credentials were provided" that nginx (and http) uses. Experiment with something like:

===
location @needauth {
    auth_basic "Restricted Remote";
    auth_basic_user_file htpasswd;
}

location / {
    if ($http_authorization = "") {
        error_page 490 = @needauth;
        return 490;
    }
    auth_basic "Restricted Remote";
    auth_basic_user_file htpasswd;
    error_page 401 = @redirect;
    # and the rest here
}
===

to see if it is close to what you want. But be aware that when you choose to break http on your server, you get to deal with any complaints from clients. Good luck with it, f -- Francis Daly francis at daoine.org From agentzh at gmail.com Sun Sep 22 22:46:47 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 22 Sep 2013 15:46:47 -0700 Subject: [ANN] ngx_openresty stable version 1.4.2.8 released Message-ID: Hello folks! I am happy to announce that the new stable version of ngx_openresty, 1.4.2.8, is now released: http://openresty.org/#Download Special thanks go to all the contributors for making this happen! Below is the complete change log for this release, as compared to the last (mainline) release, 1.4.2.7: * upgraded LuaNginxModule to 0.8.10. * bugfix: we did not declare the "level" local variable of "ngx_http_lua_ngx_log" at the beginning of the code block. thanks Edwin Cleton for the report. * docs: documented more limitations in the current implementation. * docs: avoided using module() and also recommended the lua-releng tool to locate misuse of Lua globals.
The HTML version of the change log with some helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004002 The following components are bundled in this version: * LuaJIT-2.0.2 * array-var-nginx-module-0.03rc1 * auth-request-nginx-module-0.2 * drizzle-nginx-module-0.1.6 * echo-nginx-module-0.48 * encrypted-session-nginx-module-0.03 * form-input-nginx-module-0.07 * headers-more-nginx-module-0.22 * iconv-nginx-module-0.10 * lua-5.1.5 * lua-cjson-1.0.3 * lua-rds-parser-0.05 * lua-redis-parser-0.10 * lua-resty-dns-0.10 * lua-resty-memcached-0.11 * lua-resty-mysql-0.13 * lua-resty-redis-0.15 * lua-resty-string-0.08 * lua-resty-upload-0.08 * memc-nginx-module-0.13 * nginx-1.4.2 * ngx_coolkit-0.2rc1 * ngx_devel_kit-0.2.18 * ngx_lua-0.8.10 * ngx_postgres-1.0rc3 * rds-csv-nginx-module-0.05rc2 * rds-json-nginx-module-0.12rc10 * redis-nginx-module-0.3.6 * redis2-nginx-module-0.10 * set-misc-nginx-module-0.22 * srcache-nginx-module-0.22 * xss-nginx-module-0.03rc9 Just a quick heads-up: we're going to include proper WebSocket support in the next mainline version :) Enjoy! -agentzh From haifeng.813 at gmail.com Mon Sep 23 01:59:29 2013 From: haifeng.813 at gmail.com (=?utf-8?B?5rW35bOwIOWImA==?=) Date: Mon, 23 Sep 2013 09:59:29 +0800 Subject: Log module question: does the buffer mess up the order of the log entries? Message-ID: <6702FF1E-26CC-4175-A059-6340F0BCCF2D@gmail.com> Hi experts, I am reading the log module source code, and there is something I can't be sure about, so I am asking for your help. The access log module uses a buffer to hold log entries before writing to the file system. The buffer is initialised before the worker processes are forked, so I guess after the fork() each worker has a copy; this also explains why there are no lock-unlock operations while using the buffer.
To be sure about that, I did a simple test:

1, configure nginx to use a 16k access log buffer, use the default keep-alive time (65), work in master-workers mode with a few worker processes;
2, open one browser, access the nginx server, refresh a few times: no access log is generated;
3, open another browser, do the same thing as 2, until the access log is flushed.

I think there is a chance that the two browsers were served by different worker processes, and log entries may be buffered in different buffers; whichever buffer gets full first will be flushed first. According to that, the order of the log entries could be messed up. Unfortunately, I didn't see that after testing a few times. My question is: am I wrong about the log module behaviour, or did I not test it the right way? From pigmej at gmail.com Mon Sep 23 07:16:19 2013 From: pigmej at gmail.com (Jedrzej Nowak) Date: Mon, 23 Sep 2013 09:16:19 +0200 Subject: ngx_lua + proxy_next_upstream In-Reply-To: References: Message-ID: Hey, Thanks for your reply. Is there any good "example" of the thing I want to achieve? Shall I create something like:

location @blah {
    # here the "normal" configuration for LB
}

location / {
    # here the LUA logic
    # probably with share_all_vars=true
    # subrequest to @blah
}

Is something like that recommended, or how should it be done? Pozdrawiam Jędrzej Nowak On Fri, Sep 20, 2013 at 2:34 AM, Yichun Zhang (agentzh) wrote: > Hello! > > On Wed, Sep 18, 2013 at 6:09 AM, Jedrzej Nowak wrote: > > The question is how can I achieve proxy_next_upstream. > > Preferably I would like to return to lua with an error reason. > > If the only way is to return several servers in upstream from lua, how > to do > > so ?
> > > If you want to return the control back to Lua and let your Lua code do > the upstream retries or something, then you should use the > ngx.location.capture() API instead to initiate an Nginx subrequest to > ngx_proxy: > > http://wiki.nginx.org/HttpLuaModule#ngx.location.capture > > Regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Sep 23 09:51:57 2013 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 23 Sep 2013 11:51:57 +0200 Subject: nginx-1.5.5 In-Reply-To: <20130917134326.GU57081@mdounin.ru> References: <20130917134326.GU57081@mdounin.ru> Message-ID: Hi! > *) Bugfix: OpenSSL 1.0.1f compatibility. > Thanks to Piotr Sikora. Since SSL_OP_MSIE_SSLV2_RSA_PADDING is more than obsolete now, shouldn't we remove it completely instead of just ifdef'ing it? At least in the 1.5 branch? Thanks!
Lukas From vill.srk at gmail.com Mon Sep 23 10:03:34 2013 From: vill.srk at gmail.com (Vil Surkin) Date: Mon, 23 Sep 2013 13:03:34 +0300 Subject: Compare variable got from location with set of strings Message-ID: Hello, I have some location in configuration like this: location ~ /([A-z]+_[A-z0-9]+) { ... do something (got $1) ... } And I need to compare this '$1' with a set of strings. How can I do this? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Sep 23 11:36:41 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 15:36:41 +0400 Subject: Log module question: does the buffer mess up the order of the log entries? In-Reply-To: <6702FF1E-26CC-4175-A059-6340F0BCCF2D@gmail.com> References: <6702FF1E-26CC-4175-A059-6340F0BCCF2D@gmail.com> Message-ID: <20130923113641.GC2170@mdounin.ru> Hello! On Mon, Sep 23, 2013 at 09:59:29AM +0800, 海峰 刘 wrote: > Hi experts, > > I am reading the log module source code, there is something > difficult to make sure, so I ask for your help. > > Access log module use a buffer to buffer log entries before > writing to the file system, the buffer is initialised before the > worker processes are forked, so I guess after the fork(), each > worker has a copy, this also explains why there is no > lock-unlock operations while using the buffer. To be sure about > that, I did a simple test: > > 1, configure nginx to use 16k access log buffer, use the default > keep-alive time(65), work in master-workers mode with a few > worker processes; > 2, open one browser, access nginx server, refresh a few times, > no access log generated; > 3, open another browser, do the same thing as 2, until the > access log was flushed; > > I think there is a chance that the two browser was served by > different worker processes, and log entries may be buffered in > different buffers, which buffer get full first, which will be > flush first.
According to that, the order of the log entries could > be messed up. Unfortunately, I didn't see that after testing for > a few times. > > My question is, Am I wrong about the log module behaviour, or I > didn't get the right way to test it? Yes, with buffering used, log entries may easily be out of order. (Moreover, even without buffering nothing is guaranteed, even within a single process - a request made and served later from the client's point of view might end up being logged earlier. Mostly because logging happens once nginx thinks a request is complete, and this might disagree with the client's point of view.) To somewhat limit possible misordering of log entries with buffering, there is the "flush" argument of the "access_log" directive as introduced in nginx 1.3.10. It's not normally needed on loaded servers, as reasonably-sized buffers are filled in seconds, but may help in case of a varying load. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Sep 23 17:43:06 2013 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 23 Sep 2013 13:43:06 -0400 Subject: Transforming nginx for Windows In-Reply-To: <020a1514ec8b1fe773557616b307c2a2.NginxMailingListEnglish@forum.nginx.org> References: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> <020a1514ec8b1fe773557616b307c2a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Lua compiled in! Transforming nginx for Windows: http://forum.nginx.org/read.php?2,242426 https://groups.google.com/forum/#!forum/openresty-en (Lua nginx compiled for nginx windows) Builds can be found here: http://nginx-win.ecsds.eu/ 10:37 23-9-2013: nginx 1.5.6.1 Alice Based on nginx 1.5.6 (22-9-2013) with; + Streaming with nginx-rtmp-module, v1.0.4 (http://nginx-rtmp.blogspot.nl/) + lua-nginx-module v0.8.9 (tnx to agentzh about precompiled headers!)
+ LuaJIT-2.0.2 => (lua51.dll include / lua51.lib build) + Added lua51.dll (is required) + ngx_devel_kit v0.2.15 * Additional specifications are like 10:27 10-9-2013: B02 build Todo: - Still working on the multiple worker issue. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,243071#msg-243071 From vbart at nginx.com Mon Sep 23 17:48:29 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 23 Sep 2013 21:48:29 +0400 Subject: Log module question: does the buffer mess up the order of the log entries? In-Reply-To: <6702FF1E-26CC-4175-A059-6340F0BCCF2D@gmail.com> References: <6702FF1E-26CC-4175-A059-6340F0BCCF2D@gmail.com> Message-ID: <201309232148.29240.vbart@nginx.com> On Monday 23 September 2013 05:59:29 海峰 刘 wrote: > Hi experts, > > I am reading the log module source code, there is something difficult to > make sure, so I ask for your help. > > Access log module use a buffer to buffer log entries before writing to the > file system, the buffer is initialised before the worker processes are > forked, so I guess after the fork(), each worker has a copy, this also > explains why there is no lock-unlock operations while using the buffer. To > be sure about that, I did a simple test: > > 1, configure nginx to use 16k access log buffer, use the default keep-alive > time(65), work in master-workers mode with a few worker processes; 2, open > one browser, access nginx server, refresh a few times, no access log > generated; 3, open another browser, do the same thing as 2, until the > access log was flushed; > > I think there is a chance that the two browser was served by different > worker processes, and log entries may be buffered in different buffers, > which buffer get full first, which will be flush first. According that, > the order of the log entries could be messed up. Unfortunately, I didn't > see that after testing for a few times. > > My question is, Am I wrong about the log module behaviour, or I didn't get > the right way to test it?
The order of access log entries is undefined anyway if you have more than one worker process. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Sep 23 18:34:28 2013 From: nginx-forum at nginx.us (Pommi) Date: Mon, 23 Sep 2013 14:34:28 -0400 Subject: reverse proxy removes Transfer-Encoding: chunked Message-ID: <8d8c1bb9d35867ba56253bce8051bc62.NginxMailingListEnglish@forum.nginx.org> I'm trying to set up an nginx (1.4.1) reverse proxy to a HornetQ API using this configuration:

proxy_http_version 1.1;
proxy_set_header Host $host;

upstream app {
    server 127.0.0.1:8000;
    keepalive 8;
}

server {
    listen 0.0.0.0:7000;
    server_name localhost;
    location / { deny all; }
    location = /messaging/ {
        proxy_pass http://app/messaging/;
        proxy_buffering off;
    }
}

After a lot of tcpdumping I see that nginx removes the Transfer-Encoding header and sets the Content-Length header in place of it, which is the length of the first 'chunk'. After that the connection gets reset. When sending the following headers:

POST /messaging/ HTTP/1.1
Host: localhost
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Content-Transfer-Encoding: binary
User-Agent: org.jboss.netty.channel.socket.http.HttpTunnelingClientSocketChannel

nginx will forward them like:

POST /messaging/ HTTP/1.1
Host: localhost
Content-Length: 60
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
User-Agent: org.jboss.netty.channel.socket.http.HttpTunnelingClientSocketChannel

Is this normal behaviour?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243073,243073#msg-243073 From mdounin at mdounin.ru Mon Sep 23 19:42:00 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Sep 2013 23:42:00 +0400 Subject: reverse proxy removes Transfer-Encoding: chunked In-Reply-To: <8d8c1bb9d35867ba56253bce8051bc62.NginxMailingListEnglish@forum.nginx.org> References: <8d8c1bb9d35867ba56253bce8051bc62.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130923194200.GJ2170@mdounin.ru> Hello! On Mon, Sep 23, 2013 at 02:34:28PM -0400, Pommi wrote: > I'm trying to setup a nginx (1.4.1) reverse proxy to a HornetQ API using > this configuration: > > proxy_http_version 1.1; > proxy_set_header Host $host; > upstream app { > server 127.0.0.1:8000; > keepalive 8; > } > server { > listen 0.0.0.0:7000; > server_name localhost; > location / { deny all; } > location = /messaging/ { > proxy_pass http://app/messaging/; > proxy_buffering off; > } > } > > After a lot of tcpdumping I see that that nginx removes the > Transfer-Encoding header and sets the Content-Length header in place of it, > which the length of the first 'chunk'. After that the connection gets > reset. > > When sending the following headers: > > POST /messaging/ HTTP/1.1 > Host: localhost > Content-Type: application/octet-stream > Transfer-Encoding: chunked > Content-Transfer-Encoding: binary > User-Agent: > org.jboss.netty.channel.socket.http.HttpTunnelingClientSocketChannel > > nginx will forward them like: > > POST /messaging/ HTTP/1.1 > Host: localhost > Content-Length: 60 > Content-Type: application/octet-stream > Content-Transfer-Encoding: binary > User-Agent: > org.jboss.netty.channel.socket.http.HttpTunnelingClientSocketChannel > > Is this normal behaviour? Yes, it's expected behaviour. Except it uses length of the full request, not length of the first chunk. 
-- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Sep 23 20:38:53 2013 From: nginx-forum at nginx.us (justin) Date: Mon, 23 Sep 2013 16:38:53 -0400 Subject: Preferred way to do redirects (rewrite or return) Message-ID: <477bb9159de664a8e70f30850cab9565.NginxMailingListEnglish@forum.nginx.org> What is the preferred way to do redirects? I know of two solutions: rewrite "^/help/?$" https://support.mydomain.com permanent; or location ~ ^/help/?$ { return 301 https://support.mydomain.com; } I think I like using a location block and a return statement. Which is faster, though, and which is the standard? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243075,243075#msg-243075 From vbart at nginx.com Mon Sep 23 20:45:45 2013 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 24 Sep 2013 00:45:45 +0400 Subject: Preferred way to do redirects (rewrite or return) In-Reply-To: <477bb9159de664a8e70f30850cab9565.NginxMailingListEnglish@forum.nginx.org> References: <477bb9159de664a8e70f30850cab9565.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201309240045.46004.vbart@nginx.com> On Tuesday 24 September 2013 00:38:53 justin wrote: > What is the preferred way to do redirects? I know of two solutions: > > rewrite "^/help/?$" https://support.mydomain.com permanent; > > or > > location ~ ^/help/?$ { > return 301 https://support.mydomain.com; > } > > I think I like using a location block and a return statement. Which is > faster, though, and which is the standard? > Faster: location =/help { return 301 https://support.mydomain.com; } location =/help/ { return 301 https://support.mydomain.com; } wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Sep 23 21:15:13 2013 From: nginx-forum at nginx.us (Sapherz) Date: Mon, 23 Sep 2013 17:15:13 -0400 Subject: Drainstop/Graceful stop?
Message-ID: <976ba94de81d9a51bce7e3cb83b7d948.NginxMailingListEnglish@forum.nginx.org>

Hi,

Sorry if this is a bit of a dumb question, but we're moving from NLB to using Nginx as a load balancer. What's the best way to do an equivalent of a drain-stop on one upstream server? Would it be a graceful stop, then a quick service restart with a new .conf that doesn't have the server in question in it, or is there a nicer way of doing it?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243077,243077#msg-243077

From nginx-forum at nginx.us Mon Sep 23 22:44:45 2013 From: nginx-forum at nginx.us (neoascetic) Date: Mon, 23 Sep 2013 18:44:45 -0400 Subject: Override proxy's incorrect Content-Type via mime settings In-Reply-To: References: Message-ID: <5bd5fb71d93ecd5cd486f35136fe6a34.NginxMailingListEnglish@forum.nginx.org>

I have the same problem, but for different types of files. I have found a workaround using the "map" directive, where the pattern is a URI extension and the value is a mime-type, but it seems a little bit weird and requires creating a map of all possible extensions. Is there any way to reuse the standard "mime.types" together with Content-Type detection by URI extension?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,215744,243078#msg-243078

From contact at jpluscplusm.com Mon Sep 23 23:14:57 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 24 Sep 2013 00:14:57 +0100 Subject: Compare variable got from location with set of strings In-Reply-To: References: Message-ID:

On 23 September 2013 11:03, Vil Surkin wrote:
> Hello,
>
> I have some location in configuration like this:
> location ~ /([A-z]+_[A-z0-9]+) {
>     ... do something (got $1) ...
> }
>
> And I need to compare this '$1' with a set of strings. How can I do this?

Use a named capture in the location line and the resulting variable won't get clobbered like a $1 would.
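A sketch of that suggestion (the capture name, the set of strings and the backend are hypothetical). Note also that `[A-z]` in the original pattern is a common trap: in ASCII it additionally matches the punctuation between `Z` and `a` (`[`, `\`, `]`, `^`, `_` and the backtick), so `[A-Za-z]` is probably what was meant:

```nginx
location ~ /(?<ctrl>[A-Za-z]+_[A-Za-z0-9]+) {
    # $ctrl keeps its value even if a later regex resets the positional $1
    if ($ctrl ~ "^(user_profile|admin_panel|api_v1)$") {  # hypothetical set
        return 403;
    }
    proxy_pass http://backend/$ctrl;  # hypothetical upstream
}
```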
HTH, Jonathan From igor at sysoev.ru Tue Sep 24 04:12:28 2013 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 24 Sep 2013 08:12:28 +0400 Subject: Drainstop/Graceful stop? In-Reply-To: <976ba94de81d9a51bce7e3cb83b7d948.NginxMailingListEnglish@forum.nginx.org> References: <976ba94de81d9a51bce7e3cb83b7d948.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sep 24, 2013, at 1:15 , Sapherz wrote: > Hi, > Sorry if this is a but of a bumb question, but we're moving from NLB to > using Nginx as a load balancer. Whats the best way to do an equivelient of a > drain-stop to one upstram server? Would it be a graceful stop, then a quick > service restart with a new .conf that don't have the server in question in > it, or is there a nicer way of doing it? http://nginx.org/en/docs/control.html#reconfiguration -- Igor Sysoev http://nginx.com From foxbin at gmail.com Tue Sep 24 06:34:41 2013 From: foxbin at gmail.com (FoxBin) Date: Tue, 24 Sep 2013 14:34:41 +0800 Subject: [Ask for help] Questions about proxy_cache and ssi Message-ID: Hello list, I hava a nginx config problem , Please help look ! I use proxy_cache and ssi. *nginx config :* ssi on; ssi_silent_errors on; ssi_types text/shtml; proxy_temp_path /cache/proxy_temp; proxy_cache_path /cache/proxy_cache levels=1:2 keys_zone=tmp_cache:2000m inactive=10000d max_size=200G; proxy_next_upstream error timeout invalid_header http_504 http_500 http_502 http_503; *example:* * * http://x.com/1.shtml: hello world1 ! <\html> http://x.com/2.shtml: hello world2 ! <\html> ....... http://x.com/include/foot.html: old, i am include info ! <\html> *problem:* * * Edit this file: http://x.com/include/include/foot.html new, i am include info ! <\html> I use ngx_cache_purge to clear the cache : http://x.com/include/include/foot.html But do not take effect cache these pages : open http://x.com/1.shtml the content is : hello world1 ! old, i am include info ! <\html> open http://x.com/2.shtml the content is : hello world2 ! 
old, i am include info ! <\html>

I have to clear the cache in order for it to take effect:

http://x.com/purge/1.shtml
http://x.com/purge/2.shtml

open http://x.com/1.shtml, the content is: hello world1 ! new, i am include info ! <\html>
open http://x.com/2.shtml, the content is: hello world2 ! new, i am include info ! <\html>

If I have a lot of shtml files, I must clear many more caches:

http://x.com/purge/1.shtml
http://x.com/purge/2.shtml
http://x.com/purge/3.shtml
http://x.com/purge/4.shtml
http://x.com/purge/5.shtml
http://x.com/purge/6.shtml
......

I just want to clear one cache entry, http://x.com/purge/include/foot.html , and have all shtml pages take effect. Is there a way to solve this? Thanks in advance :)

From nginx-forum at nginx.us Tue Sep 24 06:43:39 2013 From: nginx-forum at nginx.us (mex) Date: Tue, 24 Sep 2013 02:43:39 -0400 Subject: [Ask for help] Questions about proxy_cache and ssi In-Reply-To: References: Message-ID: <44f9267b18b77771783c0399c379b0e4.NginxMailingListEnglish@forum.nginx.org>

Hello FoxBin,

can you please post your whole proxy_* config? Since your footer.html gets included and displayed via 1/2/3.html, this file itself will never get cached, and thus can never be purged, because it becomes part of the output of 1/2/3.html. "Simple" caching is done based on URLs.

regards, mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243085,243086#msg-243086

From pigmej at gmail.com Tue Sep 24 09:35:04 2013 From: pigmej at gmail.com (Jedrzej Nowak) Date: Tue, 24 Sep 2013 11:35:04 +0200 Subject: ngx_lua + proxy_next_upstream In-Reply-To: References: Message-ID:

Ok, I still have some problems. It works but not perfectly.
My config is:

location /test {
    internal;
    rewrite /test(.*) $1 break;

    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_connect_timeout 10;
    proxy_send_timeout 30;
    proxy_read_timeout 30;

    proxy_pass http://$upstream;
}

location / {
    set $upstream "";
    set $app_name "";
    content_by_lua_file conf/lua_proxy.lua;
}

The question is: how can I NOT redirect? I tried with @test instead of /test but had no success. Is there any other way to do that?

The lua looks like:

[...]
ngx.var.upstream = "192.168.1.10:9999"
res = ngx.location.capture('/test' .. ngx.var.request_uri, {share_all_vars = true})
[...]

Regards, Jędrzej Nowak

On Mon, Sep 23, 2013 at 9:16 AM, Jedrzej Nowak wrote:
> Hey,
>
> Thanks for your reply. Is there any good example of the thing I want to
> achieve?
>
> Shall I create something like:
>
> location @blah {
>     # here the "normal" configuration for LB
> }
>
> location / {
>     # here the LUA logic
>     # probably with share_all_vars=true
>     # subrequest to @blah
> }
>
> Is something like that recommended, or how should it be done?
>
> Regards,
> Jędrzej Nowak
>
> On Fri, Sep 20, 2013 at 2:34 AM, Yichun Zhang (agentzh) wrote:
>> Hello!
>>
>> On Wed, Sep 18, 2013 at 6:09 AM, Jedrzej Nowak wrote:
>> > The question is how can I achieve proxy_next_upstream.
>> > Preferably I would like to return to lua with an error reason.
>> > If the only way is to return several servers in upstream from lua, how to do
>> > so ?
>> > >> >> If you want to return the control back to Lua and let your Lua code do >> the upstream retries or something, then you should use the >> ngx.location.capture() API instead to initiate an Nginx subrequest to >> ngx_proxy: >> >> http://wiki.nginx.org/HttpLuaModule#ngx.location.capture >> >> Regards, >> -agentzh >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Sep 24 13:12:54 2013 From: nginx-forum at nginx.us (robw) Date: Tue, 24 Sep 2013 09:12:54 -0400 Subject: limit_req and @named locations Message-ID: <6d0eabbd4fe07160cf528a7fc58c49ff.NginxMailingListEnglish@forum.nginx.org> Hi list I am experiencing some problems with a rate-limiting setup. I have a "global" limit_req declared in my http block. I also have additional limit_req declarations in various locations, both @named and unnamed, to provide proper protection to different backend endpoints. It seems that additional limit_req is working fine in unnamed locations, but being ignored in @named locations. I've linked to an example config exhibiting the problem: https://gist.github.com/anonymous/bf347d5302b3463f970b If I remove the limit_req in the http block, the @dynamic limit_req works perfectly. If I enable limit_req in the http block, the @dynamic limit_req is ignored. In both cases, the limit_req in the unnamed location works perfectly! Am I approaching this in the wrong way? How can I make limit_req work properly in the @dynamic location? 
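For reference, the intent above can also be expressed so that any given request only ever meets one limit_req: keep the limits in locations that are the final match for the request. A sketch with hypothetical zone names, rates and prefixes (the limit_req_zone lines belong in the http block):

```nginx
limit_req_zone $binary_remote_addr zone=static:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=5r/s;

server {
    location /static/ {               # hypothetical prefix, final match
        limit_req zone=static burst=20;
        root /srv/www;
    }
    location /app/ {                  # hypothetical prefix, final match
        limit_req zone=dynamic burst=5;
        proxy_pass http://backend;    # hypothetical upstream
    }
}
```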
thanks Rob Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243093,243093#msg-243093 From florian at narrans.de Tue Sep 24 14:24:31 2013 From: florian at narrans.de (Florian Obser) Date: Tue, 24 Sep 2013 14:24:31 +0000 Subject: nginx.conf(5) man page Message-ID: <20130924142431.GB7501@michelangelo.narrans.de> Hi, OpenBSD is working on replacing the (heavily patched) apache 1.3 in base with nginx. During that work the question was raised if we can have a nginx.conf(5) man page. As a proof of concept I put a perl script together which scrapes the pages on http://nginx.org/en/docs/ (below "Modules reference") and generates a mdoc(7) file. * what's the license of the documentation? * is the documentation on http://nginx.org/en/docs/ generated from some sort of source file? Parsing the html works reasonably well but is not optimal. * would there be interest to include a man page into the distribution once it's ready? Thanks, Florian -- I'm not entirely sure you are real. From maxim at nginx.com Tue Sep 24 14:35:32 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 24 Sep 2013 18:35:32 +0400 Subject: nginx.conf(5) man page In-Reply-To: <20130924142431.GB7501@michelangelo.narrans.de> References: <20130924142431.GB7501@michelangelo.narrans.de> Message-ID: <5241A334.6080204@nginx.com> Hi Florian. On 9/24/13 6:24 PM, Florian Obser wrote: > Hi, > OpenBSD is working on replacing the (heavily patched) apache 1.3 in > base with nginx. During that work the question was raised if we can > have a nginx.conf(5) man page. Nice to hear. > As a proof of concept I put a perl script together which scrapes the > pages on http://nginx.org/en/docs/ (below "Modules reference") and > generates a mdoc(7) file. > * what's the license of the documentation? It's under the same license as the whole nginx distribution: http://nginx.org/LICENSE > * is the documentation on http://nginx.org/en/docs/ generated from some > sort of source file? 
Parsing the html works reasonably well but is not > optimal. It is generated from xml files. The whole nginx.org repository is public (see "Source Code" section): http://nginx.org/en/download.html Also, you can explore it online: http://trac.nginx.org/nginx/browser/nginx_org Please note that http://nginx.org/en/docs/ has documentation for nginx f/oss and nginx-plus, our commercial product under commercial license. The features available in nginx-plus only have an appropriate note in the documentation. > * would there be interest to include a man page into the distribution > once it's ready? I think it's a good idea while we manage to keep a single source for both nginx.org docs and man page. -- Maxim Konovalov http://nginx.com From mdounin at mdounin.ru Tue Sep 24 14:38:32 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Sep 2013 18:38:32 +0400 Subject: limit_req and @named locations In-Reply-To: <6d0eabbd4fe07160cf528a7fc58c49ff.NginxMailingListEnglish@forum.nginx.org> References: <6d0eabbd4fe07160cf528a7fc58c49ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130924143832.GQ2170@mdounin.ru> Hello! On Tue, Sep 24, 2013 at 09:12:54AM -0400, robw wrote: > Hi list > > I am experiencing some problems with a rate-limiting setup. > > I have a "global" limit_req declared in my http block. > > I also have additional limit_req declarations in various locations, both > @named and unnamed, to provide proper protection to different backend > endpoints. > > It seems that additional limit_req is working fine in unnamed locations, but > being ignored in @named locations. > > I've linked to an example config exhibiting the problem: > https://gist.github.com/anonymous/bf347d5302b3463f970b > > If I remove the limit_req in the http block, the @dynamic limit_req works > perfectly. If I enable limit_req in the http block, the @dynamic limit_req > is ignored. In both cases, the limit_req in the unnamed location works > perfectly! 
> > Am I approaching this in the wrong way? How can I make limit_req work > properly in the @dynamic location? The limit_req directives are executed only once per request lifetime. If a request was checked against any limits configured in a location - regardless of future internal redirects further limits won't be checked. This is a simple way to protect requests from checking against the same limits again and again after redirections. In your configuration limit_req STATIC_LIMIT_REQ is checked for all requests matching location /, and requests are redirected to @dynamic location via try_files only after this. There are two basic ways to fix things: 1) Select a location before limit_req's are executed. Recommended way is to use separate URL prefixes and use prefix string locations, like you already do with /admin. (Alternatively, you may also use return + error_page to switch to a named location as rewrite module directives are executed before limit_req's, but this kind of defeats simplicity of try_files.) 2) Avoid using any limits in locations which aren't a final match. This is essentially what happens if you remove the limit_req in the http block. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Sep 24 14:44:07 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Sep 2013 18:44:07 +0400 Subject: nginx.conf(5) man page In-Reply-To: <5241A334.6080204@nginx.com> References: <20130924142431.GB7501@michelangelo.narrans.de> <5241A334.6080204@nginx.com> Message-ID: <20130924144407.GR2170@mdounin.ru> Hello! On Tue, Sep 24, 2013 at 06:35:32PM +0400, Maxim Konovalov wrote: [...] > > * would there be interest to include a man page into the distribution > > once it's ready? > > I think it's a good idea while we manage to keep a single source for > both nginx.org docs and man page. I actually don't think it's a good idea. 
We intentionally have the documentation separated from the source code - it allows to do releases and documentation editing/updating separately (and this happens often). A separate distribution for a manpage might be a better idea. -- Maxim Dounin http://nginx.org/en/donation.html From maxim at nginx.com Tue Sep 24 14:45:50 2013 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 24 Sep 2013 18:45:50 +0400 Subject: nginx.conf(5) man page In-Reply-To: <20130924144407.GR2170@mdounin.ru> References: <20130924142431.GB7501@michelangelo.narrans.de> <5241A334.6080204@nginx.com> <20130924144407.GR2170@mdounin.ru> Message-ID: <5241A59E.8050509@nginx.com> On 9/24/13 6:44 PM, Maxim Dounin wrote: > Hello! > > On Tue, Sep 24, 2013 at 06:35:32PM +0400, Maxim Konovalov wrote: > > [...] > >>> * would there be interest to include a man page into the distribution >>> once it's ready? >> >> I think it's a good idea while we manage to keep a single source for >> both nginx.org docs and man page. > > I actually don't think it's a good idea. We intentionally > have the documentation separated from the source code - it allows > to do releases and documentation editing/updating separately (and > this happens often). A separate distribution for a manpage might > be a better idea. > Ability to generate man page from xml files doesn't hurt. -- Maxim Konovalov http://nginx.com From mdounin at mdounin.ru Tue Sep 24 14:54:17 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 Sep 2013 18:54:17 +0400 Subject: nginx.conf(5) man page In-Reply-To: <5241A59E.8050509@nginx.com> References: <20130924142431.GB7501@michelangelo.narrans.de> <5241A334.6080204@nginx.com> <20130924144407.GR2170@mdounin.ru> <5241A59E.8050509@nginx.com> Message-ID: <20130924145417.GS2170@mdounin.ru> Hello! On Tue, Sep 24, 2013 at 06:45:50PM +0400, Maxim Konovalov wrote: > On 9/24/13 6:44 PM, Maxim Dounin wrote: > > Hello! > > > > On Tue, Sep 24, 2013 at 06:35:32PM +0400, Maxim Konovalov wrote: > > > > [...] 
> > > >>> * would there be interest to include a man page into the distribution > >>> once it's ready? > >> > >> I think it's a good idea while we manage to keep a single source for > >> both nginx.org docs and man page. > > > > I actually don't think it's a good idea. We intentionally > > have the documentation separated from the source code - it allows > > to do releases and documentation editing/updating separately (and > > this happens often). A separate distribution for a manpage might > > be a better idea. > > > Ability to generate man page from xml files doesn't hurt. Sure, as part of the nginx.org repository. But I believe the original question was about including the resulting nginx.conf(5) manpage into nginx distribution, which intentionally doesn't include documentation (except minimal nginx(8) manpage). -- Maxim Dounin http://nginx.org/en/donation.html From gchodos at gmail.com Tue Sep 24 17:55:22 2013 From: gchodos at gmail.com (Gary Chodos) Date: Tue, 24 Sep 2013 13:55:22 -0400 Subject: Proxy to upstream HTTPS server *without* any keys/certs in nginx In-Reply-To: References: Message-ID: Hello, We are researching which tools would allow us to do what is described in the subject. After searching the archives here and in other places like stackoverflow, there seems to be conflicting info on whether this is possible. Perhaps it was not doable early in nginx's life but is now? Based on the below link (which notes the upstream and reverse proxy modules), can we now have nginx listen on 443, and pass browser requests to it on to an upstream HTTPS server which actually serves content, has the certs/keys and takes care of SSL handshake etc? In our use case we cannot house any keys/certs on the nginx box so must proxy everything (including SSL) to the upstream https box, as if the end user (who makes the request from the browser) hit the upstream server directly, and doesn't have any missing or mismatching certificate errors. 
http://stackoverflow.com/questions/15394904/nginx-load-balance-with-upstream-ssl/15400260#15400260 I hope my question is clear. Thanks for your help. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Tue Sep 24 18:23:54 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 24 Sep 2013 19:23:54 +0100 Subject: Proxy to upstream HTTPS server *without* any keys/certs in nginx In-Reply-To: References: Message-ID: On 24 Sep 2013 18:55, "Gary Chodos" wrote: > > Hello, > > We are researching which tools would allow us to do what is described in the subject. > > After searching the archives here and in other places like stackoverflow, there seems to be conflicting info on whether this is possible. Perhaps it was not doable early in nginx's life but is now? Based on the below link (which notes the upstream and reverse proxy modules), can we now have nginx listen on 443, and pass browser requests to it on to an upstream HTTPS server which actually serves content, has the certs/keys and takes care of SSL handshake etc? I don't believe so, no. > In our use case we cannot house any keys/certs on the nginx box so must proxy everything (including SSL) to the upstream https box, as if the end user (who makes the request from the browser) hit the upstream server directly, and doesn't have any missing or mismatching certificate errors. It sounds like you just need a TCP-layer proxy. I suggest HAProxy in TCP mode. > http://stackoverflow.com/questions/15394904/nginx-load-balance-with-upstream-ssl/15400260#15400260 I don't believe the answer there is correct. I don't believe you can reverse-proxy an SSL connection into nginx without terminating it first, using local certs. I will happily be shown I'm wrong, however :-) HTH, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... 
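A minimal sketch of the TCP-layer pass-through suggested above (HAProxy syntax; the addresses are hypothetical). Because the proxy only forwards raw TLS bytes and never terminates the connection, it needs no keys or certs, and any certificate matching happens entirely between the browser and the upstream:

```haproxy
# Forward raw TLS bytes; no certs or keys needed on this box.
frontend https_in
    bind *:443
    mode tcp
    option tcplog
    default_backend https_upstream

backend https_upstream
    mode tcp
    server app1 192.0.2.10:443 check   # hypothetical upstream HTTPS server
```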
URL:

From nginx-forum at nginx.us Tue Sep 24 19:23:52 2013 From: nginx-forum at nginx.us (robw) Date: Tue, 24 Sep 2013 15:23:52 -0400 Subject: limit_req and @named locations In-Reply-To: <6d0eabbd4fe07160cf528a7fc58c49ff.NginxMailingListEnglish@forum.nginx.org> References: <6d0eabbd4fe07160cf528a7fc58c49ff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4ad52d42a1d135979e727de43ee712a9.NginxMailingListEnglish@forum.nginx.org>

Thank you for clarifying

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243093,243105#msg-243105

From etienne.champetier at free.fr Tue Sep 24 20:19:51 2013 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Tue, 24 Sep 2013 22:19:51 +0200 (CEST) Subject: Nginx convert UTF8 request to ISO-8859-1 In-Reply-To: <1826574006.549782708.1380050539419.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <447602800.549912033.1380053991384.JavaMail.root@zimbra65-e11.priv.proxad.net>

Hi,

IE8 (maybe also IE9/IE10) doesn't auto-encode URLs (Firefox does), and can make UTF-8 requests. If you put "http:///?test=ééé" in the address bar, the é will not be html encoded, and will be sent encoded in utf8 (c3a9 in hex, i've checked with wireshark).

The problem is that the fastcgi backend (mono webapp, unix socket) gets the é in ISO-8859-1 (e9 in hex, i've checked with socat).

Is it normal that nginx (1.4.1) converts the request encoding from UTF-8 to ISO-8859-1? Is there a workaround (linux/nginx conf)? (I haven't found any yet.) What do the RFCs say? (HTTP request encoding, FastCGI param encoding.)

I'm not using rewrite in nginx, i'm just passing the request to a fastcgi unix socket. (I will provide a minimal test conf / do more tests tomorrow.)

To reproduce:

Request: curl -O "http:///?test=ééé"
Request capture: sudo tcpdump -X 'port 80'
Backend capture: sudo socat -t100 -x -v UNIX-LISTEN:/path/to/sock,mode=777,reuseaddr,fork UNIX-CONNECT:/path/to/sock.original
http://superuser.com/questions/484671/can-i-monitor-a-local-unix-domain-socket-like-tcpdump

Thanks in advance, Etienne

From agentzh at gmail.com Tue Sep 24 20:28:06 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 24 Sep 2013 13:28:06 -0700 Subject: ngx_lua + proxy_next_upstream In-Reply-To: References: Message-ID:

Hello!

On Tue, Sep 24, 2013 at 2:35 AM, Jedrzej Nowak wrote:
>
> The question is how can I do NOT redirect ?

Well, "rewrite ... break" is not a redirect. It is just an internal URI rewrite. That's all.

> I tried with @test instead of
> /test but no success. Is there any other way to do that ?
>

Named locations can only work with internal redirects. They do not support Nginx subrequests. You can ask the Nginx team to add support for that to the Nginx core.

> [...]
> ngx.var.upstream = "192.168.1.10:9999"
> res = ngx.location.capture('/test' .. ngx.var.request_uri,
> {share_all_vars = true})
> [...]
>

Please note that setting "share_all_vars" to true for your subrequests is generally a bad idea, because there can be really bad side effects. In your example, all you need is to enable the "copy_all_vars" option.
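Applied to the configuration from earlier in the thread, that change is a one-option swap in the capture call (the backend address is the hypothetical one from the thread, and the inline Lua stands in for the original lua_proxy.lua file):

```nginx
location /test {
    internal;
    rewrite ^/test(.*)$ $1 break;
    proxy_pass http://$upstream;
}

location / {
    set $upstream "";
    content_by_lua '
        ngx.var.upstream = "192.168.1.10:9999"
        -- copy_all_vars gives the subrequest its own copies of the
        -- parent variables ($upstream included), without the write-back
        -- side effects of share_all_vars
        local res = ngx.location.capture("/test" .. ngx.var.request_uri,
                                         { copy_all_vars = true })
        ngx.status = res.status
        ngx.print(res.body)
    ';
}
```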
BTW, you may want to post such questions to the openresty-en mailing list instead: https://groups.google.com/group/openresty-en Best regards, -agentzh From francis at daoine.org Tue Sep 24 21:47:41 2013 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Sep 2013 22:47:41 +0100 Subject: Nginx convert UTF8 request to ISO-8859-1 In-Reply-To: <447602800.549912033.1380053991384.JavaMail.root@zimbra65-e11.priv.proxad.net> References: <1826574006.549782708.1380050539419.JavaMail.root@zimbra65-e11.priv.proxad.net> <447602800.549912033.1380053991384.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <20130924214741.GI19345@craic.sysops.org> On Tue, Sep 24, 2013 at 10:19:51PM +0200, etienne.champetier at free.fr wrote: Hi there, > If you put "http:///?test=???" in the address bar, the ? will not > be html encoded, and will be sent encoded in utf8 (c3a9 in hex, i've checked with wireshark) > > The problem is that the fastcgi backend (mono webapp, unix socket) > get the ? in ISO-8859-1 (e9 in hex, i've checked with socat) When I use: == server { listen 8080; location = / { fastcgi_param QUERY_STRING $query_string; fastcgi_pass 127.0.0.1:9; } } == and tcpdump -nn -i any -X -s 0 port 8080 or port 9 and curl http://localhost:8080/?key= followed by some bytes, I don't see any difference in the bytes in the to-8080 "GET /?key=" and the to-9 "QUERY_STRINGkey=" parts of the tcpdump output. What am I doing that is different to you? f -- Francis Daly francis at daoine.org From pigmej at gmail.com Tue Sep 24 22:39:48 2013 From: pigmej at gmail.com (pigmej at gmail.com) Date: Tue, 24 Sep 2013 22:39:48 +0000 Subject: ngx_lua + proxy_next_upstream In-Reply-To: References: Message-ID: <1217201665-1380062390-cardhu_decombobulator_blackberry.rim.net-1644375861-@b3.c19.bise7.blackberry> Yeah, I meant rewrite obviously... I would still prefer to not have even rewrite if it's possible. I wonder why share_all_vars is not safe. Any serious consideration / example / use case ? 
Why it's better to copy them instead ? (What about memory footprint etc). And I will probably send the questions to openresty group too. Thanks for your replies. -----Original Message----- From: "Yichun Zhang (agentzh)" Sender: nginx-bounces at nginx.orgDate: Tue, 24 Sep 2013 13:28:06 To: Reply-To: nginx at nginx.org Subject: Re: ngx_lua + proxy_next_upstream Hello! On Tue, Sep 24, 2013 at 2:35 AM, Jedrzej Nowak wrote: > > The question is how can I do NOT redirect ? Well, "rewrite ... break" is not a redirect. It is just an internal URI rewrite. That's all. > I tried with @test instead of > /test but no success. Is there any other way to do that ? > Named locations can only work with internal redirects. They do not support Nginx subrequests. You can ask the Nginx team to add support for that to the Nginx core. > [...] > ngx.var.upstream = "192.168.1.10:9999" > res = ngx.location.capture('/test' .. ngx.var.request_uri, > {share_all_vars = true}) > [...] > Please note that setting the "share_all_vars" to true for your subrequests are genreally a bad idea. Because there could be really bad side effects. In your example, all you need is to enable the "copy_all_vars" option. BTW, you may want to post such questions to the openresty-en mailing list instead: https://groups.google.com/group/openresty-en Best regards, -agentzh _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From B22173 at freescale.com Wed Sep 25 00:40:04 2013 From: B22173 at freescale.com (Myla John-B22173) Date: Wed, 25 Sep 2013 00:40:04 +0000 Subject: JSON REST APIs Message-ID: Hi, Are there any JSON APIs defined for Nginx Configuration? Regards, John -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From agentzh at gmail.com Wed Sep 25 06:21:14 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 24 Sep 2013 23:21:14 -0700 Subject: ngx_lua + proxy_next_upstream In-Reply-To: <1217201665-1380062390-cardhu_decombobulator_blackberry.rim.net-1644375861-@b3.c19.bise7.blackberry> References: <1217201665-1380062390-cardhu_decombobulator_blackberry.rim.net-1644375861-@b3.c19.bise7.blackberry> Message-ID:

Hello!

On Tue, Sep 24, 2013 at 3:39 PM, pigmej wrote:
> Yeah, I meant rewrite obviously... I would still prefer to not have even rewrite if it's possible.
>

It's not worth saving at all. If you take an on-CPU Flame Graph for your loaded Nginx worker processes, you'll never even see it on the graph. You'd better put your optimization efforts into something that is truly measurable. See also https://github.com/agentzh/nginx-systemtap-toolkit#ngx-sample-bt

> I wonder why share_all_vars is not safe. Any serious consideration / example / use case ? Why is it better to copy them instead ? (What about memory footprint etc).
>

Because of side effects involved in sharing variables between the parent request and the subrequest. You can find such scary examples in my Nginx tutorials (still under work!): http://openresty.org/download/agentzh-nginx-tutorials-en.html

> And I will probably send the questions to openresty group too.
>

It's just that you're more likely to get more responses more quickly for such questions. That's all.

Regards, -agentzh

From nginx-forum at nginx.us Wed Sep 25 07:47:39 2013 From: nginx-forum at nginx.us (pradeepmohan) Date: Wed, 25 Sep 2013 03:47:39 -0400 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: Message-ID: <4e2b9877a54f791623352eab2754928a.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm getting the following error while adding the module nginx_udplog_module-1.0.0 to my nginx build. Please note: nginx version nginx-1.0.4.tar.gz.

/root/SETUP-PACKAGE-BCK/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function 'ngx_udplog_init_endpoint':
/root/SETUP-PACKAGE-BCK/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:284: error: incompatible types in assignment
/root/SETUP-PACKAGE-BCK/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function 'ngx_http_udplogger_send':
/root/SETUP-PACKAGE-BCK/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: invalid type argument of '->'
/root/SETUP-PACKAGE-BCK/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: incompatible type for argument 2 of 'ngx_log_error_core'
make[1]: *** [objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o] Error 1
make[1]: Leaving directory `/root/nginx-1.0.4'
make: *** [build] Error 2

Thanks, PradeepMohan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223048,243119#msg-243119

From emailgrant at gmail.com Wed Sep 25 08:07:05 2013 From: emailgrant at gmail.com (Grant) Date: Wed, 25 Sep 2013 01:07:05 -0700 Subject: All webapps behind nginx reverse proxy by port? Message-ID:

I'm thinking of using nginx as a reverse proxy for all of my administrative webapps so I can keep them under nice tight control. Is this a good idea? Would you use port numbers to separate each of them?

- Grant

From lists at ruby-forum.com Wed Sep 25 08:18:15 2013 From: lists at ruby-forum.com (pacolotero pacolotero) Date: Wed, 25 Sep 2013 10:18:15 +0200 Subject: Nginx static files Message-ID: <85906fc29cfb2cb534247816b646358d@ruby-forum.com>

I have a Rails server and I would like to serve all these static files (*.jpg, *.png, *.css, *.js, *.gif, *.jpeg) with nginx. This is my current configuration:

server {
    listen 443;
    server_name emotionalworld.co www.emotionalworld.co;
    rewrite ^ http://$server_name$request_uri?
permanent; # enforce https
}

server {
    listen 80;
    server_name emotionalworld.co www.emotionalworld.co;

    access_log /var/log/nginx/chili-proxy-access;
    error_log /var/log/nginx/chili-proxy-error;
    expires epoch;
    include includes/proxy.include;
    proxy_redirect off;
    root /home/jaranda/jaranda-emotionalworld;

    # Send sensitive stuff via https
    # rewrite ^/login(.*) https://your.domain.here$request_uri permanent;
    # rewrite ^/my/account(.*) https://your.domain.here$request_uri permanent;
    # rewrite ^/my/password(.*) https://your.domain.here$request_uri permanent;
    # rewrite ^/admin(.*) https://your.domain.here$request_uri permanent;

    location / {
        # auth_basic "Acceso de solo invitados";
        # auth_basic_user_file /etc/nginx/htpasswd;
        try_files $uri/index.html $uri.html $uri @fallback;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:9000;
        # proxy_set_header X-Forwarded-Protocol https;
        # proxy_connect_timeout 15;
        # proxy_redirect off;
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Proto https;
    }
}

-- Posted via http://www.ruby-forum.com/.

From emailgrant at gmail.com Wed Sep 25 08:18:23 2013 From: emailgrant at gmail.com (Grant) Date: Wed, 25 Sep 2013 01:18:23 -0700 Subject: All webapps behind nginx reverse proxy by port? In-Reply-To: References: Message-ID:

> I'm thinking of using nginx as a reverse proxy for all of my
> administrative webapps so I can keep them under nice tight control.
> Is this a good idea? Would you use port numbers to separate each of
> them?
>
> - Grant

On second thought, this wouldn't be a reverse proxy setup in every instance. Some webapps could be served straight from nginx, but is configuring them into separate nginx servers a good idea for more control? I'm trying to find out if I'm thinking straight about this.
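One common shape for this, with hypothetical ports, networks and paths: a separate server block per administrative app, so each gets its own access rules, and apps that are plain files skip the proxy entirely:

```nginx
# Hypothetical layout: one server block per administrative webapp.
server {
    listen 8081;                      # e.g. a proxied monitoring UI
    allow 10.0.0.0/8;                 # admin network only
    deny all;
    location / {
        proxy_pass http://127.0.0.1:3001;   # hypothetical backend
    }
}

server {
    listen 8082;                      # e.g. a static admin tool
    allow 10.0.0.0/8;
    deny all;
    root /srv/admin-tool;             # served directly by nginx, no proxy
}
```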
- Grant From etienne.champetier at free.fr Wed Sep 25 11:31:27 2013 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Wed, 25 Sep 2013 13:31:27 +0200 (CEST) Subject: Nginx convert UTF8 request to ISO-8859-1 In-Reply-To: <20130924214741.GI19345@craic.sysops.org> Message-ID: <228433250.551809991.1380108687765.JavaMail.root@zimbra65-e11.priv.proxad.net> Hi, ----- Original message ----- > From: "Francis Daly" > To: nginx at nginx.org > Sent: Tuesday, 24 September 2013 23:47:41 > Subject: Re: Nginx convert UTF8 request to ISO-8859-1 > > On Tue, Sep 24, 2013 at 10:19:51PM +0200, etienne.champetier at free.fr > wrote: > > Hi there, > > > If you put "http:///?test=ééé" in the address bar, the é will not > > be html encoded, and will be sent encoded in utf8 (c3a9 in hex, > > i've checked with wireshark) > > > > The problem is that the fastcgi backend (mono webapp, unix socket) > > get the é
in ISO-8859-1 (e9 in hex, i've checked with socat) > > When I use: > > == > server { > listen 8080; > location = / { > fastcgi_param QUERY_STRING $query_string; > fastcgi_pass 127.0.0.1:9; > } > } > == > > and > > tcpdump -nn -i any -X -s 0 port 8080 or port 9 > > and > > curl http://localhost:8080/?key= > > followed by some bytes, I don't see any difference in the bytes in > the to-8080 "GET /?key=" and the to-9 "QUERY_STRINGkey=" parts of the > tcpdump output. > > What am I doing that is different to you? Sorry today i'm not able to reproduce my 'bug' Also not able to send utf8 url with IE We (me and my colleague) must have misread the wireshark dump... (http://en.wikipedia.org/wiki/User_error#PEBKAC) With curl & IE i've tested nginx works perfectly (UTF8 in - UTF8 out / Latin1 in - Latin1 out) Thanks and sorry > > f > -- > Francis Daly francis at daoine.org > > From nginx-forum at nginx.us Wed Sep 25 12:13:11 2013 From: nginx-forum at nginx.us (optimum.dulopin) Date: Wed, 25 Sep 2013 08:13:11 -0400 Subject: 40 bad request and UTF8 Message-ID: <093ce4d330e94cd828a6e425eae32393.NginxMailingListEnglish@forum.nginx.org> Hi, Im using nginx and rails for my site which contains url with georgian letters ie განცხადებები
so something like http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98 It is mainly working perfectly but sometimes I receive request with truncated url ie 1 - http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%9 (as you can see it should be something after %9) or 2 - http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98?mc=mini+aipadi&search=%E1%83%AB%E1%83%98%E1%83%94%E1%83%91%E1%83%9 I succeeded to deal when there is no get parameters (first url above) and make in that case a redirection to / when this happen, this line is added to nginx error.log 2013/09/24 00:46:53 [alert] 63547#0: *19359227 pcre_exec() failed: -10 on "/ka/განცხადებები" using "", client: aa.bb.cc.dd, server: gancxadebebi.ge, request: "GET /ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%8 HTTP/1.1", host: "gancxadebebi.ge" but for second url, which have get parameter truncated, I can not handle that, which generates a 400 bad request page. such request added this line in nginx access.log aa.bb.cc.dd - - [24/Sep/2013:00:48:47 +0200] "GET /ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98?mc=mini+aipadi&search=%E1%83%AB%E1%83%98%E1%83%94%E1%83%91%E1%83% HTTP/1.1" 400 5 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36" does this mean that nginx accepted the request and then rails couldn't resolve it ? I don't know if problem come from rails or from nginx.
For first url, I solved it in nginx conf here part of my conf access_log /var/log/nginx/gancx.access.log; error_log /var/log/nginx/gancx.error.log; client_body_in_file_only clean; client_body_buffer_size 32K; charset UTF-8; source_charset UTF-8; client_max_body_size 300M; error_page 400 404 = @notfound; error_page 500 502 504 = @server_error; error_page 503 = @maintenance; location @notfound { rewrite ^(.*)$ $scheme://$host permanent; } location @server_error { rewrite ^(.*)$ $scheme://$host permanent; } location @maintenance { rewrite ^(.*)$ $scheme://$host permanent; } sendfile on; send_timeout 300s; location / { proxy_pass http://gancx; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; charset UTF-8; client_max_body_size 7m; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } thanks for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243130,243130#msg-243130
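The truncation described in the thread above can be reproduced outside nginx: percent-decoding the cut-off URL yields an incomplete UTF-8 sequence, which is exactly the kind of input a UTF-8-strict backend or PCRE in UTF-8 mode will reject. A minimal Python sketch (the byte values are taken from the URLs quoted above; the snippet is illustrative and not part of the original posts):

```python
from urllib.parse import unquote_to_bytes

# Intact tail of the path: %E1%83%98 is the complete 3-byte UTF-8
# encoding of a Georgian letter (U+10D8).
intact = "%E1%83%91%E1%83%98"

# Truncated form as seen in the logs: the final escape is cut to "%9",
# so the preceding %E1%83 is left without its last continuation byte.
truncated = "%E1%83%91%E1%83%9"

print(unquote_to_bytes(intact).decode("utf-8"))  # decodes cleanly

try:
    unquote_to_bytes(truncated).decode("utf-8")
except UnicodeDecodeError as exc:
    # This invalid byte sequence is what produces the 400 responses
    # and the PCRE_ERROR_BADUTF8 (-10) alert discussed in the thread.
    print("invalid UTF-8 after truncation:", exc.reason)
```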
From nginx-forum at nginx.us Wed Sep 25 12:28:27 2013 From: nginx-forum at nginx.us (foxbin) Date: Wed, 25 Sep 2013 08:28:27 -0400 Subject: [Ask for help] Questions about proxy_cache and ssi In-Reply-To: References: Message-ID: <199a54e8e316451aa874c6eab9134fba.NginxMailingListEnglish@forum.nginx.org> hi,mex : this is my nginx config: user nobody; worker_rlimit_nofile 65500; worker_processes 8; error_log logs/error.log; events { use epoll; worker_connections 65500; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip off; client_max_body_size 1g; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; ssi on; ssi_silent_errors on; ssi_types text/shtml; index index.shtml index.html index.htm; access_log logs/access.log ; proxy_temp_path /cache/proxy_temp; proxy_cache_path /cache/proxy_cache levels=1:2 keys_zone=tmp_cache:2000m inactive=10000d max_size=200G; proxy_next_upstream error timeout invalid_header http_504 http_500 http_502 http_503; upstream backend { server 192.168.1.100; server 192.168.1.101; server 192.168.1.102 backup; } server { server_name www.x.com ; location / { proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://backend; proxy_cache tmp_cache; proxy_cache_key $host$uri; proxy_cache_valid 1000d; } location ~ /purge(/.*) { allow 192.168.1.254; deny all; proxy_cache_purge tmp_cache $host$1$is_args$args; } } shtml files: www.x.com/1.shtml www.x.com/2.shtml www.x.com/3.shtml www.x.com/4.shtml www.x.com/5.shtml www.x.com/6.shtml ......more include files: http://www.x.com/purge/include/foot.html old, i am include info ! Edit this file: http://www.x.com/include/include/foot.html new, i am include info ! 
<\html> I use ngx_cache_purge to clear the cache : http://www.x.com/purge/include/foot.html But do not take effect cache these pages : open http://www.x.com/1.shtml the content is : old, i am include info ! <\html> open http://www.x.com/2.shtml the content is : old, i am include info ! <\html> I have to clear the cache in order to take effect : http://www.x.com/purge/1.shtml http://www.x.com/purge/2.shtml open http://www.x.com/1.shtml the content is : hello world1 ! new, i am include info ! <\html> open http://www.x.com/2.shtml the content is : hello world2 ! new, i am include info ! <\html> If I have a lot shtml files ,Must be more clear the cache :( http://www.x.com/purge/1.shtml http://www.x.com/purge/2.shtml http://www.x.com/purge/3.shtml http://www.x.com/purge/4.shtml http://www.x.com/purge/5.shtml http://www.x.com/purge/6.shtml .......more I just need to clear a cache: http://x.com/purge/include/foot.html , all SHTML pages to take effect. Is there a way to solve? Thanks in advance :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243085,243088#msg-243088 From nginx-forum at nginx.us Wed Sep 25 12:29:12 2013 From: nginx-forum at nginx.us (liyucmh) Date: Wed, 25 Sep 2013 08:29:12 -0400 Subject: from time to time we can't reload the nginx service and get this error... /var/run/nginx.pid: No such file or directory In-Reply-To: <77432c470909101523p7c336be2r87c3e29b0d4b5b14@mail.gmail.com> References: <77432c470909101523p7c336be2r87c3e29b0d4b5b14@mail.gmail.com> Message-ID: Pls help I have the problem, how to do?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,6380,243049#msg-243049 From contact at jpluscplusm.com Wed Sep 25 12:34:37 2013 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 25 Sep 2013 13:34:37 +0100 Subject: Nginx static files In-Reply-To: <85906fc29cfb2cb534247816b646358d@ruby-forum.com> References: <85906fc29cfb2cb534247816b646358d@ruby-forum.com> Message-ID: On 25 Sep 2013 09:18, "pacolotero pacolotero" wrote: > > I have a rails server and I would like to serve all this static files *. > jpg, *. png, *. css, *. js, *. gif, *. jpeg with nginx > > This is my actual configuration You appear to have forgotten to ask a question! I mean, I assume you've already tried some things out and aren't just expecting someone to write the new config for you ... ;-) Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 25 14:04:33 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Sep 2013 18:04:33 +0400 Subject: 40 bad request and UTF8 In-Reply-To: <093ce4d330e94cd828a6e425eae32393.NginxMailingListEnglish@forum.nginx.org> References: <093ce4d330e94cd828a6e425eae32393.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20130925140433.GX2170@mdounin.ru> Hello! On Wed, Sep 25, 2013 at 08:13:11AM -0400, optimum.dulopin wrote: > Hi, > > Im using nginx and rails for my site which contains url with georgian > letters ie ???????????? 
so something like > http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98 > It is mainly working perfectly but sometimes I receive request with > truncated url ie > 1 - > http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%9 > (as u can see it should be something after %9) > or > 2 - > http://gancxadebebi.ge/ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98?mc=mini+aipadi&search=%E1%83%AB%E1%83%98%E1%83%94%E1%83%91%E1%83%9 > > I succeeded to deal when there is no get parameters (first url above) and > make in that case a redirection to / Hmm, I tend to think it's a bug that (1) doesn't generate 400 Bad Request. It should. > when this happen, this line is added to nginx error.log > 2013/09/24 00:46:53 [alert] 63547#0: *19359227 pcre_exec() failed: -10 on > "/ka/????????????" using "", client: aa.bb.cc.dd, server: gancxadebebi.ge, > request: "GET > /ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%8 > HTTP/1.1", host: "gancxadebebi.ge" The -10 from pcre_exec() is PCRE_ERROR_BADUTF8, it shouldn't happen unless you've explicitly used "(*UTF8)" in your PCRE patterns. It's very strange you see it with the config provided. > but for second url, which have get parameter truncated, I can not handle > that which generate a 400 bad request page. 
> such request added this line in nginx access.log > aa.bb.cc.dd - - [24/Sep/2013:00:48:47 +0200] "GET > /ka/%E1%83%92%E1%83%90%E1%83%9C%E1%83%AA%E1%83%AE%E1%83%90%E1%83%93%E1%83%94%E1%83%91%E1%83%94%E1%83%91%E1%83%98?mc=mini+aipadi&search=%E1%83%AB%E1%83%98%E1%83%94%E1%83%91%E1%83% > HTTP/1.1" 400 5 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36" > > does this mean that nginx accepted the request and then rails coudnt resolve > it ? By itself nginx doesn't try to urldecode request arguments (in contrast to URI path, which is urldecoded for location matching), and because of this it doesn't try to detect encoding violations in request arguments. That is, most likely you are right and the error comes from your backend. You may try intercepting errors using proxy_intercept_errors, but actually I wouldn't recommend doing it. Configuring an error_page for 400 Bad Request isn't a good idea, it might hurt. -- Maxim Dounin http://nginx.org/en/donation.html p.s. Please don't duplicate the same question to the same mailing list via multiple forum-like interfaces. It's still the same mailing list. Thank you for cooperation. From gchodos at gmail.com Wed Sep 25 14:57:42 2013 From: gchodos at gmail.com (Gary Chodos) Date: Wed, 25 Sep 2013 10:57:42 -0400 Subject: Proxy to upstream HTTPS server *without* any keys/certs in nginx In-Reply-To: References: Message-ID: On Tuesday, September 24, 2013, Jonathan Matthews wrote: > On 24 Sep 2013 18:55, "Gary Chodos" wrote: > > > > Hello, > > > > We are researching which tools would allow us to do what is described in > the subject. > > > > After searching the archives here and in other places like > stackoverflow, there seems to be conflicting info on whether this is > possible. Perhaps it was not doable early in nginx's life but is now?
> Based on the below link (which notes the upstream and reverse proxy > modules), can we now have nginx listen on 443, and pass browser requests to > it on to an upstream HTTPS server which actually serves content, has the > certs/keys and takes care of SSL handshake etc? > > I don't believe so, no. > > > In our use case we cannot house any keys/certs on the nginx box so > must proxy everything (including SSL) to the upstream https box, as if the > end user (who makes the request from the browser) hit the upstream server > directly, and doesn't have any missing or mismatching certificate errors. > > It sounds like you just need a TCP-layer proxy. I suggest HAProxy in TCP > mode. > Bingo! This works perfectly. Thanks. Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Sep 25 17:43:36 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 25 Sep 2013 13:43:36 -0400 Subject: Transforming nginx for Windows In-Reply-To: References: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> <020a1514ec8b1fe773557616b307c2a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: 13:46 25-9-2013: nginx 1.5.6.3 Alice Based on nginx 1.5.6 (25-9-2013) with; + Bug fixes in lua-nginx-module(master 25-9-2013) and ngx_devel_kit(master 25-9-2013) by agentzh + Both debug and non-debug versions, the non-debug version is production use ready ! 
* vcredist_x86 is required, get it here (http://www.microsoft.com/en-us/download/details.aspx?id=5555) * Additional specifications are like 10:37 23-9-2013: nginx 1.5.6.1 Alice * 1.5.6.2 was skipped for public release Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,243143#msg-243143 From emailgrant at gmail.com Wed Sep 25 18:14:00 2013 From: emailgrant at gmail.com (Grant) Date: Wed, 25 Sep 2013 11:14:00 -0700 Subject: root works, alias doesn't Message-ID: Can anyone tell me why this works: root /var/www/localhost/htdocs; location / { root /var/www/localhost/htdocs/webalizer/; } And this doesn't: root /var/www/localhost/htdocs; location / { alias /webalizer/; } I get: "/webalizer/index.html" is not found (2: No such file or directory) /var/www/localhost/htdocs/webalizer/index.html does exist. - Grant From emailgrant at gmail.com Wed Sep 25 18:16:53 2013 From: emailgrant at gmail.com (Grant) Date: Wed, 25 Sep 2013 11:16:53 -0700 Subject: nginx + munin + CGI Message-ID: There is a special set of configuration parameters for apache which allow it to work with munin in CGI mode. Has anyone tried getting it to work with nginx? - Grant From reallfqq-nginx at yahoo.fr Wed Sep 25 18:25:07 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 25 Sep 2013 14:25:07 -0400 Subject: root works, alias doesn't In-Reply-To: References: Message-ID: Absolute vs Relative paths. The log file line says it all: '/webalizer/index.html' doesn't exist, which is not the path of the file you wanna serve... Take a look at the following examples showing how 'location' address is replaced or completed (depending on absolute or relative 'alias' directive) by 'alias' path: http://nginx.org/en/docs/http/ngx_http_core_module.html#alias http://stackoverflow.com/questions/10084137/nginx-aliaslocation-directive PEBCAD? ;o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From agentzh at gmail.com Wed Sep 25 19:47:35 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 25 Sep 2013 12:47:35 -0700 Subject: Transforming nginx for Windows In-Reply-To: References: <4A0E8AB4-DF66-4870-83A7-66BBED7E54BA@nginx.com> <020a1514ec8b1fe773557616b307c2a2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Sep 25, 2013 at 10:43 AM, itpp2012 wrote: > Based on nginx 1.5.6 (25-9-2013) with; > + Bug fixes in lua-nginx-module(master 25-9-2013) and ngx_devel_kit(master > 25-9-2013) by agentzh First of all, thank you for the work on Windows! :) It'll be great if you can try running ngx_lua (and other nginx modules') test suite against your Windows build on Windows. The test scaffold is written in Perl which *may* run on Windows out of the box :) Thanks! -agentzh From nginx-forum at nginx.us Wed Sep 25 21:02:03 2013 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 25 Sep 2013 17:02:03 -0400 Subject: Transforming nginx for Windows In-Reply-To: References: Message-ID: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> > First of all, thank you for the work on Windows! :) We're getting there slowly! :) I've got 2 workers working, just need to figure out why :) > It'll be great if you can try running ngx_lua (and other nginx > modules') test suite against your Windows build on Windows. The test > scaffold is written in Perl which *may* run on Windows out of the box Url of this scaffold? I can get anything running so this is going to be an interesting test. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,243149#msg-243149 From agentzh at gmail.com Wed Sep 25 21:56:50 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 25 Sep 2013 14:56:50 -0700 Subject: Transforming nginx for Windows In-Reply-To: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> References: <8dd86ef0cc3b8265cb643278f93ca08a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Sep 25, 2013 at 2:02 PM, itpp2012 wrote: > > Url of this scaffold? I can get anything running so this is going to be an > interesting test. > See the official documentation for the ngx_lua test suite: http://wiki.nginx.org/HttpLuaModule#Test_Suite Good luck! -agentzh From jeroenooms at gmail.com Thu Sep 26 00:41:44 2013 From: jeroenooms at gmail.com (Jeroen Ooms) Date: Wed, 25 Sep 2013 20:41:44 -0400 Subject: request body and client_body_buffer_size In-Reply-To: References: Message-ID: One more question regarding this: > The > proxy_no_cache $request_body_file; > should do the trick, see http://nginx.org/r/proxy_no_cache. I tried this and get a warning: nginx: [warn] "proxy_no_cache" functionality has been changed in 0.8.46, now it should be used together with "proxy_cache_bypass" Do I just need to add an additional line: proxy_cache_bypass $request_body_file; It is not clear to me how proxy_cache_bypass is different from proxy_no_cache. On Fri, Sep 13, 2013 at 8:56 PM, Jeroen Ooms wrote: > Is it correct that when $content_length > client_body_buffer_size, > then $request_body == "" ? If so this would be worth documenting at > request_body. > > I am using: > > proxy_cache_methods POST; > proxy_cache_key "$request_method$request_uri$request_body"; > > Which works for small requests, but for large requests clients got > very strange results due to $request_body being empty and hence > getting false cache hits for completely different form posts. 
> > Is there something available like $body_hash that can be used as a > caching key even for large request bodies? Or alternatively, how > would I configure nginx to not cache requests when content_length > is larger than client_body_buffer_size? From m.desantis at morganspa.com Thu Sep 26 07:20:32 2013 From: m.desantis at morganspa.com (Maurizio De Santis) Date: Thu, 26 Sep 2013 09:20:32 +0200 Subject: Declare a block so as to be shared between locations Message-ID: <5243E040.3000607@morganspa.com> I have two locations, /a and /b . Both of them share these directives expires max; add_header Cache-Control public; add_header ETag ""; break; and location /b has gzip_static on too. Is there a way to write this without writing two times the common directives?
That is, without rewriting the common directives like this: location /a { expires max; add_header Cache-Control public; add_header ETag ""; break; } location /b { gzip_static on; expires max; add_header Cache-Control public; add_header ETag ""; break; } Generally speaking, is there a way to declare a block so as to be shared between two or more locations without rewriting the common directives? Thank you -- Maurizio De Santis -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Thu Sep 26 07:23:35 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 26 Sep 2013 19:23:35 +1200 Subject: Declare a block so as to be shared between locations In-Reply-To: <5243E040.3000607@morganspa.com> References: <5243E040.3000607@morganspa.com> Message-ID: <5243E0F7.3030206@greengecko.co.nz> On 26/09/13 19:20, Maurizio De Santis wrote: > I have two locations, /a and /b . Both of them share these directives > > expires max; > add_header Cache-Control public; > add_header ETag ""; > break; > > and location /b to has gzip_static on too. > > Is there a way to write this without writing two times the common > directives? > That is, without rewriting the common directives like this: > > location /a { > expires max; > add_header Cache-Control public; > add_header ETag ""; > break; > } > > location /b { > gzip_static on; > expires max; > add_header Cache-Control public; > add_header ETag ""; > break; > } > > Generally speaking, is there a way to declare a block so as to be > shared between two or more locations without rewriting the common > directives? > > Thank you > > -- > > Maurizio De Santis > include them from a remote file? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Sep 26 08:02:19 2013 From: nginx-forum at nginx.us (mex) Date: Thu, 26 Sep 2013 04:02:19 -0400 Subject: Declare a block so as to be shared between locations In-Reply-To: <5243E040.3000607@morganspa.com> References: <5243E040.3000607@morganspa.com> Message-ID: Hi, what you are looking for is the "include" - statement, see here: http://nginx.org/en/docs/ngx_core_module.html#include regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243159,243161#msg-243161 From nginx-forum at nginx.us Thu Sep 26 10:42:44 2013 From: nginx-forum at nginx.us (revirii) Date: Thu, 26 Sep 2013 06:42:44 -0400 Subject: upstream+ip_hash: hash valid global? Message-ID: Hello, i have 2 upstreams, each with 3 backend servers, where backendA is the same backend in both upstreams. upstream one { server backendA; server backendB; server backendC; } upstream two { server backendA; server backendD; server backendE; } A user with his IP sends a request, gets passed to upstream one and is sent to backendA. Shortly after that he sends a different request and gets passed to upstream two - will he be sent to backendA as well? So the question is: is ip_hash global in nginx, i.e. a user always is sent to the same backend (if available), independent from an upstream? Or is ip_hash upstream-specific, i.e. nginx hashes per upstream? thx in advance revirii Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243162,243162#msg-243162 From mdounin at mdounin.ru Thu Sep 26 12:37:24 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 Sep 2013 16:37:24 +0400 Subject: request body and client_body_buffer_size In-Reply-To: References: Message-ID: <20130926123724.GC2271@mdounin.ru> Hello! On Wed, Sep 25, 2013 at 08:41:44PM -0400, Jeroen Ooms wrote: > One more question regarding this: > > > The > > proxy_no_cache $request_body_file; > > should do the trick, see http://nginx.org/r/proxy_no_cache. 
> > I tried this and get a warning: > nginx: [warn] "proxy_no_cache" functionality has been changed in > 0.8.46, now it should be used together with "proxy_cache_bypass" > > Do I just need to add an additional line: > proxy_cache_bypass $request_body_file; > > It is not clear to me how proxy_cache_bypass is different from proxy_no_cache. The "proxy_no_cache" directive prevents caching of a response received from an upstream (i.e., the response is not saved to the cache). In contrast, the "proxy_cache_bypass" directive prevents returning a response from the cache if it's there. The warning in question is an artifact from the 0.8.x era, when proxy_no_cache used to mean both "no cache" and "bypass". It probably should be removed. -- Maxim Dounin http://nginx.org/en/donation.html From nginx-forum at nginx.us Thu Sep 26 14:27:53 2013 From: nginx-forum at nginx.us (iwitham) Date: Thu, 26 Sep 2013 10:27:53 -0400 Subject: Nginx-1.4.1 fails when adding nginx_upload_module-2.2.0 Message-ID: I am attempting to configure nginx-1.4.1 on a SLES box for a galaxy server deployment. I am able to get it to work fine as long as I do not try to --add-module=../nginx_upload_module-2.2.0. When I add the upload_module to the configuration it fails as shown below: ../nginx_upload_module-2.2.0/ngx_http_upload_module.c ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function ?ngx_http_read_upload_client_request_body?: ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2628: error: ?ngx_http_request_body_t? has no member named ?to_write? ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2687: error: ?ngx_http_request_body_t? has no member named ?to_write? ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function ?ngx_http_do_read_upload_client_request_body?: ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2769: error: ?ngx_http_request_body_t? has no member named ?to_write? ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2785: error: ?ngx_http_request_body_t?
has no member named ?to_write? ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2877: error: ?ngx_http_request_body_t? has no member named ?to_write? make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] Error 1 make[1]: Leaving directory `/root/nginx-1.4.1' make: *** [build] Error 2 My configuration script is: ./configure --add-module=/var/tmp/ldap/nginx-auth-ldap-master --add-module=../nginx_upload_module-2.2.0 --with-pcre=../pcre-8.33 --with-http_ssl_module --conf-path=/usr/local/nginx/conf/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=galaxy --group=galaxy --with-debug Is there a known issue with this version of nginx_upload_module? Is there a work around? Thanks, Iry Witham Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243166,243166#msg-243166 From cyril.lavier at davromaniak.eu Thu Sep 26 14:52:22 2013 From: cyril.lavier at davromaniak.eu (Cyril Lavier) Date: Thu, 26 Sep 2013 16:52:22 +0200 Subject: Nginx-1.4.1 fails when adding nginx_upload_module-2.2.0 In-Reply-To: References: Message-ID: <52444A26.6080201@davromaniak.eu> On 09/26/2013 04:27 PM, iwitham wrote: > I am attempting to configure ngixn-1.4.1 on a SLES box for a galaxy server > deployment. I am able to get it to work fine as long as I do not try to > --add-module=../nginx_upload_module-2.2.0. When I add the upload_module to > the configuration it fails as shown below: > > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function > ?ngx_http_read_upload_client_request_body?: > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2628: error: > ?ngx_http_request_body_t? has no member named ?to_write? > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2687: error: > ?ngx_http_request_body_t? has no member named ?to_write? 
> ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function > ?ngx_http_do_read_upload_client_request_body?: > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2769: error: > ?ngx_http_request_body_t? has no member named ?to_write? > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2785: error: > ?ngx_http_request_body_t? has no member named ?to_write? > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2877: error: > ?ngx_http_request_body_t? has no member named ?to_write? > make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] > Error 1 > make[1]: Leaving directory `/root/nginx-1.4.1' > make: *** [build] Error 2 > > My configuration script is: > > ./configure --add-module=/var/tmp/ldap/nginx-auth-ldap-master > --add-module=../nginx_upload_module-2.2.0 --with-pcre=../pcre-8.33 > --with-http_ssl_module --conf-path=/usr/local/nginx/conf/nginx.conf > --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --user=galaxy --group=galaxy > --with-debug > > Is there a known issue with this version of nginx_upload_module? Is there a > work around? > > Thanks, > > Iry Witham > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243166,243166#msg-243166 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hello Iwitham. For nginx starting 1.3.9, it's a known problem. I produced a patch (a kludge then corrected by another person), and the github issue is here : https://github.com/vkholodkov/nginx-upload-module/issues/41 The modified patch seems to work with nginx 1.5.1. Thanks. 
-- Cyril "Davromaniak" Lavier KeyID 59E9A881 http://www.davromaniak.eu From nginx-forum at nginx.us Thu Sep 26 17:10:37 2013 From: nginx-forum at nginx.us (iwitham) Date: Thu, 26 Sep 2013 13:10:37 -0400 Subject: Nginx-1.4.1 fails when adding nginx_upload_module-2.2.0 In-Reply-To: <52444A26.6080201@davromaniak.eu> References: <52444A26.6080201@davromaniak.eu> Message-ID: <21a083dfe40edc23f74e91e58f08cf8d.NginxMailingListEnglish@forum.nginx.org> Hello Davromaniak Thanks for pointing me to this patch. I have made the needed changes and was able to compile it just fine. Iry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243166,243169#msg-243169 From nginx-forum at nginx.us Thu Sep 26 18:09:46 2013 From: nginx-forum at nginx.us (tempspace) Date: Thu, 26 Sep 2013 14:09:46 -0400 Subject: Is there an nginx queue that isn't logged via $response_time Message-ID: We have a setup that looks like this: nginx->haproxy->app servers We are terminating SSL with nginx and it sits in front of everything. During our peak load times, we are experiencing about a 2x performance hit. Requests that would normally take 400 ms are taking 800ms. It's taking longer for the entire Internet. The problem is, I have absolutely no sign of any slowdowns in my logs and graphs. New Relic shows all the app servers are responding correctly with no change in speed. Nginx and haproxy show nothing in their logs about requests slowing down, but we are slowing down at end users. Despite nginx showing that a particular request I tracked is taking 17ms ($response_time) through the entire stack, it took 1.5 seconds to curl it during peak load last week. So, that leaves me with two options: 1) Network issues - I have more than enough pipe left according to graphs from the router. I'm only using 400 Mbps out of the 1 Gbps port and there are no errors in ifconfig or on the switch or routers. However, SoftLayer manages this gear, so I can't verify this personally. 
2) nginx is holding up the request and either not logging it or I'm not logging the right thing. Is it possible that requests are being queued up because workers are busier and they're not getting acted on as quickly? If this is in fact happening, what can I log in nginx other than $response_time, since that is showing no slowdown at all. And, if this is possible that requests are actually taking longer than $response_time is indicating, how do I go about tweaking the config to speed things up? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243170,243170#msg-243170 From nginx-forum at nginx.us Thu Sep 26 18:14:47 2013 From: nginx-forum at nginx.us (tempspace) Date: Thu, 26 Sep 2013 14:14:47 -0400 Subject: Is there an nginx queue that isn't logged via $response_time In-Reply-To: References: Message-ID: <5a5a819b8fdd016209c8247edca622b9.NginxMailingListEnglish@forum.nginx.org> Sorry, I meant $request_time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243170,243171#msg-243171 From nginx-forum at nginx.us Thu Sep 26 19:41:40 2013 From: nginx-forum at nginx.us (tempspace) Date: Thu, 26 Sep 2013 15:41:40 -0400 Subject: Is there an nginx queue that isn't logged via $response_time In-Reply-To: References: Message-ID: In case it helps, here at my sysctl and applicable nginx config values Sysctl net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_synack_retries = 2 net.ipv4.ip_local_port_range = 1024 65535 net.ipv4.tcp_fin_timeout = 3 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 16777216 16777216 16777216 net.ipv4.tcp_wmem = 16777216 16777216 16777216 net.ipv4.tcp_max_tw_buckets = 16777216 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_syn_backlog = 262144 net.core.somaxconn = 262144 net.core.netdev_max_backlog = 15000 net.core.netdev_budget = 8196 net.ipv4.ip_local_port_range = 1024 65535 net.ipv4.ip_nonlocal_bind = 1 Applicable nginx configuration user www-data; worker_processes 20; worker_rlimit_nofile 500000; error_log 
/var/log/nginx/error.log; pid /var/run/nginx.pid; events { use epoll; multi_accept off; accept_mutex off; worker_connections 65536; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243170,243173#msg-243173 From phil at pricom.com.au Thu Sep 26 20:35:44 2013 From: phil at pricom.com.au (Philip Rhoades) Date: Fri, 27 Sep 2013 06:35:44 +1000 Subject: Fedora19 + Nginx - Passenger module not compiled in? Message-ID: <6235146fa07e9495b2a20a1589afbe3a@localhost> People, I am trying out nginx-1.4.2-1.fc19.x86_64 and have installed: mod_passenger-3.0.21-4.fc19.x86_64 rubygem-passenger-3.0.21-4.fc19.x86_64 rubygem-passenger-native-3.0.21-4.fc19.x86_64 rubygem-passenger-native-libs-3.0.21-4.fc19.x86_64 rubygem-passenger-devel-3.0.21-4.fc19.x86_64 but when trying to start nginx I get: nginx: [emerg] unknown directive "passenger_enabled" in /etc/nginx/nginx.conf:54 - which is apparently indicative of the Passenger module not being compiled into Nginx - is there an RPM somewhere that does have it compiled in? - I couldn't find it mentioned anywhere . . Thanks, Phil. -- Philip Rhoades GPO Box 3411 Sydney NSW 2001 Australia E-mail: phil at pricom.com.au From etyrer at york.cuny.edu Thu Sep 26 22:34:46 2013 From: etyrer at york.cuny.edu (Eric Tyrer) Date: Thu, 26 Sep 2013 22:34:46 +0000 Subject: Stumped with issue of Nginx passing requests to php-fpm while using SSL Message-ID: Problem i have is that after attempting to login to wordpress over SSL php is not being processed/executed. I've got a Wordpress 3.5 multi-site using subdirectories. 
nginx version: nginx/1.4.2 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' PHP 5.3.27 (fpm-fcgi) (built: Jul 12 2013 10:10:59) nginx handles requests fine through port 80 images, css, etc are processed normally. however, logging into to a blog results in no php being processed like ?. > /** > * Front to the WordPress application. This file doesn't do anything, but loads > * wp-blog-header.php which does and tells WordPress to load the theme. > * > * @package WordPress > */ > > /** > * Tells WordPress to load the WordPress theme and output it. > * > * @var bool > */ > define('WP_USE_THEMES', true); > > /** Loads the WordPress Environment and Template */ > require('./wp-blog-header.php'); my host.conf is here http://pastie.org/private/lmr05yxem5psyemzbwukig and my nginx conf is here http://pastie.org/private/9gwjgvslwspg17frbus3g i am at my wits end trying to figure it out by myself.. 
It would be great if another pair of eyes could look this over --------- Eric S. Tyrer II Associate Director ? Web Systems York College - The City University of New York http://www.york.cuny.edu 94-20 Guy R. Brewer Blvd. Academic Core - STE 1H14 Jamaica, NY 11451 http://www.york.cuny.edu/etyrer etyrer at york.cuny.edu (P) 718-262-2466 (C) 347-393-6507 I have no special talent. I am only passionately curious. - Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2330 bytes Desc: not available URL: From steve at greengecko.co.nz Thu Sep 26 22:45:34 2013 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 27 Sep 2013 10:45:34 +1200 Subject: Stumped with issue of Nginx passing requests to php-fpm while using SSL In-Reply-To: References: Message-ID: <1380235534.11141.224.camel@steve-new> OK, the problem is that you're listening on *http* on port 443. You need to use listen 443 ssl [default]; for ssl. As an aside, you can combine the http: and https: configs, using the two listen statements, and dropping the 'ssl on' in a single block. Makes admin simpler... havn't checked but the two look pretty similar. hth, Steve On Thu, 2013-09-26 at 22:34 +0000, Eric Tyrer wrote: > Problem i have is that after attempting to login to wordpress over SSL > php is not being processed/executed. > > > I've got a Wordpress 3.5 multi-site using subdirectories. 
> > > nginx version: nginx/1.4.2 > > > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > > > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_stub_status_module --with-mail --with-mail_ssl_module > --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > > PHP 5.3.27 (fpm-fcgi) (built: Jul 12 2013 10:10:59) > > > nginx handles requests fine through port 80 images, css, etc are > processed normally. however, logging into to a blog results in no php > being processed like ?. > > > > > /** > > * Front to the WordPress application. This file doesn't do anything, but loads > > * wp-blog-header.php which does and tells WordPress to load the theme. > > * > > * @package WordPress > > */ > > > > /** > > * Tells WordPress to load the WordPress theme and output it. 
> > * > > * @var bool > > */ > > define('WP_USE_THEMES', true); > > > > /** Loads the WordPress Environment and Template */ > > require('./wp-blog-header.php'); > > > my host.conf is here http://pastie.org/private/lmr05yxem5psyemzbwukig > > > and my nginx conf is > here http://pastie.org/private/9gwjgvslwspg17frbus3g > > > > > i am at my wits end trying to figure it out by myself.. It would be > great if another pair of eyes could look this over > > --------- > Eric S. Tyrer II > Associate Director ? Web Systems > > York College - The City University of New York > http://www.york.cuny.edu > 94-20 Guy R. Brewer Blvd. > Academic Core - STE 1H14 > Jamaica, NY 11451 > > http://www.york.cuny.edu/etyrer > etyrer at york.cuny.edu > > (P) 718-262-2466 > (C) 347-393-6507 > > I have no special talent. I am only passionately curious. - Albert > Einstein > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Thu Sep 26 23:59:04 2013 From: nginx-forum at nginx.us (bryndole) Date: Thu, 26 Sep 2013 19:59:04 -0400 Subject: Access to live limit_conn values? In-Reply-To: <88ab5623db244ef8b1d01c61d26ebd97.NginxMailingListEnglish@forum.nginx.org> References: <88ab5623db244ef8b1d01c61d26ebd97.NginxMailingListEnglish@forum.nginx.org> Message-ID: <106cb398edeb39ce1964dc6afe90aa81.NginxMailingListEnglish@forum.nginx.org> Our own investigation indicates that, no, there is not way to directly access the values in the limit_conn zones from "user space." We created a new status page that reports all the existing zones and the current connection counts for each, as well as what the current limits are. It does this by walking the config data structure and building a list of the zones and then creating a simple report page. 
-Bryn Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242973,243178#msg-243178 From savages at mozapps.com Fri Sep 27 11:22:25 2013 From: savages at mozapps.com (Shaun Savage) Date: Fri, 27 Sep 2013 04:22:25 -0700 Subject: configuration of 1000's entries In-Reply-To: References: Message-ID: <52456A71.8090701@mozapps.com> I have many "virtual" paths on one nginx server. What I mean by this is there can be many top level paths, where each one has a cookie, static files, and a upstream server. The way I am doing it now is just duplicate every path. Is there a way to do this 'faster' 'less writing' 'better'? I am expecting 1000's of entries. upstream a1 { # server localhost:48000; unix:/tmp/a1 } upstream a2 { server localhost:48001; # unix:/tmp/a2 } upstream a3 { # server localhost:48002; unix:/tmp/a3 } location /a1/exe { # to upstream if have cookie if ($http_cookie !~* 'a1') { rewrite ^a1(.*)$ /login?a1=$1; } proxy_.... #setup proxy_pass http://a1; } location /a1 { # static files if ($http_cookie !~* 'a1') { rewrite ^a1(.*)$ /login?a1=$1; } alias /var/www/a1 } location /a2/exe { # to upstream if have cookie if ($http_cookie !~* 'a2') { rewrite ^a2(.*)$ /login?a2=$1; } proxy_.... #setup proxy_pass http://a2; } location /a2 { #static files if ($http_cookie !~* 'a2') { rewrite ^a2(.*)$ /login?a2=$1; } alias /var/www/a2 } ....... IDEA ************* not correct!! location / { parse url check cookie if $2 == 'exe' { proxy_pass } alias /var/www/$2 OR rewrite /var/www/$2 } From mdounin at mdounin.ru Fri Sep 27 11:36:01 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 15:36:01 +0400 Subject: Stumped with issue of Nginx passing requests to php-fpm while using SSL In-Reply-To: References: Message-ID: <20130927113601.GK2271@mdounin.ru> Hello! On Thu, Sep 26, 2013 at 10:34:46PM +0000, Eric Tyrer wrote: > Problem i have is that after attempting to login to wordpress over SSL php is not being processed/executed. 
> > I've got a Wordpress 3.5 multi-site using subdirectories. [...] > nginx handles requests fine through port 80 images, css, etc are > processed normally. however, logging into to a blog results in > no php being processed like ?. > > > my host.conf is here http://pastie.org/private/lmr05yxem5psyemzbwukig > > and my nginx conf is here http://pastie.org/private/9gwjgvslwspg17frbus3g > > > i am at my wits end trying to figure it out by myself.. It would > be great if another pair of eyes could look this over It looks like you only have *.php handling configured inside "location ~ /wp-(admin|login)" in your ssl server{} block. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Fri Sep 27 11:41:13 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 15:41:13 +0400 Subject: Stumped with issue of Nginx passing requests to php-fpm while using SSL In-Reply-To: <1380235534.11141.224.camel@steve-new> References: <1380235534.11141.224.camel@steve-new> Message-ID: <20130927114113.GL2271@mdounin.ru> Hello! On Fri, Sep 27, 2013 at 10:45:34AM +1200, Steve Holdoway wrote: > OK, the problem is that you're listening on *http* on port 443. You need > to use > > listen 443 ssl [default]; > > for ssl. The config posted uses "ssl on", which is a valid way to configure ssl on all sockets used in a server{} block, see http://nginx.org/r/ssl. While it is recommended to use "listen ... ssl" instead as it allows to combine ssl and non-ssl listens in one server{} block, it's not something required. -- Maxim Dounin http://nginx.org/en/donation.html From dmitry.shurupov at flant.ru Fri Sep 27 12:20:51 2013 From: dmitry.shurupov at flant.ru (Dmitry Shurupov) Date: Fri, 27 Sep 2013 16:20:51 +0400 Subject: [ANNOUNCE] nginx-http-rdns module Message-ID: <52457823.5060508@flant.ru> Hi guys! Our small Russian company (flant.com) is pleased to announce new module called nginx-http-rdns. 
As you can see from its name, the main feature of this module is making reverse DNS lookups to get the DNS name of a client's IP address. Features: * usual rDNS lookup (saving result in $rdns_hostname variable) [rdns on]; * "double lookup": rDNS to get the name and DNS to get the IP address back [rdns double]; * rewrite module support ("rdns" can be used inside "if" blocks); * simple access control to allow/deny connections from given DNS names [rdns_allow & rdns_deny]. We have been using this module in production for a long time, so it should be pretty stable. Source code, docs (and your forks? :-)) are at GitHub: https://github.com/flant/nginx-http-rdns The documentation, based on our README file, is here: http://wiki.nginx.org/HttpRdnsModule P.S. We have Russian documentation as well: http://flant.ru/projects/nginx-http-rdns -- Dmitry Shurupov, CSJC Flant http://flant.com/ +7 (495) 721-10-27 From nginx-forum at nginx.us Fri Sep 27 13:16:39 2013 From: nginx-forum at nginx.us (mex) Date: Fri, 27 Sep 2013 09:16:39 -0400 Subject: Overhead when enabling debug? Message-ID: i have a question regarding the --with-debug option: do i have to expect much overhead when compiling nginx with that option but having it disabled per default? i'd like to have it at hand for debugging certain issues every now and then. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243201,243201#msg-243201 From nginx-forum at nginx.us Fri Sep 27 13:42:30 2013 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Sep 2013 09:42:30 -0400 Subject: Overhead when enabling debug? In-Reply-To: References: Message-ID: <3739e8ab52a0334abaa791666bc40461.NginxMailingListEnglish@forum.nginx.org> From what I've seen here there is a small <5% overhead when compiled with debug and 5-10% while debug logging is active. I want my 5% :) so I make 2 versions.
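[Editor's note: a minimal sketch of what "keeping debug at hand" can look like in the configuration. The directive names are standard nginx (error_log, debug_connection), but the paths and the client address are placeholders, and debug_connection only produces debug-level output in a binary built with --with-debug.]

```nginx
# Day-to-day operation: keep the error log at a quiet level even though
# the binary was built with --with-debug, so the logging overhead stays low.
error_log /var/log/nginx/error.log notice;

events {
    worker_connections 1024;

    # While troubleshooting, restrict verbose debug output to a single
    # client address instead of switching the whole log to "debug".
    debug_connection 192.0.2.1;
}
```

With something like this, only connections from the listed address generate debug-level records, which confines the logging-time overhead to the session being examined.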
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243201,243202#msg-243202 From mdounin at mdounin.ru Fri Sep 27 14:19:26 2013 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Sep 2013 18:19:26 +0400 Subject: Overhead when enabling debug? In-Reply-To: References: Message-ID: <20130927141926.GM2271@mdounin.ru> Hello! On Fri, Sep 27, 2013 at 09:16:39AM -0400, mex wrote: > i have a question regarding the --with-debug - option; do i have to expect > much overhead, > when compiling nginx with that option, but have it disabled per default? > > i'd like to have it at hand for debugging certain issues every then and > there. While there is some overhead, it's actually nontrivial to show it. E.g. here are http_load results of a "return 204" test on my laptop (5 tests for each --with-debug and without debug, 10 seconds each test), analyzed by ministat(1):

x bench.debug
+ bench.nodebug
+------------------------------------------------------------------------------+
|x xx + + + + x x +|
| |__________|___M___A____M_______A____|_______________| |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   5       7159.63       7754.85       7454.49      7503.202     240.43415
+   5       7503.91       8148.11       7566.35      7671.328     269.26662
No difference proven at 95.0% confidence

That is, the tests done aren't enough to show the difference, if any. Overall, I would recommend not bothering: just compile --with-debug if there are chances you'll need it.
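[Editor's note: the "no difference proven" verdict can be sanity-checked from the summary statistics in the ministat output with a plain two-sample t-test. A pure-Python sketch follows; 2.306 is the two-sided 95% Student's t critical value for 8 degrees of freedom, and with equal sample sizes this standard-error formula matches the pooled-variance one ministat uses.]

```python
import math

def t_statistic(n1, mean1, sd1, n2, mean2, sd2):
    # Two-sample t statistic from summary statistics.
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean2 - mean1) / se

# Summary statistics as reported by ministat(1) in the post above.
t = t_statistic(5, 7503.202, 240.43415, 5, 7671.328, 269.26662)
print(round(t, 3))   # roughly 1.04
print(t < 2.306)     # True: no difference proven at 95% confidence
```

Since |t| is well below the critical value, the benchmark indeed cannot distinguish the two builds at this sample size.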
-- Maxim Dounin http://nginx.org/en/donation.html From thijskoerselman at gmail.com Fri Sep 27 15:09:24 2013 From: thijskoerselman at gmail.com (Thijs Koerselman) Date: Fri, 27 Sep 2013 17:09:24 +0200 Subject: How to make proxy revalidate resource at origin Message-ID: Hi, I posted this on stackoverflow already but I thought I might have more luck here :) http://stackoverflow.com/questions/19023777/how-to-make-proxy-revalidate-resource-from-origin In short, I can't find a way to make nginx revalidate a cached resource with the origin. Whenever nginx is in front of my server, the origine never returns a 304. Nginx just gets a new copy whenever it expires, and I can't trigger a revalidate either. I've tried different approaches with cache-control headers but nginx doesn't seem to respect them. Cheers, Thijs -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at nginx.com Fri Sep 27 15:14:40 2013 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 27 Sep 2013 19:14:40 +0400 Subject: How to make proxy revalidate resource at origin In-Reply-To: References: Message-ID: On Sep 27, 2013, at 7:09 PM, Thijs Koerselman wrote: > Hi, > > I posted this on stackoverflow already but I thought I might have more luck here :) > > http://stackoverflow.com/questions/19023777/how-to-make-proxy-revalidate-resource-from-origin > > In short, I can't find a way to make nginx revalidate a cached resource with the origin. Whenever nginx is in front of my server, the origine never returns a 304. Nginx just gets a new copy whenever it expires, and I can't trigger a revalidate either. I've tried different approaches with cache-control headers but nginx doesn't seem to respect them. 
You may want to check this :) http://trac.nginx.org/nginx/roadmap > Cheers, > Thijs > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From thijskoerselman at gmail.com Fri Sep 27 15:28:15 2013 From: thijskoerselman at gmail.com (Thijs Koerselman) Date: Fri, 27 Sep 2013 17:28:15 +0200 Subject: How to make proxy revalidate resource at origin In-Reply-To: References: Message-ID: Aha, I'm hoping that will include If-None-Match, since I'm using etags instead of last modified dates. Is there maybe a 3rd party module that supports this already? On Fri, Sep 27, 2013 at 5:14 PM, Andrew Alexeev wrote: > On Sep 27, 2013, at 7:09 PM, Thijs Koerselman > wrote: > > > Hi, > > > > I posted this on stackoverflow already but I thought I might have more > luck here :) > > > > > http://stackoverflow.com/questions/19023777/how-to-make-proxy-revalidate-resource-from-origin > > > > In short, I can't find a way to make nginx revalidate a cached resource > with the origin. Whenever nginx is in front of my server, the origine never > returns a 304. Nginx just gets a new copy whenever it expires, and I can't > trigger a revalidate either. I've tried different approaches with > cache-control headers but nginx doesn't seem to respect them. > > You may want to check this :) > > http://trac.nginx.org/nginx/roadmap > > > Cheers, > > Thijs > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Sep 27 16:04:26 2013 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Sep 2013 12:04:26 -0400 Subject: How to make proxy revalidate resource at origin In-Reply-To: References: Message-ID: Maybe via Lua? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243204,243211#msg-243211 From nginx-forum at nginx.us Fri Sep 27 16:08:17 2013 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 27 Sep 2013 12:08:17 -0400 Subject: [ANNOUNCE] nginx-http-rdns module In-Reply-To: <52457823.5060508@flant.ru> References: <52457823.5060508@flant.ru> Message-ID: Here's a fix for a typo; ngx_http_rdns_module.c -l709 static void dns_handler(ngx_resolver_ctx_t * rctx) {; +l709 static void dns_handler(ngx_resolver_ctx_t * rctx) { Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243191,243213#msg-243213 From francis at daoine.org Fri Sep 27 17:34:53 2013 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Sep 2013 18:34:53 +0100 Subject: configuration of 1000's entries In-Reply-To: <52456A71.8090701@mozapps.com> References: <52456A71.8090701@mozapps.com> Message-ID: <20130927173453.GJ19345@craic.sysops.org> On Fri, Sep 27, 2013 at 04:22:25AM -0700, Shaun Savage wrote: Hi there, None of what follows is tested by me. So double-check before committing to anything :-) > I have many "virtual" paths on one nginx server. What I mean by this is > there can be many top level paths, where each one has a cookie, static > files, and a upstream server. The way I am doing it now is just > duplicate every path. > > Is there a way to do this 'faster' 'less writing' 'better'? I am > expecting 1000's of entries. Faster for you to write, or faster for nginx to read and process? Less writing for you, or for your "turn this input file into an nginx.conf" script? It looks like you could either auto-generate the conf file, so that it will be big; or keep it small by using a (probably, nested) "location" with regex and named captures. 
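The first option (auto-generating the big conf file from a small list of paths) can be sketched in a few lines of Python. The path names and upstream names below are invented purely for illustration; they are not from the original post:

```python
# Sketch of the "auto-generate the conf file" option: render one pair of
# location blocks per virtual path from a string template, then paste or
# "include" the result into nginx.conf. Names like "a1" and
# "upstream_a1" are hypothetical placeholders.
TEMPLATE = """\
location /{name}/exe {{
    proxy_pass http://upstream_{name};
}}
location /{name}/ {{
    root /var/www;
}}
"""

def render_conf(names):
    """Return an nginx.conf fragment covering every name in the list."""
    return "\n".join(TEMPLATE.format(name=n) for n in names)

fragment = render_conf(["a1", "a2", "a3"])
print(fragment)
```

The matching upstream{} blocks would be generated the same way, since (as noted below) they cannot be expressed with regex captures inside nginx itself.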
After you have both, you can test your workload on your hardware to see if there is a significant benefit of one over the other to you. Or you can just make one, and see if your workload on your hardware is handled well enough, and stop if it is. The examples here look like there is a regular pattern to them. Assuming that holds for all, then a simple template / macro system that does a string replacement should work -- you keep your "real config" as a list of a1, a2, etc; then run the script to generate the fragment of nginx.conf that can be pasted in or "include"d. Or you could try something like

==
location / {
    location ~ ^/(?<bit>[^/]*)/exe {
        # your "exe" stuff goes here, with $bit = a1 or a2
    }
    location ~ ^/(?<bit>[^/]*) {
        # your "static" stuff goes here, with $bit = a1 or a2
    }
}
==

but even with that, you're likely to want to auto-generate the "upstream" sections externally to nginx anyway. (You'll need other top-level location{} blocks for anything that should not match the pattern you have shown, such as /login.) All the "alias" lines you've shown look equivalent to a single "root /var/www" at server-level -- that might simplify things, depending on what else you plan to use the server for. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Fri Sep 27 19:31:46 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 27 Sep 2013 12:31:46 -0700 Subject: Overhead when enabling debug? In-Reply-To: References: Message-ID: Hello! On Fri, Sep 27, 2013 at 6:16 AM, mex wrote: > i have a question regarding the --with-debug option; do i have to expect > much overhead, > when compiling nginx with that option, but have it disabled per default? > We avoid enabling debug logs in production because when we really need to debug online issues, debug logs are just too expensive to emit and analyze (consider the box is currently experiencing an L7 attack).
Also, --with-debug compiles in extra code paths which *could* have bugs and consume some extra CPU cycles even when you're not further enabling it in nginx.conf. Also, you'll never, ever have enough debug logs to answer those really hard questions in production. We only enable it for everyday Nginx-related development. For online troubleshooting, we're relying on systemtap to do low-overhead dynamic tracing on various software stack levels, as demonstrated by my Nginx Systemtap Toolkit: https://github.com/agentzh/nginx-systemtap-toolkit and also all those samples in my stap++ project: https://github.com/agentzh/stapxx#samples Hope these help. Best regards, -agentzh From nginx-forum at nginx.us Sat Sep 28 01:43:23 2013 From: nginx-forum at nginx.us (aschlosberg) Date: Fri, 27 Sep 2013 21:43:23 -0400 Subject: Only recompile particular module when developing Message-ID: Hello I'm currently developing a new module and it's frustrating having to recompile every module each time. How can I skip all modules but my new one and then link the pre-existing object files? Does anyone have any other tips to streamline the module development / testing cycle? Thanks Arran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243222,243222#msg-243222 From nginx-forum at nginx.us Sat Sep 28 01:48:52 2013 From: nginx-forum at nginx.us (aschlosberg) Date: Fri, 27 Sep 2013 21:48:52 -0400 Subject: Only recompile particular module when developing In-Reply-To: References: Message-ID: <29f21e55b4a223352f2bbf744928abc7.NginxMailingListEnglish@forum.nginx.org> A little bit more experimenting and I discovered that skipping the configuration step and just running make && make install again will achieve what I wanted.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243222,243223#msg-243223 From nginx-forum at nginx.us Sun Sep 29 05:19:07 2013 From: nginx-forum at nginx.us (neoascetic) Date: Sun, 29 Sep 2013 01:19:07 -0400 Subject: if-none-match with proxy_cache : properly set headers In-Reply-To: <20130530112144.GR72282@mdounin.ru> References: <20130530112144.GR72282@mdounin.ru> Message-ID: <30d38276edaef9b51e6aa6523e5a3c6e.NginxMailingListEnglish@forum.nginx.org> Default approach doesn't work for me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239689,243238#msg-243238 From nginx-forum at nginx.us Sun Sep 29 05:47:37 2013 From: nginx-forum at nginx.us (neoascetic) Date: Sun, 29 Sep 2013 01:47:37 -0400 Subject: if-none-match with proxy_cache : properly set headers In-Reply-To: <64ea355f819de5e1aac4752687f9a5e4.NginxMailingListEnglish@forum.nginx.org> References: <64ea355f819de5e1aac4752687f9a5e4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2930bfa57fb185d78a7e9d9fae1eacd8.NginxMailingListEnglish@forum.nginx.org> It is good to have `$request_method` in the cache key since the HEAD and GET methods may share the same hash for the content they point to Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239689,243239#msg-243239 From mat999 at gmail.com Sun Sep 29 11:59:22 2013 From: mat999 at gmail.com (SplitIce) Date: Sun, 29 Sep 2013 21:29:22 +0930 Subject: Overhead when enabling debug? In-Reply-To: References: Message-ID: Thank you agentzh, that looks amazing. I will be including that in the next server software push. And also Maxim, thank you for taking the time to prepare those figures. I am going to do my own testing and, presuming that holds with our usecase / modules, I will be deploying --with-debug myself. It's definitely worth it for <5% any day. Regards, Mathew On Sat, Sep 28, 2013 at 5:01 AM, Yichun Zhang (agentzh) wrote: > Hello!
> > On Fri, Sep 27, 2013 at 6:16 AM, mex wrote: > > i have a question regarding the --with-debug - option; do i have to > expect > > much overhead, > > when compiling nginx with that option, but have it disabled per default? > > > > We avoid enabling debug logs in production because when we really need > to debug online issues, debug logs are just too expensive to emit and > analyze (consider the box is currently experiencing an L7 attack). > > Also, --with-debug compiles in extra code paths which *could* have > bugs and consume some extra CPU cycles even when you're not further > enabling it in nginx.conf. > > Also, you'll never never have enough debug logs for answer those > really hard questions in production. > > We only enable it for every day Nginx related development. For online > trouble shooting, we're relying on systemtap to do low overhead > dynamic tracing on various software stack levels, as demonstrated by > my Nginx Systemtap Toolkit: > > https://github.com/agentzh/nginx-systemtap-toolkit > > and also all those samples in my stap++ project: > > https://github.com/agentzh/stapxx#samples > > Hope these help. > > Best regards, > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emailgrant at gmail.com Sun Sep 29 17:33:31 2013 From: emailgrant at gmail.com (Grant) Date: Sun, 29 Sep 2013 10:33:31 -0700 Subject: root works, alias doesn't In-Reply-To: References: Message-ID: > Absolute vs Relative paths. > The log file line says it all: '/webalizer/index.html' doesn't exist, which > is not the path of the file you wanna serve... 
> Take a look at the following examples showing how 'location' address is > replaced or completed (depending on absolute or relative 'alias' directive) > by 'alias' path: > http://nginx.org/en/docs/http/ngx_http_core_module.html#alias It works if I specify the full path for the alias. What is the difference between alias and root? I have root specified outside of the server block and I thought I could use alias to avoid specifying the full path again. > http://stackoverflow.com/questions/10084137/nginx-aliaslocation-directive I tried both of the following with the same result:

location / {
    alias webalizer/;
}

location ~ ^/$ {
    alias webalizer/$1;
}

- Grant From reallfqq-nginx at yahoo.fr Sun Sep 29 19:20:35 2013 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 29 Sep 2013 15:20:35 -0400 Subject: root works, alias doesn't In-Reply-To: References: Message-ID: Hello, On Sun, Sep 29, 2013 at 1:33 PM, Grant wrote: > > It works if I specify the full path for the alias. What is the > difference between alias and root? I have root specified outside of > the server block and I thought I could use alias to avoid specifying > the full path again. > http://nginx.org/en/docs/http/ngx_http_core_module.html#alias http://nginx.org/en/docs/http/ngx_http_core_module.html#root The docs say that the requested file path is constructed by concatenating root + URI. That's for root. The docs also say that alias replaces the content directory (so it must be absolutely defined through alias). By default, the last part of the URI (after the last slash, so the file name) is searched for in the directory specified by alias. alias doesn't construct itself based on root, it's totally independent, so by using that, you'll need to specify the directory absolutely, which is precisely what you wish to avoid. > I tried both of the following with the same result: > > location / { > alias webalizer/; > } > > location ~ ^/$ { > alias webalizer/$1; > } For what you wish to do, you might try the following:

set $rootDir /var/www/localhost/htdocs;
root $rootDir/;

location / {
    alias $rootDir/webalizer/;
}

alias is meant for exceptional overload of root in a location block, so I guess its use here is a good idea. However, there seems to be no environmental propagation of some $root variable (which may be wanted by developers to avoid confusion and unwanted concatenation of values in the variables tree). $document_root and $realpath_root must be computed last, based on the value of the 'root' directive (or its 'alias' overload), so they can't be used indeed. I'd be glad to know the real reasons of the developers behind the absence of environmental propagation of some $root variable. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Sep 29 20:40:05 2013 From: nginx-forum at nginx.us (tempspace) Date: Sun, 29 Sep 2013 16:40:05 -0400 Subject: nginx struggling to accept connections during peak load Message-ID: <8977154af04bd1dccd22cde96e3a34d2.NginxMailingListEnglish@forum.nginx.org> Hello, I had posted to the mailing list earlier this week, but I managed to gather some new information that points directly to nginx (almost certainly my configuration), so I thought I'd post something more concise. I am running edge boxes which use nginx to terminate SSL and pass to haproxy on the same server. During our peak load time, we are experiencing intermittent slow connection issues which drive up our response time graphs from external sources. Every log within our infrastructure shows no problems, including the edge nginx that we're having issues with. Today, I was able to set up some boxes from different providers and run some curl tests in a loop. I set up a bash script that made a curl request to our edge nginx server for a specific API call.
In another bash script, I made a curl request for the same API call, but bypassing nginx and going directly to haproxy that is located on the same exact box. By doing this, the curls to the nginx server showed intermittent big delays in the connection phase before nginx picks up the phone. The haproxy logs showed absolutely no issues at all in connecting. Because haproxy is on the same server, I believe that rules out anything related to a networking issue, both physical and kernel related. My SSL connections usually look like this from a cURL:

    time_namelookup:     0.001
    time_connect:        0.035
    time_appconnect:     0.109
    time_pretransfer:    0.109
    time_redirect:       0.000
    time_starttransfer:  0.150
    ----------
    time_total:          0.150

During my peak load, they intermittently (every 3-5 seconds) look like this (though most of the time, 3 seconds):

    time_namelookup:     0.001
    time_connect:        9.033
    time_appconnect:     9.109
    time_pretransfer:    9.109
    time_redirect:       0.000
    time_starttransfer:  9.148
    ----------
    time_total:          9.148

So, here is my nginx config. I'm running nginx 1.4.1. The system itself doesn't go beyond 30% CPU combined and all other metrics look good as well. What can I do better (I'm sure lots)?
user www-data;
worker_processes 11; # 12 cores, 24 with HT
worker_rlimit_nofile 500000;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    use epoll;
    multi_accept off;
    accept_mutex off;
    worker_connections 65536;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    proxy_buffering off;

    log_format access '$http_x_forwarded_for - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent" "$host" "$request_time" "$upstream_response_time"';

    upstream apiv2-ssl { server 127.0.0.1:xxxxxx max_fails=3 fail_timeout=15s; }
    upstream api { server 127.0.0.1:xxxxxx max_fails=3 fail_timeout=15s; }
    upstream secure { server 127.0.0.1:xxxxxx max_fails=3 fail_timeout=15s; }
    upstream facebook { server 127.0.0.1:xxxxx max_fails=3 fail_timeout=15s; }
    upstream testing { server 127.0.0.1:xxxxx max_fails=3 fail_timeout=15s; }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_apiv2.log access;
        error_log /var/log/nginx/error_apiv2.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxxx.key;
        ssl_session_cache shared:SSLv2:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://apiv2-ssl;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_apiv3.log access;
        error_log /var/log/nginx/error_apiv3.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxx.key;
        ssl_session_cache shared:SSLv3:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            set $msecstart "${msec}000";
            if ($msecstart ~ "^(.*)\.(.*)") { set $msecout "t=$1$2"; }
            proxy_set_header X-Request-Start $msecout;
        }
    }

    server {
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_apiv3.log access;
        error_log /var/log/nginx/error_apiv3.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxx.key;
        ssl_session_cache shared:SSLv3:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://testing;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_secure.log access;
        error_log /var/log/nginx/error_secure.log;
        gzip on;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxxx.key;
        ssl_session_cache shared:SSLsecure:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://secure;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_facebook.log access;
        error_log /var/log/nginx/error_facebook.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/xxx.key;
        ssl_session_cache shared:SSLfacebook:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://facebook;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access_api.log access;
        error_log /var/log/nginx/error_api.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxx.key;
        ssl_session_cache shared:SSLapi:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }

    server {
        listen x.x.x.x:443;
        listen x.x.x.x:443;
        ssl on;
        keepalive_timeout 5 5;
        access_log /var/log/nginx/access.log access;
        error_log /var/log/nginx/error.log;
        ssl_certificate /etc/nginx/certs/xxx.crt;
        ssl_certificate_key /etc/nginx/certs/xxx.key;
        ssl_session_cache shared:SSLv3:500m;
        ssl_ciphers ALL:!kEDH;
        location / {
            proxy_pass http://facebook;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243245,243245#msg-243245 From nginx-forum at nginx.us Sun Sep 29 22:50:17 2013 From: nginx-forum at nginx.us (laltin) Date: Sun, 29 Sep 2013 18:50:17 -0400 Subject: High response time at high concurrent connections Message-ID: I have a news site which is run by 4 tornado instances and nginx as reverse proxy in front of them. Pages are rendered and cached in memcached so generally the response time is less than 3 ms according to tornado logs.
[I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 3.41ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 1.96ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.48ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 4.09ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.43ms [I 130918 18:35:37 web:1462] 200 GET / (***.***.***.**) 2.49ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.25ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.39ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.93ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.08ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.72ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.02ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.70ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.74ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.85ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.60ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 1.83ms [I 130918 18:35:38 web:1462] 200 GET / (***.***.***.**) 2.65ms When I test this site with ab at concurrency level 1000 I get response times around 0.8 seconds. 
Here is the benchmark result:

Document Length:        12036 bytes
Concurrency Level:      1000
Time taken for tests:   7.974 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    10000
Total transferred:      122339941 bytes
HTML transferred:       120549941 bytes
Requests per second:    1254.07 [#/sec] (mean)
Time per request:       797.407 [ms] (mean)
Time per request:       0.797 [ms] (mean, across all concurrent requests)
Transfer rate:          14982.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    7   20.8      0      86
Processing:    57  508  473.9    315    7014
Waiting:       57  508  473.9    315    7014
Total:        143  515  471.5    321    7014

Percentage of the requests served within a certain time (ms)
  50%    321
  66%    371
  75%    455
  80%    497
  90%   1306
  95%   1354
  98%   1405
  99%   3009
 100%   7014 (longest request)

I can handle ~1200 requests/seconds with 1000 concurrent connections and when I do the same benchmark with 100 concurrent connections I can again handle around 1200 requests/second but response time drops to ~80 ms. When it comes to real life with 1000 concurrent connections users will face 0.8 seconds response time which I think is a bad value. My question is why response times increase when concurrency level is increased? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243246#msg-243246 From agentzh at gmail.com Mon Sep 30 02:25:13 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sun, 29 Sep 2013 19:25:13 -0700 Subject: High response time at high concurrent connections In-Reply-To: References: Message-ID: Hello! On Sun, Sep 29, 2013 at 3:50 PM, laltin wrote: > > I can handle ~1200 requests/seconds with 1000 concurrent connections and > when I do the same benchmark with 100 concurrent connections I can again > handle around 1200 requests/second but response time drops to ~80 ms. > What you're seeing is a very common phenomenon.
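The numbers in the ab output are tied together by Little's law (mean concurrency = throughput x mean latency), so once throughput is saturated, latency is pinned by the concurrency level. A quick check against the figures quoted above:

```python
# Little's law: L = lambda * W, i.e. in-flight requests = throughput *
# mean latency. All numbers below come from the ab output in the post.
concurrency = 1000          # ab -c 1000
throughput = 1254.07        # "Requests per second" (mean)

mean_latency = concurrency / throughput          # seconds per request
print(f"{mean_latency * 1000:.1f} ms")           # close to the reported
                                                 # "Time per request: 797.407 [ms]"

# At -c 100 with the same ~1200 req/s ceiling, the same formula predicts
# roughly 80 ms, matching the second measurement in the post.
latency_at_100 = 100 / throughput
print(f"{latency_at_100 * 1000:.1f} ms")
```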
Tools like ab always try to load the target server to its extreme throughput, i.e., the 1200 req/sec you're seeing is already the throughput limit for that service on your server. Because you're already at the throughput limit, you have to sacrifice response latency for a higher concurrency level; otherwise you'd just have much higher throughput, which is impossible (remember, you're already at the limit? ;)). Regards, -agentzh From dmitry.shurupov at flant.ru Mon Sep 30 06:17:18 2013 From: dmitry.shurupov at flant.ru (Dmitry Shurupov) Date: Mon, 30 Sep 2013 10:17:18 +0400 Subject: [ANNOUNCE] nginx-http-rdns module In-Reply-To: References: <52457823.5060508@flant.ru> Message-ID: <5249176E.6030503@flant.ru> Thank you, it's merged: https://github.com/flant/nginx-http-rdns/commit/8c2ef68b1767590aab131cd5c3a2fd1955f0b3c6 27.09.2013 20:08, itpp2012 wrote: > Here's a fix for a typo; > > ngx_http_rdns_module.c > -l709 > static void dns_handler(ngx_resolver_ctx_t * rctx) {; > +l709 > static void dns_handler(ngx_resolver_ctx_t * rctx) { > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243191,243213#msg-243213 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Dmitry Shurupov http://flant.ru/ +7 (495) 721-10-27, ext. 442 +7 (926) 120-77-71 -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentzh at gmail.com Mon Sep 30 07:12:48 2013 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 30 Sep 2013 00:12:48 -0700 Subject: [ANN] ngx_openresty mainline version 1.4.2.9 released Message-ID: Hello folks! I am happy to announce that the new mainline version of ngx_openresty, 1.4.2.9, is now released: http://openresty.org/#Download Special thanks go to all the contributors for making this happen!
Below is the complete change log for this release, as compared to the last (stable) release, 1.4.2.8: * bundled the new LuaRestyWebSocketLibrary 0.01. * this Lua library implements both a nonblocking WebSocket server and a nonblocking WebSocket client based on LuaNginxModule's cosocket API. thanks Hendrik Schumacher. * bundled the new LuaRestyLockLibrary 0.01. * this Lua library implements a simple nonblocking mutex lock API based on LuaNginxModule's shared memory dictionaries. Mostly useful for eliminating "dog-pile effects" and etc. thanks Sri Rao for the suggestion. * upgraded LuaRestyRedisLibrary to 0.16. * feature: added new redis commands bitcount, bitop, client, dump, hincrbyfloat, incrbyfloat, migrate, pexpire, pexpireat, psetex, pubsub, pttl, restore, and time. thanks alex-yam for the patch. * optimize: eliminated the table.insert() calls because they are slower than "tb[#tb + 1] = val". thanks alex-yam for the patch. this gives 1.9% speed up for trivial set and get examples when LuaJIT 2.0.2 is used and 4.9% speed up when LuaJIT's v2.1 git branch is used. * refactor: avoided using Lua 5.1's module() function for defining our Lua modules because it has bad side effects. * docs: do not use 0 (i.e., unlimited) max idle time in the set_keepalive() call in the code sample. * docs: added code samples for the redis commands "hmget" and "hmset". this has already become a FAQ. * docs: added the Redis Authentication section because it is already an FAQ. * docs: documented the "options" table argument for the connect() method. * docs: added a missing "local" keyword to the code sample. thanks Wendal Chen for the patch. * upgraded LuaRestyMemcachedLibrary to 0.12. * optimize: no longer use Lua tables and table.concat() to construct simple Memcached query strings. this gives 6.75% overall speed up for trivial "set" and "get" examples when LuaJIT 2.0.2 is used. * optimize: eliminated table.insert() because it is slower than "tb[#tb + 1] = val". 
* refactor: avoided using Lua's module() function for defining our Lua modules because it has bad side effects. * docs: use limited (10 sec) max idle timeout for in-pool connections in the code sample. * upgraded LuaNginxModule to 0.9.0. * feature: added support for raw downstream cosocket via the ngx.req.socket(true) API, upon which http upgrading protocols like WebSocket can be implemented with pure Lua (see LuaRestyWebSocketLibrary). This API can also be used to bypass the Nginx output filters and emit raw HTTP response headers and/or HTTP response bodies. thanks Hendrik Schumacher and aviramc. * bugfix: invalid memory reads might happen when ngx.flush(true) was used: the "ctx" struct could get freed in the middle of processing and we should save the state explicitly on the C stack. * bugfix: the standard Lua coroutine API was not available in the context of init_by_lua* and threw the "no request found" error. thanks Wolf Tivy for the report. * bugfix: massive compatibility fixes for the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report. * bugfix: Lua VM might run out of memory when "lua_code_cache" is off; now we always enforce a full Lua GC cycle right after unloading most of the loaded Lua modules when the Lua code cache is turned off. * change: raised the "lua_code_cache is off" warning to an alert. * upgraded NginxDevelKit to 0.2.19. * bugfix: fixed warnings from the Microsoft Visual C++ compiler. thanks Edwin Cleton for the report. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1004002 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies.
See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From savages at mozapps.com Mon Sep 30 07:18:41 2013 From: savages at mozapps.com (shaun) Date: Mon, 30 Sep 2013 00:18:41 -0700 Subject: extra ? In-Reply-To: <20130927173453.GJ19345@craic.sysops.org> References: <52456A71.8090701@mozapps.com> <20130927173453.GJ19345@craic.sysops.org> Message-ID: <524925D1.40605@mozapps.com> I have a problem and I am very confused. There is an extra "?" added to the end of the rewrite. I have no idea why; I look at the logs, and it magically appears. I want to reload the login page, but nothing happens, any ideas?

location /login/ {
    if ($args) {
        set $lin 1;
        rewrite ^/login/login(.*)$ /auth$1;
    }
    alias /var/www/login/;
}

location /hzc/ws {
    if ($http_cookie !~* 'hzc') {
        rewrite ^/hzc(.*)$ /login/;
    }
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Real-Port $remote_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_pass_request_headers on;
    proxy_pass http://gofw;
}

-------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: text/x-log Size: 9166 bytes Desc: not available URL: From r at roze.lv Mon Sep 30 07:30:07 2013 From: r at roze.lv (Reinis Rozitis) Date: Mon, 30 Sep 2013 10:30:07 +0300 Subject: High response time at high concurrent connections In-Reply-To: References: Message-ID: <598406E1B7B44BB785F6A9413E8F528B@MezhRoze> > My question is why response times increase when concurrency level is > increased? What are your worker_processes and worker_connections values? p.s.
also, to be sure: has the OS open-files limit been increased from the typical default of 1024?

rr

From nginx-forum at nginx.us  Mon Sep 30 12:11:13 2013
From: nginx-forum at nginx.us (revirii)
Date: Mon, 30 Sep 2013 08:11:13 -0400
Subject: upstream+ip_hash: hash valid global?
In-Reply-To: 
References: 
Message-ID: 

hmm... no one? Is this unknown or a secret? At least I wasn't able to find any detailed documentation about this.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243162,243259#msg-243259

From mdounin at mdounin.ru  Mon Sep 30 12:19:27 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Sep 2013 16:19:27 +0400
Subject: extra ?
In-Reply-To: <524925D1.40605@mozapps.com>
References: <52456A71.8090701@mozapps.com> <20130927173453.GJ19345@craic.sysops.org> <524925D1.40605@mozapps.com>
Message-ID: <20130930121927.GA56438@mdounin.ru>

Hello!

On Mon, Sep 30, 2013 at 12:18:41AM -0700, shaun wrote:

> I am very confused. There is an extra "?" added to the end of the
> rewrite. I have no idea why, I look at the logs, and is magically
> appears.

Could you please point out where the "?" that confuses you is? Please note that in debug log lines like

> 2013/09/29 23:42:00 [debug] 3386#0: *1 internal redirect: "/login/index.html?"

the "?" character is printed unconditionally, as a separator between r->uri and r->args.

> I want to reload the login page, but nothing happens, any ideas?

As per the debug log you've provided, the "/var/www/login/index.html" file is properly returned, as configured. Not sure what you want nginx to do instead.

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From mdounin at mdounin.ru  Mon Sep 30 12:32:21 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Sep 2013 16:32:21 +0400
Subject: upstream+ip_hash: hash valid global?
In-Reply-To: 
References: 
Message-ID: <20130930123221.GB56438@mdounin.ru>

Hello!

On Mon, Sep 30, 2013 at 08:11:13AM -0400, revirii wrote:

> hmm... no one? Is this unknown or a secret?
> At least I wasn't able to find
> any detailed documentation about this.

It's an implementation detail. As of now, two identical upstream{} blocks will map the same IP address to the same peer's number. But it's not something guaranteed.

-- 
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us  Mon Sep 30 13:16:14 2013
From: nginx-forum at nginx.us (revirii)
Date: Mon, 30 Sep 2013 09:16:14 -0400
Subject: upstream+ip_hash: hash valid global?
In-Reply-To: <20130930123221.GB56438@mdounin.ru>
References: <20130930123221.GB56438@mdounin.ru>
Message-ID: 

Hi,

thanks for your answer :-)

> It's an implementation detail. As of now, two identical
> upstream{} blocks will map the same ip address to the same peer's
> number. But it's not something guaranteed.

ok, this is the behaviour when the upstreams are identical, i.e. they have the same backends. That would be ok for me.

But what if the backends are not identical? My example was:

upstream one {
    server backendA;
    server backendB;
    server backendC;
}

upstream two {
    server backendA;
    server backendD;
    server backendE;
}

If a user sends a request -> upstream:one -> backendA and then makes a request where upstream:two is used, is he then sent to backendA as well? Ok, this would be nice to know, but it's not that important ;-)

revirii

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243162,243262#msg-243262

From thijskoerselman at gmail.com  Mon Sep 30 13:42:50 2013
From: thijskoerselman at gmail.com (Thijs Koerselman)
Date: Mon, 30 Sep 2013 15:42:50 +0200
Subject: Using add_header at server level context
Message-ID: 

From the add_header docs I understand that it works at location, http and server context. But when I use add_header at the server level, I don't see the headers being added to the response.
For example, my server config starts with:

server {
    listen 9088;
    server_name localhost;
    tcp_nodelay on;
    etag on;
    access_log on;
    add_header X-AppServer $upstream_addr;
    add_header X-AppServer-Status $upstream_status;
    add_header X-Cache $upstream_cache_status;

Am I missing something, or is this just not working at the server level for some reason?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us  Mon Sep 30 14:25:00 2013
From: nginx-forum at nginx.us (laltin)
Date: Mon, 30 Sep 2013 10:25:00 -0400
Subject: High response time at high concurrent connections
In-Reply-To: 
References: 
Message-ID: <4bbe928a919b3ee7755228bff5db9655.NginxMailingListEnglish@forum.nginx.org>

But looking at the tornado logs I expect around 2000 reqs/sec. Assuming that each request is handled in 2 ms, one instance can handle 500 reqs/sec, and with 4 instances it should be 2000 reqs/sec. But it is stuck at 1200 reqs/sec. I wonder why it is stuck at that point? Does increasing the number of instances change the result?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243246,243255#msg-243255

From francis at daoine.org  Mon Sep 30 14:30:06 2013
From: francis at daoine.org (Francis Daly)
Date: Mon, 30 Sep 2013 15:30:06 +0100
Subject: Using add_header at server level context
In-Reply-To: 
References: 
Message-ID: <20130930143006.GK19345@craic.sysops.org>

On Mon, Sep 30, 2013 at 03:42:50PM +0200, Thijs Koerselman wrote:

Hi there,

> From the add_header docs I understand that it works at location, http and
> server context. But when I use add_header at the server level I don't see
> the headers being added to the response.

> Am I missing something or is this just not working at the server level for
> some reason?

You're missing something. You're either missing that if the second argument to add_header expands to empty, then the header is not added; or that configuration directive inheritance is by replacement, not addition.
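Because inheritance is by replacement, a location that declares any add_header of its own discards the whole inherited set. A minimal sketch of the usual workaround (the location path and the Cache-Control header here are illustrative, not from the original config):

```nginx
server {
    listen 9088;
    add_header X-AppServer $upstream_addr;

    location /static/ {
        # Declaring add_header in this location replaces the inherited
        # set entirely, so the server-level header must be repeated
        # here if it is still wanted on these responses.
        add_header X-AppServer $upstream_addr;
        add_header Cache-Control "public, max-age=3600";
    }
}
```

Note also that a header whose value expands to an empty string (e.g. $upstream_addr on a request that never reached an upstream) is silently omitted.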
==
server {
    listen 8080;
    add_header X-Server server-level;
    add_header X-Surprise $http_surprise;

    location /one {
        return 200 "location one";
    }

    location /two {
        return 200 "location two";
        add_header X-Location two;
    }
}
==

Compare the outputs you actually get from

curl -i http://127.0.0.1:8080/one
curl -i http://127.0.0.1:8080/two
curl -i -H Surprise:value http://127.0.0.1:8080/one

with what you expect to get.

f

-- 
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us  Mon Sep 30 14:51:28 2013
From: nginx-forum at nginx.us (jlintz)
Date: Mon, 30 Sep 2013 10:51:28 -0400
Subject: enabling keepalive increases request time
Message-ID: 

Hi,

I'm using nginx 1.2.9 behind Amazon's ELB. We recently moved our servers behind the ELB, and in testing, enabling keepalives between the server and the ELB, we are seeing request times double from around 25 ms to 50 ms. I tried disabling postpone_output since I thought it was a buffering issue, but still saw increased response times. Does anyone have any ideas what could explain this? Enabling keepalive improves other metrics for us, such as TCP timeouts and CPU utilization, so we'd like to enable it if we can. The nginx servers are just proxying to backend servers, also utilizing keepalive on the upstream. Below are the main options we are using:

worker_processes 2;
worker_rlimit_nofile 500000;

events {
    worker_connections 20000;
}

http {
    sendfile on;
    tcp_nopush off;
    tcp_nodelay off;
    gzip off;
    keepalive_requests 100000;
    output_buffers 8 64k;
    postpone_output 0;

    server {
        listen *:80 default deferred backlog=16384;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243267,243267#msg-243267

From mdounin at mdounin.ru  Mon Sep 30 15:06:36 2013
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Sep 2013 19:06:36 +0400
Subject: upstream+ip_hash: hash valid global?
In-Reply-To: 
References: <20130930123221.GB56438@mdounin.ru>
Message-ID: <20130930150636.GH56438@mdounin.ru>

Hello!
On Mon, Sep 30, 2013 at 09:16:14AM -0400, revirii wrote:

> Hi,
>
> thanks for your answer :-)
>
> > It's an implementation detail. As of now, two identical
> > upstream{} blocks will map the same ip address to the same peer's
> > number. But it's not something guaranteed.
>
> ok, this is the behaviour when the upstreams are identical, i.e. they have
> the same backends. That would be ok for me.
>
> But what if the backends are not identical? My example was:
>
> upstream one {
>     server backendA;
>     server backendB;
>     server backendC;
> }
>
> upstream two {
>     server backendA;
>     server backendD;
>     server backendE;
> }
>
> If a user sends a request -> upstream:one -> backendA and then makes a
> request where upstream:two is used, is he then sent to backendA as well?
> Ok, this would be nice to know, but it's not that important ;-)

As long as all the servers configured map to the same number of peers (in the simplest case, each "backendX" resolves to a single IP address), such backends are identical for the purposes of the above, and the request will be sent to "backendA" in both cases. But, again, this isn't something guaranteed.

-- 
Maxim Dounin
http://nginx.org/en/donation.html
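A minimal sketch of the situation discussed in this thread. The backend names are placeholders; note that ip_hash must be declared inside each upstream block for the hashing to apply at all, and the cross-block consistency described here is an implementation detail of this nginx version, not a documented guarantee:

```nginx
# Both blocks contain the same number of peers. A given client address
# hashes to the same peer *position* in each block, so a client mapped
# to the first peer receives backendA from either upstream -- even
# though the other servers differ.
upstream one {
    ip_hash;
    server backendA;
    server backendB;
    server backendC;
}

upstream two {
    ip_hash;
    server backendA;
    server backendD;
    server backendE;
}
```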