From mdounin at mdounin.ru Wed Feb 1 00:04:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Feb 2012 04:04:24 +0400 Subject: rtsp over http response blocked by nginx In-Reply-To: <2acf6c791d17ab41322379746c1e452e@ruby-forum.com> References: <2acf6c791d17ab41322379746c1e452e@ruby-forum.com> Message-ID: <20120201000424.GR67687@mdounin.ru> Hello! On Tue, Jan 31, 2012 at 02:24:50PM +0100, eric elg wrote: > Hello > > I am trying to use NGINX as reverse proxy for streaming using QuickTime > and rtsp over http. > QuicKTime send an http GET to my reverse proxy in a public zone, then my > reverse proxy send the request to my video server in a private zone. > What I can see is that the http GET request is received by NGINX on the > reverse proxy, then by my video server, which responds immediately with > a 200 OK. > This response is received by the reverse proxy, but it seems that NGINX > block the response. (see log file attached). The log provided suggests that backend neither close connection nor send anything. I suspect it's waiting for another request from the client before sending additional data, but didn't get it due to buffering in nginx. You may try adding proxy_buffering off; to see if it helps. On the other hand, the following link: http://developer.apple.com/quicktime/icefloe/dispatch028.html suggests that rtsp over http uses POST requests with some arbitrary large Content-Length. It's likely to be next problem you'll encounter. And this is not going to work through nginx. Maxim Dounin From mdounin at mdounin.ru Wed Feb 1 01:23:09 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Feb 2012 05:23:09 +0400 Subject: rtsp over http response blocked by nginx In-Reply-To: <20120201000424.GR67687@mdounin.ru> References: <2acf6c791d17ab41322379746c1e452e@ruby-forum.com> <20120201000424.GR67687@mdounin.ru> Message-ID: <20120201012309.GU67687@mdounin.ru> Hello! On Wed, Feb 01, 2012 at 04:04:24AM +0400, Maxim Dounin wrote: > Hello! > > On Tue, Jan 31, 2012 at 02:24:50PM +0100, eric elg wrote: > > > Hello > > > > I am trying to use NGINX as reverse proxy for streaming using QuickTime > > and rtsp over http. > > QuicKTime send an http GET to my reverse proxy in a public zone, then my > > reverse proxy send the request to my video server in a private zone. > > What I can see is that the http GET request is received by NGINX on the > > reverse proxy, then by my video server, which responds immediately with > > a 200 OK. > > This response is received by the reverse proxy, but it seems that NGINX > > block the response. (see log file attached). > > The log provided suggests that backend neither close connection > nor send anything. I suspect it's waiting for another request > from the client before sending additional data, but didn't get it > due to buffering in nginx. You may try adding > > proxy_buffering off; > > to see if it helps. > > On the other hand, the following link: > > http://developer.apple.com/quicktime/icefloe/dispatch028.html > > suggests that rtsp over http uses POST requests with some > arbitrary large Content-Length. It's likely to be next problem > you'll encounter. And this is not going to work through nginx. Hm, the https://helixcommunity.org/viewcvs/protocol/common/util/hxcloakedsocket.cpp?view=markup claims it should switch to "multi-post mode" then, so it looks like I was wrong and rtsp over http will actually work. So the only real problem is response buffering in nginx, which may be easily swithed off (see above). 
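For reference, a minimal proxy location with buffering switched off might look like the following (the backend address is only a placeholder, not the actual setup discussed here):

    location / {
        # placeholder address of the video server in the private zone
        proxy_pass http://192.0.2.10:80;
        # pass the backend response straight through to the client, unbuffered
        proxy_buffering off;
    }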
Maxim Dounin From agentzh at gmail.com Wed Feb 1 04:07:51 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 1 Feb 2012 12:07:51 +0800 Subject: [ANN] ngx_lua module v0.4.1 released! In-Reply-To: References: Message-ID: Hi, folks! I'm happy to announce the v0.4.1 release of our ngx_lua module. You can get the release tarball from the download page: https://github.com/chaoslawful/lua-nginx-module/tags Here's the change log compared to the last formal release, v0.4.0: * bugfix: ngx.exit, ngx.redirect, ngx.exec, and ngx.req.set_uri(uri, true) could return (they should never return as per the documentation). this bug had appeared in ngx_lua v0.3.1rc4 and ngx_openresty 1.0.6.13. thanks @cyberty for reporting it. * bugfix: ngx_http_lua_header_filter_init was called with an argument which actually accepts none. this could cause compilation errors at least with gcc 4.3.4 as reported in github issue #80. thanks bigplum (Simon). * bugfix: fixed all the warnings from the clang static analyzer. * feature: allow use of the DDEBUG macro from the outside (via the -D DDEBUG=1 C compiler option). You can also view the HTML version of this change log here: http://wiki.nginx.org/HttpLuaModule#v0.4.1 Special thanks go to all of our contributors and users! I'm currently focusing on the ngx_lua cosocket branch, which will become the v0.5.x series. With the upcoming cosocket support, we'll be able to code up nonblocking network client drivers for various backend services (memcached, mysql, redis, to name a few) in pure Lua, and these drivers could be even faster than the old approach that requires combining nginx subrequests and nginx upstream modules. This Nginx module embeds the Lua 5.1 interpreter or LuaJIT 2.0 into the nginx core and integrates the powerful Lua threads (aka Lua coroutines) into the nginx event model by means of nginx subrequests. Unlike Apache's mod_lua and Lighttpd's mod_magnet, Lua code written atop this module can be 100% non-blocking on network traffic as long as you use the ngx.location.capture or ngx.location.capture_multi interfaces to let the Nginx core do all your requests to mysql, postgresql, memcached, redis, upstream http web services, and so on. This module is also included and enabled by default in our ngx_openresty bundle: http://openresty.org/ You can find the complete documentation for this module on the following wiki page: http://wiki.nginx.org/HttpLuaModule And you can always get the latest source code from the git repository here: https://github.com/chaoslawful/lua-nginx-module Have fun! -agentzh From agentzh at gmail.com Wed Feb 1 04:57:09 2012 From: agentzh at gmail.com (agentzh) Date: Wed, 1 Feb 2012 12:57:09 +0800 Subject: [ANN] ngx_openresty stable version 1.0.10.48 released! In-Reply-To: References: Message-ID: Hello, folks! I'm happy to announce that the new stable release of ngx_openresty, 1.0.10.48, has just been kicked out of the door: http://openresty.org/#Download This is the 3rd stable release of ngx_openresty that is based on the Nginx core 1.0.10, which is a maintenance release. Special thanks go to all our contributors and users for helping to make this release happen :) Here goes the complete change log for this release, as compared to the last stable release, 1.0.10.44, released about two weeks ago: - upgraded LuaNginxModule to 0.4.1. - bugfix: ngx_http_lua_header_filter_init was called with an argument which actually accepts none. this could cause compilation errors at least with gcc 4.3.4 as reported in github issue #80. thanks bigplum (Simon). 
- bugfix: fixed all the warnings from the clang static analyzer. - bugfix: ngx.exit, ngx.redirect, ngx.exec, and ngx.req.set_uri(uri, true) could return (they should never return as per the documentation). this bug had appeared in ngx_lua v0.3.1rc4 and ngx_openresty 1.0.6.13. thanks @cyberty for reporting it. - feature: allow use of the DDEBUG macro from the outside (via the -D DDEBUG=1 cc option). - upgraded DrizzleNginxModule to v0.1.2rc6. - bugfix: fixed all the warnings from the clang static analyzer. - feature: allow use of the DDEBUG macro from the outside (via the -D DDEBUG=1 cc option). - upgraded EchoNginxModule to 0.38rc1, SetMiscNginxModule to 0.22rc5, HeadersMoreNginxModule to 0.17rc1, and MemcNginxModule to 0.13rc3, to allow use of the DDEBUG macro from the outside (via the -D DDEBUG=1 cc option). As always, you're welcome to report bugs and feature requests either here or directly to me :) OpenResty (aka. ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies. By taking advantage of various well-designed Nginx modules, OpenResty effectively turns the nginx server into a powerful web app server, in which web developers can use the Lua programming language to script various existing nginx C modules and Lua modules and construct extremely high-performance web applications that are capable of handling 10K+ connections. OpenResty aims to run your server-side web app completely in the Nginx server, leveraging Nginx's event model to do non-blocking I/O not only with the HTTP clients, but also with remote backends like MySQL, PostgreSQL, Memcached, and Redis. You can find more details on the homepage of ngx_openresty here: http://openresty.org Enjoy! -agentzh -------------- next part -------------- An HTML attachment was scrubbed... URL: From peacech at gmail.com Wed Feb 1 05:17:13 2012 From: peacech at gmail.com (Charles) Date: Wed, 1 Feb 2012 12:17:13 +0700 Subject: error_page 413 in nginx 1.1.14 Message-ID: Hi, I was using nginx 1.1.14, and I have an error_page configuration like this client_body_max_size 10m; error_page 413 /error/http/413; When I POST-ed a 15mb file to /error/test, what happened is that /error/test was displayed again with $_SERVER['REQUEST_METHOD'] set to 'GET' (I was using PHP), but the browser still sees the request as POST. I have checked that /error/http was never called. Is this a bug or an error in the configuration? Thanks in advance. From reallfqq-nginx at yahoo.fr Wed Feb 1 05:32:58 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 Feb 2012 00:32:58 -0500 Subject: Files in location Message-ID: Hello, I would like to set up a rule redirecting the root location to another address, but to serve files inside this location directly. I tried the following: location = / { rewrite .* http://inter.net permanent; } http://intra.net/ is redirected as wanted http://intra.net/codes.txt is also redirected... but that is not intended How to avoid that? Should I try something with an if or a try_files directive on the $uri variable? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Feb 1 06:02:07 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Wed, 01 Feb 2012 06:02:07 +0000 Subject: Files in location In-Reply-To: References: Message-ID: <87lionf5wg.wl%appa@perusio.net> On 1 Fev 2012 05h32 WET, reallfqq-nginx at yahoo.fr wrote: > [1 ] > [1.1 ] > Hello, > > I would like to set up a rule redirecting the root location to > another address, but to serve files inside this location directly. > I tried the following: location = / { rewrite .* http://inter.net > permanent; } location = / { return 301 http://inter.net; } location / { # serve the files here without a redirect } --- appa From reallfqq-nginx at yahoo.fr Wed Feb 1 07:08:01 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 Feb 2012 02:08:01 -0500 Subject: Files in location In-Reply-To: <87lionf5wg.wl%appa@perusio.net> References: <87lionf5wg.wl%appa@perusio.net> Message-ID: Thanks, It appeared that with some other tests my solution was working...... Browser cache troubles, it seems. Sorry to have bothered you :o\ Thanks anyway, --- *B. R.* On Wed, Feb 1, 2012 at 01:02, Ant?nio P. P. Almeida wrote: > On 1 Fev 2012 05h32 WET, reallfqq-nginx at yahoo.fr wrote: > > > [1 ] > > [1.1 ] > > Hello, > > > > I would like to set up a rule redirecting the root location to > > another address, but to serve files inside this location directly. > > I tried the following: location = / { rewrite .* http://inter.net > > permanent; } > > location = / { > return 301 http://inter.net; > } > > location / { > # serve the files here without a redirect > } > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at sikora.nu Wed Feb 1 01:18:38 2012 From: piotr at sikora.nu (Piotr Sikora) Date: Wed, 1 Feb 2012 02:18:38 +0100 Subject: Module Advice - Cassandra / Thrift In-Reply-To: References: <0c6891000f475719ea484a407ee60e1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <304B7C10FDD644DB85A9079CCD8EBE8F@Desktop> Hi, > Piotr Sikora > has done something quite generic in his ngx_zeromq module, maybe Piotr > can give some advice here. Thrift is doing only (de-)serialization, so the easiest way would be to just hook into nginx's upstream module without using any trickery or 3rd-party libraries (this requires writing your own (de-)serialization logic). Take a look at FastCGI module, for example. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From anmartin at admin-auf-zeit.de Wed Feb 1 08:39:45 2012 From: anmartin at admin-auf-zeit.de (Andreas Martin) Date: Wed, 01 Feb 2012 09:39:45 +0100 Subject: status 0 on proxy MISS Message-ID: <4F28FA51.4090308@admin-auf-zeit.de> Hello. I'm using nginx as proxy for an apache. Somtimes, on a MISS, the logged status is 0, instead of 200 (or anything else). The log format is configured like this: log_format cache '[$time_local] $remote_addr - $request_time - ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '"$request" ($status) ' '"$http_user_agent" '; The value 0 occurs for the variable $status Has anyone any idea, what the status 0 means or why it occurs? 
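For what it's worth, a variant of the format that also records the upstream status next to the client-facing one (just a sketch) would be:

    log_format cache '[$time_local] $remote_addr - $request_time - '
                     '$upstream_cache_status '
                     '"$request" ($status/$upstream_status) '
                     '"$http_user_agent"';

That way a MISS where the backend never produced a complete response should show up with an empty or "-" upstream status.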
Kind regards Andreas From nginx-forum at nginx.us Wed Feb 1 11:32:35 2012 From: nginx-forum at nginx.us (lockev3.0) Date: Wed, 01 Feb 2012 06:32:35 -0500 Subject: Default_server catch all block not working In-Reply-To: References: <87vco0vpz6.wl%appa@perusio.net> <8685c4f3e7a9b75a055b4057913992e7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <170f1be111c44da4f853f612255ac573.NginxMailingListEnglish@forum.nginx.org> Ok, first of all thanks a lot Antonio for your answer. I've followed your instructions adding following block to the very beggining of my only one nginx/site-enabled (in just one of my 2 balanced nginx): server { listen 80 default_server; server_name _; proxy_intercept_errors on; error_log /var/log/nginx/000default-error.log debug; access_log /var/log/nginx/000default-access.log; return 444; } below this block came the 3 vhost I serve, all of them with plenty of server_name's. More "clues" I surely shoud have given at my first post :: .- Architecture :: AWS Elastic Load Balancer ====> 2 nginx each one proxying to its own apache2 .- My php application relies a lot on apache redirects in htaccess of the style: .*whatever/ .*whatever.php .- As soon as i enable the above block, Server seems to respond Ok but the 000default-access.log starts to be filled with: ..-[A] Plenty of requests responded with the 444 code .- [B] Plenty of strange request like this ::: [01/Feb/2012:12:13:40 +0100] "-" 400 0 "-" "-" .- In less than 3 minutes, suddenly the above access log stops receiving that bunch of 444 responded requests[A] and it only shows the [B] from time to time (some buffer overloaded?) Just to say, the debug level and logs probably I did not use it well as it is not adding actual extra info. should it be added to in the rest of the server blocks below the default one? like an X-File to me ......don't know what else to do or try Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221572,221854#msg-221854 From bhuntington25 at gmail.com Wed Feb 1 16:31:14 2012 From: bhuntington25 at gmail.com (Brad Huntington) Date: Wed, 1 Feb 2012 11:31:14 -0500 Subject: UserDir Message-ID: I'm having trouble getting UserDir to work. I think it has something to do with the PHP fastcgi options. Can someone please take a look at my configuration and double check what I have? Any help is appreciated. Thanks. http://pastebin.com/3wRTVXJW -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Wed Feb 1 17:27:07 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 2 Feb 2012 00:27:07 +0700 Subject: UserDir In-Reply-To: References: Message-ID: On Wed, Feb 1, 2012 at 11:31 PM, Brad Huntington wrote: > I'm having trouble getting UserDir to work. I think it has something to do > with the PHP fastcgi options. Can someone please take a look at my > configuration and double check what I have? Any help is appreciated. Thanks. > > http://pastebin.com/3wRTVXJW > http://pastebin.com/6KHDwhvx note that using this config, anyone can read (and possibly write) others' files with little effort thanks to php process running as single user. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From francis at daoine.org Wed Feb 1 17:30:48 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 1 Feb 2012 17:30:48 +0000 Subject: UserDir In-Reply-To: References: Message-ID: <20120201173048.GA22076@craic.sysops.org> On Wed, Feb 01, 2012 at 11:31:14AM -0500, Brad Huntington wrote: Hi there, > I'm having trouble getting UserDir to work. 
I think it has something to do > with the PHP fastcgi options. Can someone please take a look at my > configuration and double check what I have? Any help is appreciated. Thanks. What's the trouble you're seeing? GET /~user/file fails? Or GET /file.php fails? Or GET /~user/file.php fails? > http://pastebin.com/3wRTVXJW In nginx, each request is handled by exactly one location{} block. Only the configuration in, or inherited into, that location{} matters. You have one regex location for .php urls, and one regex location for "userdir" urls. A single request won't match both of those. If you can describe your desired url->file mapping, it will likely be easier to ensure that the nginx.conf matches that. Good luck, f -- Francis Daly francis at daoine.org From lists at wildgooses.com Wed Feb 1 19:24:56 2012 From: lists at wildgooses.com (Ed W) Date: Wed, 01 Feb 2012 19:24:56 +0000 Subject: Apply URL encoding during rewrite rule Message-ID: <4F299188.4070205@wildgooses.com> Hi, I need to do a redirect and pass the old URL to the redirected destination. How might I achieve this using nginx? I tried a rewrite rule simply: rewrite ^/(.*) $scheme://$server_addr/?redirect_to=$scheme://$host$request_uri redirect; However, this simply appends the old url verbatim on the end of the ?redirect_to parameter I tried a few other variables, but I don't see any way to ask Nginx to url-encode something for me? Any other thoughts on ways to achieve the desired effect? The redirected to destination is a web app under my control - we could proxy the initial request to the web app and have it generate the redirect, but I was hoping to decouple things Thanks for any thoughts Ed W From appa at perusio.net Wed Feb 1 19:38:43 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 01 Feb 2012 19:38:43 +0000 Subject: Apply URL encoding during rewrite rule In-Reply-To: <4F299188.4070205@wildgooses.com> References: <4F299188.4070205@wildgooses.com> Message-ID: <8739aufinw.wl%appa@perusio.net> On 1 Fev 2012 19h24 WET, lists at wildgooses.com wrote: > Hi, I need to do a redirect and pass the old URL to the redirected > destination. How might I achieve this using nginx? > > I tried a rewrite rule simply: rewrite ^/(.*) > $scheme://$server_addr/?redirect_to=$scheme://$host$request_uri > redirect; To escape do a rewrite with a capture. Above you're capturing but not using the numeric group $1. Try: rewrite ^/(.*)$ $scheme://$server_addr/?redirect_to=$scheme://$host/$1 redirect. --- appa From lists at wildgooses.com Wed Feb 1 20:59:41 2012 From: lists at wildgooses.com (Ed W) Date: Wed, 01 Feb 2012 20:59:41 +0000 Subject: Apply URL encoding during rewrite rule In-Reply-To: <8739aufinw.wl%appa@perusio.net> References: <4F299188.4070205@wildgooses.com> <8739aufinw.wl%appa@perusio.net> Message-ID: <4F29A7BD.5030200@wildgooses.com> On 01/02/2012 19:38, Ant?nio P. P. Almeida wrote: > On 1 Fev 2012 19h24 WET, lists at wildgooses.com wrote: > >> Hi, I need to do a redirect and pass the old URL to the redirected >> destination. How might I achieve this using nginx? >> >> I tried a rewrite rule simply: rewrite ^/(.*) >> $scheme://$server_addr/?redirect_to=$scheme://$host$request_uri >> redirect; > To escape do a rewrite with a capture. Above you're capturing but not > using the numeric group $1. Try: > > rewrite ^/(.*)$ $scheme://$server_addr/?redirect_to=$scheme://$host/$1 redirect. > > OK, that nearly works, but it adds an extra & when nginx appends the existing params. 
This is closer: rewrite ^(.*)$ $scheme://$server_addr/?redirect_to=$scheme%3A%2F%2F$host$1%3F$args? redirect; However, for some reason the $1 is still *decoded* ? From reading various past threads on this, I thought that a regexp would grab the still encoded $uri part? So I don't see any difference between using $1 or $uri to capture the uri, and in both cases it will decode a uri such as: http://www/as%2Fdf as http://www/as/df Am I missing something that is causing this to happen? How to grab the $uri without decoding? Nearly there! Thanks Ed W From reallfqq-nginx at yahoo.fr Wed Feb 1 21:45:44 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 Feb 2012 16:45:44 -0500 Subject: autoindex directive in 'if' section In-Reply-To: References: Message-ID: Hello, Finally, a little something still annoying me... Since the location is based on the new /post_redirect/ folder, the indexing page shows: Index of /post_redirect/******* It is unaesthetic to me, since I would like the original URI there. :o\ Any trick on that? --- *B. R.* On Fri, Jan 27, 2012 at 22:16, B.R. wrote: > So simple... > Thanks for everything Max, it works perfectly! ;o) > --- > *B. R.* > > > On Fri, Jan 27, 2012 at 19:36, Max wrote: > >> 28 ?????? 2012, 03:11 ?? "B.R." **: > To end the job, I need to get rid >> of the 'post_rewrite' part of the path, > so my files get served as >> '/path/to/files' and not > '/post_rewrite/path/to/files'. > I tried >> rewrite, but of course that gets me out of the current location > >> block...... > > Should I replace the content of $uri? Is there any variable >> regex > substitution mechanism in nginx? > Couldn't find that on Google nor >> wiki. Maybe I am not searching for the > right thing. >> http://wiki.nginx.org/HttpCoreModule#alias Use alias instead of root, it >> does exactly what you need: location ~ ^/post_rewrite/(.*)$ { internal; >> autoindex on; alias /sandbox/$1; } Max >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Thu Feb 2 00:08:12 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 02 Feb 2012 04:08:12 +0400 Subject: autoindex directive in 'if' section In-Reply-To: References: Message-ID: 02 ??????? 2012, 01:46 ?? "B.R." : > Since the location is based on the new /post_redirect/ folder, the indexing page shows: > Index of /post_redirect /******* > > It is unaesthetic to me, since I would like the original URI there. :o\ > Any trick on that? http://wiki.nginx.org/HttpLogModule Set whatever log_format you want inside the /post_redirect/ location block. http://wiki.nginx.org/HttpCoreModule#log_subrequest You can also turn log_subrequest off inside the if block to prevent the rewrite from showing up in the logs. Max From nginxyz at mail.ru Thu Feb 2 00:28:57 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 02 Feb 2012 04:28:57 +0400 Subject: autoindex directive in 'if' section In-Reply-To: References: Message-ID: 02 ??????? 2012, 01:46 ?? "B.R." : > Since the location is based on the new /post_redirect/ folder, the indexing > page shows: > Index of /post_redirect/******* > > It is unaesthetic to me, since I would like the original URI there. :o\ > Any trick on that? Sorry, I misunderstood your question. 
If you want autoindex to mask the directory name, you'll have to change the source code yourself: File src/http/modules/ngx_http_autoindex_module.c Function: ngx_http_autoindex_handler(ngx_http_request_t *r) Variable: dir You could extend the module to include a new command for such masking (autoindex_mask_dir "/dir_to_display/"), it shouldn't take more than 20 minutes to code. Max From reallfqq-nginx at yahoo.fr Thu Feb 2 00:51:05 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 1 Feb 2012 19:51:05 -0500 Subject: autoindex directive in 'if' section In-Reply-To: References: Message-ID: OK, thanks for the answer! --- *B. R.* On Wed, Feb 1, 2012 at 19:28, Max wrote: > > 02 ??????? 2012, 01:46 ?? "B.R." : > > > Since the location is based on the new /post_redirect/ folder, the > indexing > > page shows: > > Index of /post_redirect/******* > > > > It is unaesthetic to me, since I would like the original URI there. :o\ > > Any trick on that? > > Sorry, I misunderstood your question. If you want autoindex to mask > the directory name, you'll have to change the source code yourself: > > File src/http/modules/ngx_http_autoindex_module.c > Function: ngx_http_autoindex_handler(ngx_http_request_t *r) > Variable: dir > > You could extend the module to include a new command for > such masking (autoindex_mask_dir "/dir_to_display/"), it shouldn't > take more than 20 minutes to code. > > Max > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre.l.caron at gmail.com Thu Feb 2 03:54:16 2012 From: andre.l.caron at gmail.com (=?UTF-8?B?QW5kcsOpIENhcm9u?=) Date: Wed, 1 Feb 2012 22:54:16 -0500 Subject: Would like to implement WebSocket support Message-ID: ?Hi all! I've implemented the WebSocket wire protocol as an incremental parser, making it suitable for use in high-end asynchronous servers such as NGINX. The code is open source (BSD-licenced) and available on GitHub[1]. I plan on using it to tunnel other protocols (svn and git, in particular) over HTTP. I have a nice setup that works well on the client side, but I'd like to integrate this with my existing NGINX stack with includes virtual hosting and a bunch of other stuff (I can't use another server directly unless I use a non-default port). To the best of my understanding, NGINX has no support for WebSockets. The HTTP proxy module does not support HTTP 1.1 and WebSockets are incompatible with both SCGI and FastCGI because of the "Content-Length" problem (it is assumed to be 0 if unspecified). I'd like to implement an NGINX module that specifically handles WebSockets so that I can integrate my tunnel in my NGINX setup. I have absolutely no experience with the NGINX source code, but I've found a nice guide on writing NGINX modules[2]. After initial reading, I understand that I need to write an Upstream (proxy) handler. Is this correct? The HTTP proxy module has a scary note that says: > Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. Is this a limitation cause by NGINX's architecture, or is this by design (e.g. for validation of body against headers, etc.)? The bigger problem, however, is that there is no standard interface to application servers for this new WebSocket protocol. 
There is some discussion[3] on an Apache enhancement request that basically proposes a modification of CGI. Since CGI has already been demonstrated to be a performance problem, I'm looking for an alternate solution, maybe something closer to SCGI? Anyone have suggestions? Thanks, Andr? [1]:?https://github.com/AndreLouisCaron/cwebs [2]:?http://www.evanmiller.org/nginx-modules-guide.html [3]: https://issues.apache.org/bugzilla/show_bug.cgi?id=47485#c13 From anmartin at admin-auf-zeit.de Thu Feb 2 08:20:41 2012 From: anmartin at admin-auf-zeit.de (Andreas Martin) Date: Thu, 02 Feb 2012 09:20:41 +0100 Subject: status 0 on proxy MISS In-Reply-To: <4F28FA51.4090308@admin-auf-zeit.de> References: <4F28FA51.4090308@admin-auf-zeit.de> Message-ID: <4F2A4759.5080701@admin-auf-zeit.de> Hello. I think I found the cause of this error. In the error log, there are corresponding entries with "Too many open files". I'll increase this limit and check if the status 0 occurs again or (hopefully) not. Andreas Am 01.02.2012 09:39, schrieb Andreas Martin: > Hello. > > I'm using nginx as proxy for an apache. > Somtimes, on a MISS, the logged status is 0, instead of 200 (or anything > else). The log format is configured like this: > > log_format cache '[$time_local] $remote_addr - $request_time - ' > '$upstream_cache_status ' > 'Cache-Control: $upstream_http_cache_control ' > 'Expires: $upstream_http_expires ' > '"$request" ($status) ' > '"$http_user_agent" '; > > The value 0 occurs for the variable $status > > Has anyone any idea, what the status 0 means or why it occurs? > > Kind regards > > > Andreas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Thu Feb 2 12:10:17 2012 From: nginx-forum at nginx.us (lockev3.0) Date: Thu, 02 Feb 2012 07:10:17 -0500 Subject: Default_server catch all block not working In-Reply-To: <669ff20b8722106d70658bca002eed4e.NginxMailingListEnglish@forum.nginx.org> References: <4b142fa24c8249c90f455f6736fe5ccf.NginxMailingListEnglish@forum.nginx.org> <669ff20b8722106d70658bca002eed4e.NginxMailingListEnglish@forum.nginx.org> Message-ID: please help this bothering X-File is causing crawlers to index my site with plenty of non-existent subdomains .... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221572,221905#msg-221905 From lists at ruby-forum.com Thu Feb 2 14:33:15 2012 From: lists at ruby-forum.com (eric elg) Date: Thu, 02 Feb 2012 15:33:15 +0100 Subject: rtsp over http response blocked by nginx In-Reply-To: <2acf6c791d17ab41322379746c1e452e@ruby-forum.com> References: <2acf6c791d17ab41322379746c1e452e@ruby-forum.com> Message-ID: <51dff86b3867b3bc6be020201b6a94bf@ruby-forum.com> Hello, Thank's for your reply! Switching off the buffering as suggested allow to get the response of http GET, but as you said, POST requests have got an arbitrary large Content-Length and does not work through nginx. I have now to update my video server to be compliant with the multi-post mode. Eric -- Posted via http://www.ruby-forum.com/. From savages at mozapps.com Thu Feb 2 14:34:57 2012 From: savages at mozapps.com (Shaun Savage) Date: Thu, 02 Feb 2012 06:34:57 -0800 Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: <4F2A9F11.9010507@mozapps.com> I have been looking for websockets support. I have written a server push back end for nginx using the "keep-alive" module. I have one persistent connection for the server push. 
this connection is an async client control. then each new "command/request" from the client is a new http connection. I use fcgi, because fcgi also requires a "length" a single connection is not possible. websockets would make make my life easier( and faster). with websockets support nginx would become the defacto standard for web applications. because of client/browser security using another port for websockets is undesirable. I am not a nginx expert but I assume that the buffering the full request is a performance issue. i hope to hear more about websockets On 02/01/2012 07:54 PM, Andr? Caron wrote: > Hi all! > > I've implemented the WebSocket wire protocol as an incremental parser, making it > suitable for use in high-end asynchronous servers such as NGINX. The code is > open source (BSD-licenced) and available on GitHub[1]. I plan on using it to > tunnel other protocols (svn and git, in particular) over HTTP. I have a nice > setup that works well on the client side, but I'd like to integrate this with my > existing NGINX stack with includes virtual hosting and a bunch of other stuff (I > can't use another server directly unless I use a non-default port). > > To the best of my understanding, NGINX has no support for WebSockets. The HTTP > proxy module does not support HTTP 1.1 and WebSockets are incompatible with both > SCGI and FastCGI because of the "Content-Length" problem (it is assumed to be 0 > if unspecified). > > I'd like to implement an NGINX module that specifically handles WebSockets so > that I can integrate my tunnel in my NGINX setup. I have absolutely no > experience with the NGINX source code, but I've found a nice guide on writing > NGINX modules[2]. After initial reading, I understand that I need to write an > Upstream (proxy) handler. Is this correct? > > The HTTP proxy module has a scary note that says: > >> Note that when using the HTTP Proxy Module (or even when using FastCGI), the > entire client request will be buffered in nginx before being passed on to the > backend proxied servers. > > Is this a limitation cause by NGINX's architecture, or is this by design > (e.g. for validation of body against headers, etc.)? > > The bigger problem, however, is that there is no standard interface to > application servers for this new WebSocket protocol. There is some > discussion[3] on an Apache enhancement request that basically proposes a > modification of CGI. Since CGI has already been demonstrated to be a > performance problem, I'm looking for an alternate solution, maybe something > closer to SCGI? Anyone have suggestions? > > Thanks, Andr? > > [1]: https://github.com/AndreLouisCaron/cwebs > [2]: http://www.evanmiller.org/nginx-modules-guide.html [3]: > https://issues.apache.org/bugzilla/show_bug.cgi?id=47485#c13 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zzz at zzz.org.ua Thu Feb 2 17:36:51 2012 From: zzz at zzz.org.ua (Alexandr Gomoliako) Date: Thu, 2 Feb 2012 19:36:51 +0200 Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: On 2/2/12, Andr? Caron wrote: > NGINX modules[2]. After initial reading, I understand that I need to write > an Upstream (proxy) handler. Is this correct? Not really. > The HTTP proxy module has a scary note that says: > >> Note that when using the HTTP Proxy Module (or even when using FastCGI), >> the entire client request will be buffered in nginx before being passed on to >> the backend proxied servers. 
> > Is this a limitation cause by NGINX's architecture, or is this by design > (e.g. for validation of body against headers, etc.)? It just means that you can't use existing upstream modules and upstream interface. > The bigger problem, however, is that there is no standard interface to > application servers for this new WebSocket protocol. There is some > discussion[3] on an Apache enhancement request that basically proposes a > modification of CGI. Since CGI has already been demonstrated to be a > performance problem, I'm looking for an alternate solution, maybe something > closer to SCGI? Anyone have suggestions? I think what you need here is a simple protocol upgrade functionality that switches to tcp proxying for particular connection once it encounters upgrade in connection header. And everything else is up to application server. So you don't really need to parse websocket protocol in nginx unless it is your application server. From nginx-forum at nginx.us Thu Feb 2 21:32:12 2012 From: nginx-forum at nginx.us (EliteMossy) Date: Thu, 02 Feb 2012 16:32:12 -0500 Subject: Optimizing and cleaning config file Message-ID: <4c0cab040bea4f23074a5de77ac82205.NginxMailingListEnglish@forum.nginx.org> Can someone help optimize and clean my config file Im sure it can be made better, also would like it so its easier to add other servers to it http://pastebin.com/eUk2Vwrq Thanks Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221913,221913#msg-221913 From nginx-forum at nginx.us Fri Feb 3 10:01:35 2012 From: nginx-forum at nginx.us (maxxer) Date: Fri, 03 Feb 2012 05:01:35 -0500 Subject: running phpmyadmin on non-standard dir Message-ID: hi. I'm new to Nginx. I'd like to make serve phpmyadmin on a non-standard url. I'm on debian, and following this howto works great: http://rubyist-journal.com/2010/02/28/howto-nginx-php5-mysql-phpmyadmin-ubuntu-shortest-setup/ but if I change it this way: location /pma { root /usr/share/phpmyadmin/; index index.php index.html index.htm; location ~ ^/(.+\.php)$ { try_files $uri =404; root /usr/share/phpmyadmin/; fastcgi_pass backend; fastcgi_param HTTPS $fastcgi_https; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$request_filename; include /etc/nginx/fastcgi_params; } } I just get 404. 2 questions: * how to make it properly work? * how to debug these config errors? thanks a lot! maxxer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221927,221927#msg-221927 From edho at myconan.net Fri Feb 3 10:38:38 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 3 Feb 2012 17:38:38 +0700 Subject: running phpmyadmin on non-standard dir In-Reply-To: References: Message-ID: (complete answer on the bottom) On Fri, Feb 3, 2012 at 5:01 PM, maxxer wrote: > hi. I'm new to Nginx. > I'd like to make serve phpmyadmin on a non-standard url. There's no "standard url". Learn how "root" (and "alias") directive work. > * how to make it properly work? http://wiki.nginx.org/HttpCoreModule http://wiki.nginx.org/HttpFcgiModule > * how to debug these config errors? http://wiki.nginx.org/HttpLogModule http://wiki.nginx.org/CoreModule location = /pma { rewrite ^ /pma/ permanent; } location /pma/ { location /pma/(.*\.php)$ { set $script_filename /usr/share/phpmyadmin/$1; # testing file existence for security. 
# sadly in this case we can't use try_files since we're testing against actual file on disk # while try_files tries against uri if (!-f $script_filename) { return 404; } include fastcgi_params; fastcgi_param SCRIPT_FILENAME $script_filename; fastcgi_param HTTPS $fastcgi_https; fastcgi_pass backend; } location ~ ^/pma(|/.*)$ { alias /usr/share/phpmyadmin/$1; index index.php; } } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From ft at falkotimme.com Fri Feb 3 10:53:40 2012 From: ft at falkotimme.com (Falko Timme) Date: Fri, 3 Feb 2012 11:53:40 +0100 Subject: running phpmyadmin on non-standard dir References: Message-ID: <95EB815B298D42EE83BB99300F753FEF@notebook> http://www.howtoforge.com/running-phpmyadmin-on-nginx-lemp-on-debian-squeeze-ubuntu-11.04 Works with cgi.fix_pathinfo on and off. ----- Original Message ----- From: "maxxer" To: Sent: Friday, February 03, 2012 11:01 AM Subject: running phpmyadmin on non-standard dir > hi. I'm new to Nginx. > I'd like to make serve phpmyadmin on a non-standard url. > I'm on debian, and following this howto works great: > http://rubyist-journal.com/2010/02/28/howto-nginx-php5-mysql-phpmyadmin-ubuntu-shortest-setup/ > > but if I change it this way: > > location /pma { > root /usr/share/phpmyadmin/; > index index.php index.html index.htm; > location ~ ^/(.+\.php)$ { > try_files $uri =404; > root /usr/share/phpmyadmin/; > fastcgi_pass backend; > fastcgi_param HTTPS $fastcgi_https; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$request_filename; > include /etc/nginx/fastcgi_params; > } > } > > I just get 404. > 2 questions: > * how to make it properly work? > * how to debug these config errors? > > thanks a lot! > maxxer > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,221927,221927#msg-221927 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Feb 3 10:59:41 2012 From: nginx-forum at nginx.us (maxxer) Date: Fri, 03 Feb 2012 05:59:41 -0500 Subject: running phpmyadmin on non-standard dir In-Reply-To: <95EB815B298D42EE83BB99300F753FEF@notebook> References: <95EB815B298D42EE83BB99300F753FEF@notebook> Message-ID: Falko Timme Wrote: ------------------------------------------------------- > http://www.howtoforge.com/running-phpmyadmin-on-ng > inx-lemp-on-debian-squeeze-ubuntu-11.04 > > Works with cgi.fix_pathinfo on and off. thanks Falko, I'm usually a great fan of your howtos, but I with to serve phpmyadmin on the url /pma instead of /phpmyadmin, and I wasn't able to tweak the config to do this... Edho thanks, but your config makes the /pma url redirect to /phpmyadmin, and download the source of index.php. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221927,221934#msg-221934 From edho at myconan.net Fri Feb 3 11:13:14 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 3 Feb 2012 18:13:14 +0700 Subject: running phpmyadmin on non-standard dir In-Reply-To: References: <95EB815B298D42EE83BB99300F753FEF@notebook> Message-ID: On Fri, Feb 3, 2012 at 5:59 PM, maxxer wrote: > > Edho thanks, but your config makes the /pma url redirect to /phpmyadmin, > and download the source of index.php. > Sorry, I forgot a tilde. 
location ~ /pma/(.*\.php)$ { -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Fri Feb 3 14:24:54 2012 From: nginx-forum at nginx.us (blindlf) Date: Fri, 03 Feb 2012 09:24:54 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20120109140615.GE67687@mdounin.ru> Message-ID: I'm facing the same problem, the Windows auth cannot work with nginx. I care about the solution. How you resolve this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,72871,221944#msg-221944 From nginx-forum at nginx.us Fri Feb 3 15:15:32 2012 From: nginx-forum at nginx.us (cn_nginxer) Date: Fri, 03 Feb 2012 10:15:32 -0500 Subject: How to Windows auth working on nginx reverse proxy ??? In-Reply-To: References: <20120109140615.GE67687@mdounin.ru> Message-ID: <251d2d0b363c3b612e11a2ca77924613.NginxMailingListEnglish@forum.nginx.org> Hello, Actually there is no solution for time being, what I did was, I use digest authentication instead. But my case, if NTLM is abandoned the user need to enter password every time he log into the system, so you need to put that into your account. Should you have any further questions do not hesitate to come back to me. cn_nginxer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,72871,221948#msg-221948 From edho at myconan.net Fri Feb 3 15:26:07 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 3 Feb 2012 22:26:07 +0700 Subject: Is this how variable (set $var) inheritance works? Message-ID: (doesn't seem to be specified in documentation) My test showed: - anything set in server { } block is inherited: server { set $something /usr/share/something; ... location / { location /nested/ { # $something is set to /usr/share/something root $something; } } } - anything set in location { } block is not inherited: server { ... location /~ { location ~ ^/~([^/]+)(|/.*)$ { set $userfile /home/$1/public_html/$2; alias $userfile; location ~ \.php$ { include fastcgi_params; # $userfile is empty here fastcgi_param SCRIPT_FILENAME $userfile; fastcgi_pass 127.0.0.1:9000; } } } Is it correct? -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From caldcv at gmail.com Fri Feb 3 16:10:59 2012 From: caldcv at gmail.com (Chris) Date: Fri, 3 Feb 2012 11:10:59 -0500 Subject: running phpmyadmin on non-standard dir In-Reply-To: References: <95EB815B298D42EE83BB99300F753FEF@notebook> Message-ID: If you are inexperienced, do not run phpmyadmin publically as /phpmyadmin or you will fall behind a security update to find your system compromised (and now the new member in the botnet!) I used to hunt botnets for awhile and PhpMyAdmin was a common way to get in From appa at perusio.net Fri Feb 3 16:36:23 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 03 Feb 2012 16:36:23 +0000 Subject: running phpmyadmin on non-standard dir In-Reply-To: References: <95EB815B298D42EE83BB99300F753FEF@notebook> Message-ID: <87fwervpq0.wl%appa@perusio.net> On 3 Fev 2012 16h10 WET, caldcv at gmail.com wrote: > If you are inexperienced, do not run phpmyadmin publically as > /phpmyadmin or you will fall behind a security update to find your > system compromised (and now the new member in the botnet!) I used to > hunt botnets for awhile and PhpMyAdmin was a common way to get in Yep. 
There's a FD post by the Gentoo security team that exposes what an utter complete wreck security wise phpmyadmin is: http://seclists.org/fulldisclosure/2012/Jan/39 Use Chive: http://www.chive-project.com Don't forget to set: cgi.fix_pathinfo = 0 on the php.ini. You're gaining something in security terms by choosing Nginx over Apache, don't throw that under a bus by using phpmyadmin. --- appa From mdounin at mdounin.ru Fri Feb 3 18:10:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Feb 2012 22:10:24 +0400 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: References: Message-ID: <20120203181024.GS67687@mdounin.ru> Hello! On Fri, Feb 03, 2012 at 10:26:07PM +0700, Edho Arief wrote: > (doesn't seem to be specified in documentation) > > My test showed: > - anything set in server { } block is inherited: > > server { > set $something /usr/share/something; > ... > location / { > location /nested/ { > # $something is set to /usr/share/something > root $something; > } > } > } > > - anything set in location { } block is not inherited: > > server { > ... > location /~ { > location ~ ^/~([^/]+)(|/.*)$ { > set $userfile /home/$1/public_html/$2; > alias $userfile; > location ~ \.php$ { > include fastcgi_params; > # $userfile is empty here > fastcgi_param SCRIPT_FILENAME $userfile; > fastcgi_pass 127.0.0.1:9000; > } > } > } > > > > Is it correct? Yes, this is expected behavrious. Rewrite module directives are never inherited, and directives specified inside location are only executed if this exact location matches. See here for details: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html Maxim Dounin From edho at myconan.net Fri Feb 3 18:48:19 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 4 Feb 2012 01:48:19 +0700 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: <20120203181024.GS67687@mdounin.ru> References: <20120203181024.GS67687@mdounin.ru> Message-ID: On Sat, Feb 4, 2012 at 1:10 AM, Maxim Dounin wrote: > > Yes, this is expected behavrious. ?Rewrite module directives are > never inherited, and directives specified inside location are only > executed if this exact location matches. > > See here for details: > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html > I replaced the line: -fastcgi_param SCRIPT_FILENAME $userfile; +fastcgi_param SCRIPT_FILENAME /home/edho/public_html/test.php; But I still see this: 2012/02/03 13:30:16 [warn] 12889#0: *17 using uninitialized "userfile" variable, client: 118.136.36.164, server: localhost, request: "GET /~edho/test.php HTTP/1.1", host: "yutsuki.myconan.net" Why did I get warning for using uninitialized variables even though it's not specified at all in the relevant location block? Additionally: Is there any variable I can use for my case? $request_filename returned "/opt/nginx/" (bug?). Or do I have use separate block when using alias (ie. nothing is inherited when using alias)? -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From edho at myconan.net Fri Feb 3 18:50:55 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 4 Feb 2012 01:50:55 +0700 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: References: <20120203181024.GS67687@mdounin.ru> Message-ID: On Sat, Feb 4, 2012 at 1:48 AM, Edho Arief wrote: > On Sat, Feb 4, 2012 at 1:10 AM, Maxim Dounin wrote: >> >> Yes, this is expected behavrious. 
?Rewrite module directives are >> never inherited, and directives specified inside location are only >> executed if this exact location matches. >> >> See here for details: >> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html >> > > I replaced the line: > > ? ? ?-fastcgi_param SCRIPT_FILENAME $userfile; > ? ? ?+fastcgi_param SCRIPT_FILENAME /home/edho/public_html/test.php; > > > But I still see this: > > 2012/02/03 13:30:16 [warn] 12889#0: *17 using uninitialized "userfile" > variable, client: 118.136.36.164, server: localhost, request: "GET > /~edho/test.php HTTP/1.1", host: "yutsuki.myconan.net" > > Why did I get warning for using uninitialized variables even though > it's not specified at all in the relevant location block? > After a bit more thinking, does it mean that the "alias $userfile" from parent location block was inherited but the $userfile was passed literally? -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From edho at myconan.net Fri Feb 3 18:59:21 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 4 Feb 2012 01:59:21 +0700 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: References: <20120203181024.GS67687@mdounin.ru> Message-ID: On Sat, Feb 4, 2012 at 1:50 AM, Edho Arief wrote: >> Why did I get warning for using uninitialized variables even though >> it's not specified at all in the relevant location block? >> > > After a bit more thinking, does it mean that the "alias $userfile" > from parent location block was inherited but the $userfile was passed > literally? > More self-reply: it certainly looks like the $userfile is passed but still in variable, not expanded with the contents as the following test showed. Skipping "set $userfile ..." and put the capture in alias directly: location ~ ^/~([^/]+)(|/.*)$ { alias /home/$1/public_html/$2; location ~ \.php$ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass 127.0.0.1:9000; } } resulted in this: 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: "REDIRECT_STATUS: 200" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "/home/" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script capture: "" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "/public_html/" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script capture: "" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "SCRIPT_FILENAME" 2012/02/03 13:51:28 [debug] 14951#0: *13 http script var: "/home//public_html/" 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: "SCRIPT_FILENAME: /home//public_html/" 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: "HTTP_HOST: yutsuki.myconan.net" Which explains why I got "/opt/nginx/" for $request_filename on previous config - the \.php$ location block was basically using `alias $userfile` but as $userfile was empty, it became empty alias for that block. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Fri Feb 3 19:16:28 2012 From: nginx-forum at nginx.us (mschipperheyn) Date: Fri, 03 Feb 2012 14:16:28 -0500 Subject: Redirecting based on cookie value Message-ID: <2f863c8df6acb8219275d7bc2cf549c9.NginxMailingListEnglish@forum.nginx.org> Hi, I'm want to redirect the user based on the value of a cookie, if present. The scenario is one where there is a "starter" homepage for first time users. As soon as users login, I write a cookie with the relative path of another homepage. 
If the cookie is present, I want to redirect the user to the cookie value: the relative path. There are multiple cookies present in the request, such as session cookies. I could of course, let the homepage be determined on the session on the app server side, but since this is the landing page, I prefer to support it with nginx reverse proxy caching. Hence the idea of using cookies to redirect the user once he has logged in once. Any ideas? Marc Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221965,221965#msg-221965 From appa at perusio.net Fri Feb 3 19:27:09 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 03 Feb 2012 19:27:09 +0000 Subject: Redirecting based on cookie value In-Reply-To: <2f863c8df6acb8219275d7bc2cf549c9.NginxMailingListEnglish@forum.nginx.org> References: <2f863c8df6acb8219275d7bc2cf549c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87bopfvhte.wl%appa@perusio.net> On 3 Fev 2012 19h16 WET, nginx-forum at nginx.us wrote: > Hi, > > I'm want to redirect the user based on the value of a cookie, if > present. > > The scenario is one where there is a "starter" homepage for first > time users. As soon as users login, I write a cookie with the > relative path of another homepage. If the cookie is present, I want > to redirect the user to the cookie value: the relative path. There > are multiple cookies present in the request, such as session > cookies. > > I could of course, let the homepage be determined on the session on > the app server side, but since this is the landing page, I prefer to > support it with nginx reverse proxy caching. Hence the idea of using > cookies to redirect the user once he has logged in once. Use map: http://nginx.org/en/docs/http/ngx_http_map_module.html Here's a gist that implements a system where a user is redirected to a HTTPS host if there's a session cookie. This is "blind" in the sense that it doesn't capture any value of the cookie. But you can do that easily with named captures in map. https://gist.github.com/1695505 > Any ideas? This may be helpful to you. --- appa From nginxyz at mail.ru Sat Feb 4 01:15:02 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 04 Feb 2012 05:15:02 +0400 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: References: Message-ID: 03 ??????? 2012, 22:59 ?? Edho Arief : > On Sat, Feb 4, 2012 at 1:50 AM, Edho Arief wrote: > >> Why did I get warning for using uninitialized variables even though > >> it's not specified at all in the relevant location block? > >> > > > > After a bit more thinking, does it mean that the "alias $userfile" > > from parent location block was inherited but the $userfile was passed > > literally? > > > > More self-reply: it certainly looks like the $userfile is passed but > still in variable, not expanded with the contents as the following > test showed. > > Skipping "set $userfile ..." 
and put the capture in alias directly: > > location ~ ^/~([^/]+)(|/.*)$ { > alias /home/$1/public_html/$2; > location ~ \.php$ { > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_pass 127.0.0.1:9000; > } > } > > resulted in this: > > 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: "REDIRECT_STATUS: 200" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "/home/" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script capture: "" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "/public_html/" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script capture: "" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script copy: "SCRIPT_FILENAME" > 2012/02/03 13:51:28 [debug] 14951#0: *13 http script var: "/home//public_html/" > 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: > "SCRIPT_FILENAME: /home//public_html/" > 2012/02/03 13:51:28 [debug] 14951#0: *13 fastcgi param: "HTTP_HOST: > yutsuki.myconan.net" > > Which explains why I got "/opt/nginx/" for $request_filename on > previous config - the \.php$ location block was basically using `alias > $userfile` but as $userfile was empty, it became empty alias for that > block. Nested location blocks inherit variables only by name, which means the inherited variables are only created and marked as uninitialized. Note that this is not the same as creating an empty variable that contains "". The distinction should be similar to the one between defined vs. empty variables in Perl: if you set $variable to "", then 'if ($variable)' would evaluate to false, while 'if (defined($variable))' would evaluate to true. This distinction exists in nginx as well, where 'if ($variable)' would also evaluate to false, but there's no direct way to test whether a variable is defined. However nginx tests for this automatically - if you try to use an undefined variable, nginx will report an 'unknown "" variable' error and fail to start. That's why a variable that was set to a certain value inside the outermost (parent) location block will be inherited by nested location blocks only by name, which means that if you try to use such a variable inside a nested location block, nginx won't report any errors, but instead of the original contents of the variable, you will only get an empty string when you try to access the contents. This feature/bug is especially confusing when you use variables inside root and alias directives, because the nested location blocks will inherit the root and alias contents (unless specifically set), which will have any uninitialized inherited variable names replaced with "", so "/home/$variable/abc/$dir/" would become "/home//abc//". Another important thing to remember is that if you set root or alias to contain nothing but a variable (root $home;), then the root inside any nested location block where you do not set root or alias specifically, will be reset NOT to your default root value from the server configuration block, but to the value of the prefix configure argument that nginx was compiled with (/usr/local/etc/nginx), which you can see under --prefix= when you run nginx -V. Your nginx was probably compiled with --prefix=/opt/nginx/. Max From edho at myconan.net Sat Feb 4 02:53:22 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 4 Feb 2012 09:53:22 +0700 Subject: Is this how variable (set $var) inheritance works? 
In-Reply-To: References: Message-ID: On Sat, Feb 4, 2012 at 8:15 AM, Max wrote: > > This feature/bug is especially confusing when you use variables > inside root and alias directives, because the nested location blocks > will inherit the root and alias contents (unless specifically set), > which will have any uninitialized inherited variable names replaced > with "", so "/home/$variable/abc/$dir/" would become "/home//abc//". > At least this makes nested location useless for cases like this. Instead of one regex with captures (and then use nested location), I had to do this instead: location /~ { location ~ ^/~([^/]+)/(.+\.php)$ { alias /home/$1/public_html/$2; if (!-f $request_filename) { return 404; } include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass 127.0.0.1:9000; } location ~ ^/~([^/]+)(|/.*)$ { alias /home/$1/public_html/$2; index index.html; } } Or use map (and since it's currently impossible to do non-simple regex capture, I had to use two maps): map $uri $user { ~^/~(?P[^/]+)(|/.*)$ $user1; } map $uri $file { ~^/~[^/]+(?P|/.*)$ $file1; } server { ... location /~ { location ~ ^ { alias /home/$user/public_html/$file; location ~ \.php$ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass 127.0.0.1:9000; } } } } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From appa at perusio.net Sat Feb 4 03:22:32 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 04 Feb 2012 03:22:32 +0000 Subject: Is this how variable (set $var) inheritance works? In-Reply-To: References: Message-ID: <878vkjuvt3.wl%appa@perusio.net> On 4 Fev 2012 02h53 WET, edho at myconan.net wrote: > On Sat, Feb 4, 2012 at 8:15 AM, Max wrote: >> >> This feature/bug is especially confusing when you use variables >> inside root and alias directives, because the nested location >> blocks will inherit the root and alias contents (unless >> specifically set), which will have any uninitialized inherited >> variable names replaced with "", so "/home/$variable/abc/$dir/" >> would become "/home//abc//". >> > > At least this makes nested location useless for cases like this. > Instead of one regex with captures (and then use nested location), I > had to do this instead: > > location /~ { > location ~ ^/~([^/]+)/(.+\.php)$ { > alias /home/$1/public_html/$2; > if (!-f $request_filename) { return 404; } > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_pass 127.0.0.1:9000; > } > location ~ ^/~([^/]+)(|/.*)$ { > alias /home/$1/public_html/$2; > index index.html; > } > } The thing is that there is a server level rewrite phase and a location level rewrite phase. And since you can have only *one and only one* location everything at the location rewrite phase that happens before a redirect/rewrite gets lost. Why? Because you enter a configuration find phase again and after that a rewrite phase again on the location it was redirected to. The server level rewrite only happens once in each vhost hence the values of user variables set there are preserved. That's my understanding of it, at least. --- appa > Or use map (and since it's currently impossible to do non-simple > regex capture, I had to use two maps): > > map $uri $user { > ~^/~(?P[^/]+)(|/.*)$ $user1; > } > map $uri $file { > ~^/~[^/]+(?P|/.*)$ $file1; > } > server { > ... 
> location /~ { > location ~ ^ { > alias /home/$user/public_html/$file; > location ~ \.php$ { > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $request_filename; > fastcgi_pass 127.0.0.1:9000; > } > } > } > } From szun at informatik.hu Sat Feb 4 16:59:36 2012 From: szun at informatik.hu (vuki) Date: Sat, 04 Feb 2012 17:59:36 +0100 Subject: internal server error Message-ID: <4F2D63F8.9050602@informatik.hu> Hi all! I have got a Debian 6.0.3 server, with nginx 1.0.11 and php5-fpm 5.3.10. I have a virtualhost using PHPBB. I randomly get Internal Server Error 500 for this virtualhost. There is no trace of any error in nginx error log, nor php-fpm error log, nor php.log. How. where can i find what is causing my problem? Thx. From webmaster at bigdinosaur.org Sat Feb 4 17:01:31 2012 From: webmaster at bigdinosaur.org (BigdinoWebmaster) Date: Sat, 4 Feb 2012 11:01:31 -0600 Subject: internal server error In-Reply-To: <4F2D63F8.9050602@informatik.hu> References: <4F2D63F8.9050602@informatik.hu> Message-ID: Are you running nginx or php-fpm in a chroot jail? If so, not having all your dependencies inside the chroot can result in strange 500 errors without any log entries. -- Bigdinosaur.org Webmaster Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, February 4, 2012 at 10:59 AM, vuki wrote: > Hi all! > > I have got a Debian 6.0.3 server, with nginx 1.0.11 and php5-fpm 5.3.10. > I have a virtualhost using PHPBB. I randomly get Internal Server Error > 500 for this virtualhost. > There is no trace of any error in nginx error log, nor php-fpm error > log, nor php.log. > > How. where can i find what is causing my problem? > > Thx. > > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From szun at informatik.hu Sat Feb 4 17:05:33 2012 From: szun at informatik.hu (vuki) Date: Sat, 04 Feb 2012 18:05:33 +0100 Subject: internal server error In-Reply-To: References: <4F2D63F8.9050602@informatik.hu> Message-ID: <4F2D655D.30909@informatik.hu> I am not using chroot jail. On 2012.02.04. 18:01, BigdinoWebmaster wrote: > Are you running nginx or php-fpm in a chroot jail? If so, not having > all your dependencies inside the chroot can result in strange 500 > errors without any log entries. > > -- > Bigdinosaur.org Webmaster > Sent with Sparrow > > On Saturday, February 4, 2012 at 10:59 AM, vuki wrote: > >> Hi all! >> >> I have got a Debian 6.0.3 server, with nginx 1.0.11 and php5-fpm 5.3.10. >> I have a virtualhost using PHPBB. I randomly get Internal Server Error >> 500 for this virtualhost. >> There is no trace of any error in nginx error log, nor php-fpm error >> log, nor php.log. >> >> How. where can i find what is causing my problem? >> >> Thx. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From szun at informatik.hu Sat Feb 4 18:33:31 2012 From: szun at informatik.hu (vuki) Date: Sat, 04 Feb 2012 19:33:31 +0100 Subject: php-fpm response codes Message-ID: <4F2D79FB.6060806@informatik.hu> Hi! Does anyone know where to find information about php-fpm response codes? 
I am interested in 302, looks like this response linked with my internal server error. thx! From quan.nexthop at gmail.com Sun Feb 5 05:02:12 2012 From: quan.nexthop at gmail.com (Geoge.Q) Date: Sun, 5 Feb 2012 13:02:12 +0800 Subject: A topology when nginx is in reverse-proxy mode? support ? Message-ID: hi all: please see the following topology in my test-bed, it always accesses the first website in reverse-proxy. 1. Topology Outside -------------------[ NAT Device] -----------------[nginx with reverse-proxy]----------------Web1 (1.1.1.1:80) [http://2.2.2.2:8000 | | |______Web2 (1.1.1.2:80) [http://2.2.2.2:8001 2.2.2.2 1.1.1.255 2. How to access (1) Access http://2.2.2.2:8000 from outside to access web1; Access http://2.2.2.2:8001 from outside to access web2; (2) NAT device translated 2.2.2.2 to different internal IP address according to port; http://2.2.2.2:8000 =====NAT===> http://1.1.1.1(web1); http://2.2.2.2:8001 =====NAT===> http://1.1.1.2(web2); (3) NGINX act as reverse proxy; 3. issue We configure nginx as reverse proxy, but it always proxy (http://1.1.1.1and http:/ 1.1.1.2) to http://1.1.1.1; nginx configure is as following server { listen 80; server_name 2.2.2.2; // (try 2.2.2.2:8000, it failed) location / { proxy_pass http://1.1.1.1; # <==========Web1 .... } } server { listen 80; server_name 2.2.2.2; # (try 2.2.2.2:8000, it failed) location / { proxy_pass http: #1.1.1.2; # <==========Web2 .... } } 4. I try to change the configuration, it is failed. My configuration is good ? Is the topology supported? thanks George. Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Feb 5 11:07:05 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 5 Feb 2012 15:07:05 +0400 Subject: A topology when nginx is in reverse-proxy mode? support ? In-Reply-To: References: Message-ID: <20120205110705.GT67687@mdounin.ru> Hello! On Sun, Feb 05, 2012 at 01:02:12PM +0800, Geoge.Q wrote: > hi all: > > please see the following topology in my test-bed, it always accesses the > first website in reverse-proxy. > > 1. Topology > > Outside -------------------[ NAT Device] -----------------[nginx with > reverse-proxy]----------------Web1 (1.1.1.1:80) > [http://2.2.2.2:8000 | > | |______Web2 (1.1.1.2:80) > [http://2.2.2.2:8001 2.2.2.2 > 1.1.1.255 > > 2. How to access > > (1) Access http://2.2.2.2:8000 from outside to access web1; > Access http://2.2.2.2:8001 from outside to access web2; > (2) NAT device translated 2.2.2.2 to different internal IP address > according to port; > http://2.2.2.2:8000 =====NAT===> http://1.1.1.1(web1); > http://2.2.2.2:8001 =====NAT===> http://1.1.1.2(web2); > (3) NGINX act as reverse proxy; > > 3. issue > We configure nginx as reverse proxy, but it always proxy > (http://1.1.1.1and http:/ > 1.1.1.2) to http://1.1.1.1; > > nginx configure is as following > > server { > listen 80; > server_name 2.2.2.2; // (try 2.2.2.2:8000, it failed) > > location / { > proxy_pass http://1.1.1.1; # <==========Web1 > .... > } > } > > server { > listen 80; > server_name 2.2.2.2; # (try 2.2.2.2:8000, it failed) As long as hostnames in requests to different sites match exactly (nginx doesn't look at ports in Host headers, only at hostnames) - you have to use distinct listen sockets on nginx (i.e. distinct ports and/or ips). That is, use something like this: server { listen 1.1.1.1:80; server_name 2.2.2.2; ... } server { listen 1.1.1.2:80; server_name 2.2.2.2; ... 
} Maxim Dounin From francis at daoine.org Sun Feb 5 11:17:13 2012 From: francis at daoine.org (Francis Daly) Date: Sun, 5 Feb 2012 11:17:13 +0000 Subject: A topology when nginx is in reverse-proxy mode? support ? In-Reply-To: References: Message-ID: <20120205111713.GB22076@craic.sysops.org> On Sun, Feb 05, 2012 at 01:02:12PM +0800, Geoge.Q wrote: Hi there, I'm afraid I'm not able to understand the topology. So I'll make some guesses, and perhaps you can say where I have gone wrong. > 2. How to access > > (1) Access http://2.2.2.2:8000 from outside to access web1; > Access http://2.2.2.2:8001 from outside to access web2; > (2) NAT device translated 2.2.2.2 to different internal IP address > according to port; > http://2.2.2.2:8000 =====NAT===> http://1.1.1.1(web1); > http://2.2.2.2:8001 =====NAT===> http://1.1.1.2(web2); > (3) NGINX act as reverse proxy; So 2.2.2.2 is the address of the NAT device, and it sends any inbound traffic to port 8000, to internal web1:80; and it sends any inbound traffic to port 8001, to internal web2:80? That should just work, with no need for nginx anywhere. So that's presumably not what you want. Perhaps you have nginx on some other internal server, that the NAT device sends the traffic to? Or perhaps nginx is running on the NAT device, so NAT isn't needed at all? > 3. issue > We configure nginx as reverse proxy, but it always proxy > (http://1.1.1.1and http:/ > 1.1.1.2) to http://1.1.1.1; > > nginx configure is as following > > server { > listen 80; That is the port on the nginx server that nginx listens to, and is the port that the traffic to nginx must be sent to. I suspect that it should be 8000 or 8001; but when the network topology is clear, it will be clear what that will be. > server_name 2.2.2.2; // (try 2.2.2.2:8000, it failed) That is the name in the Host: header that the client sends. If more than one nginx server{} listens on the same ip:port, it is used to choose which server{} is used. > server { > listen 80; > server_name 2.2.2.2; # (try 2.2.2.2:8000, it failed) This is the same listen/server_name as the first one, so will never match. > 4. I try to change the configuration, it is failed. > > My configuration is good ? Is the topology supported? Because of listen/server_name, your second server{} block will never be used, so no traffic will go to web2. So: have nginx listening on two different ports, or on two different addresses, or use different Host: names in the requests. (But since I don't see where nginx fits in to the topology in the first place, I guess I must have missed something.) f -- Francis Daly francis at daoine.org From quan.nexthop at gmail.com Sun Feb 5 16:52:12 2012 From: quan.nexthop at gmail.com (Geoge.Q) Date: Mon, 6 Feb 2012 00:52:12 +0800 Subject: A topology when nginx is in reverse-proxy mode? support ? In-Reply-To: <20120205111713.GB22076@craic.sysops.org> References: <20120205111713.GB22076@craic.sysops.org> Message-ID: Thanks Max and Francis. I will try. George.Alex. On Sun, Feb 5, 2012 at 7:17 PM, Francis Daly wrote: > On Sun, Feb 05, 2012 at 01:02:12PM +0800, Geoge.Q wrote: > > Hi there, > > I'm afraid I'm not able to understand the topology. So I'll make some > guesses, and perhaps you can say where I have gone wrong. > > > 2. 
How to access > > > > (1) Access http://2.2.2.2:8000 from outside to access web1; > > Access http://2.2.2.2:8001 from outside to access web2; > > (2) NAT device translated 2.2.2.2 to different internal IP address > > according to port; > > http://2.2.2.2:8000 =====NAT===> http://1.1.1.1(web1); > > http://2.2.2.2:8001 =====NAT===> http://1.1.1.2(web2); > > (3) NGINX act as reverse proxy; > > So 2.2.2.2 is the address of the NAT device, and it sends any inbound > traffic to port 8000, to internal web1:80; and it sends any inbound > traffic to port 8001, to internal web2:80? > > That should just work, with no need for nginx anywhere. > > So that's presumably not what you want. > > Perhaps you have nginx on some other internal server, that the NAT device > sends the traffic to? Or perhaps nginx is running on the NAT device, > so NAT isn't needed at all? > > > 3. issue > > We configure nginx as reverse proxy, but it always proxy > > (http://1.1.1.1and http:/ > > 1.1.1.2) to http://1.1.1.1; > > > > nginx configure is as following > > > > server { > > listen 80; > > That is the port on the nginx server that nginx listens to, and is > the port that the traffic to nginx must be sent to. I suspect that it > should be 8000 or 8001; but when the network topology is clear, it will > be clear what that will be. > > > server_name 2.2.2.2; // (try 2.2.2.2:8000, it failed) > > That is the name in the Host: header that the client sends. If more than > one nginx server{} listens on the same ip:port, it is used to choose > which server{} is used. > > > server { > > listen 80; > > server_name 2.2.2.2; # (try 2.2.2.2:8000, it failed) > > This is the same listen/server_name as the first one, so will never match. > > > 4. I try to change the configuration, it is failed. > > > > My configuration is good ? Is the topology supported? > > Because of listen/server_name, your second server{} block will never be > used, so no traffic will go to web2. > > So: have nginx listening on two different ports, or on two different > addresses, or use different Host: names in the requests. > > (But since I don't see where nginx fits in to the topology in the first > place, I guess I must have missed something.) > > f > -- > Francis Daly francis at daoine.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.vanarragon at lukkien.com Mon Feb 6 13:56:56 2012 From: j.vanarragon at lukkien.com (Jaap van Arragon) Date: Mon, 06 Feb 2012 14:56:56 +0100 Subject: Nginx as LB and satisfy Any In-Reply-To: <20120205110705.GT67687@mdounin.ru> Message-ID: Hello, Is there a possibility to use the satisfy any option with NGINX as a loadbalancer? We normally use "satisfy any" on our apache webservers but for authentication/caching reasons we want to put it in the NGINX LB config. Is there a way to make the location directive accept and pass on the "satisfy any" option to the webservers? Just for your information, we use NGINX purely as a loadbalancer. Thank you. Regards, Jaap van Arragon From mdounin at mdounin.ru Mon Feb 6 14:47:56 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Feb 2012 18:47:56 +0400 Subject: nginx-1.0.12 Message-ID: <20120206144756.GF67687@mdounin.ru> Changes with nginx 1.0.12 06 Feb 2012 *) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the "ssl_protocols" directive. *) Feature: the "if" SSI command supports captures in regular expressions. *) Bugfix: the "if" SSI command did not work inside the "block" command. *) Bugfix: in AIO error handling on FreeBSD. 
*) Bugfix: in the OpenSSL library initialization. *) Bugfix: the "worker_cpu_affinity" directive might not work. *) Bugfix: the "limit_conn_log_level" and "limit_req_log_level" directives might not work. *) Bugfix: the "read_ahead" directive might not work combined with "try_files" and "open_file_cache". *) Bugfix: the "proxy_cache_use_stale" directive with "error" parameter did not return answer from cache if there were no live upstreams. *) Bugfix: a segmentation fault might occur in a worker process if small time was used in the "inactive" parameter of the "proxy_cache_path" directive. *) Bugfix: responses from cache might hang. *) Bugfix: in error handling while connecting to a backend. Thanks to Piotr Sikora. *) Bugfix: in the "epoll" event method. Thanks to Yichun Zhang. *) Bugfix: the $sent_http_cache_control variable might contain a wrong value if the "expires" directive was used. Thanks to Yichun Zhang. *) Bugfix: the "limit_rate" directive did not allow to use full throughput, even if limit value was very high. *) Bugfix: the "sendfile_max_chunk" directive did not work, if the "limit_rate" directive was used. *) Bugfix: nginx could not be built on Solaris; the bug had appeared in 1.0.11. *) Bugfix: in the ngx_http_scgi_module. *) Bugfix: in the ngx_http_mp4_module. Maxim Dounin From contact at jpluscplusm.com Mon Feb 6 15:03:14 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 6 Feb 2012 15:03:14 +0000 Subject: DNS TTLs being ignored In-Reply-To: References: <72FF6524-75CF-4123-8F83-50363C25AE21@nginx.com> Message-ID: On 16 November 2011 14:00, Andrew Alexeev wrote: > On Nov 15, 2011, at 1:50 PM, Andrew Alexeev wrote: > >> On Nov 3, 2011, at 1:50 PM, Andrew Alexeev wrote: >> >>> Noah, >>> >>> This fix/improvement be introduced in 1.1.8 which will come out around Nov 14. >> >> Apologies, it didn't get in either 1.1.8 (yesterday) or 1.1.10 (today). It's almost ready and would hopefully get into the next dev and stable releases in a couple of weeks. > > Jfyi, it went committed today > > http://mailman.nginx.org/pipermail/nginx-devel/2011-November/001466.html > http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver > > and will be included in 1.1.9. You mentioned it'd be in stable at some point. I can't find it in any subsequent 1.0.x announcement - could you clarify the status of this feature in stable please? Many thanks, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Mon Feb 6 15:12:30 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Feb 2012 19:12:30 +0400 Subject: DNS TTLs being ignored In-Reply-To: References: <72FF6524-75CF-4123-8F83-50363C25AE21@nginx.com> Message-ID: <20120206151230.GJ67687@mdounin.ru> Hello! On Mon, Feb 06, 2012 at 03:03:14PM +0000, Jonathan Matthews wrote: > On 16 November 2011 14:00, Andrew Alexeev wrote: > > On Nov 15, 2011, at 1:50 PM, Andrew Alexeev wrote: > > > >> On Nov 3, 2011, at 1:50 PM, Andrew Alexeev wrote: > >> > >>> Noah, > >>> > >>> This fix/improvement be introduced in 1.1.8 which will come out around Nov 14. > >> > >> Apologies, it didn't get in either 1.1.8 (yesterday) or 1.1.10 (today). It's almost ready and would hopefully get into the next dev and stable releases in a couple of weeks. > > > > Jfyi, it went committed today > > > > http://mailman.nginx.org/pipermail/nginx-devel/2011-November/001466.html > > http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver > > > > and will be included in 1.1.9. 
> > You mentioned it'd be in stable at some point. > I can't find it in any subsequent 1.0.x announcement - could you > clarify the status of this feature in stable please? It's not in 1.0.x, and probably won't be. The 1.1.x branch is expected to become stable in near future. Maxim Dounin From nginxyz at mail.ru Mon Feb 6 16:15:59 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 06 Feb 2012 20:15:59 +0400 Subject: Autoindex module extension [patch] Message-ID: Hello, I have written an extension for the Autoindex module that adds two new directives. I've included the patch against the latest 1.1.14 version of nginx, as well as the documentation for the Wiki. Max autoindex_omit_index_of syntax: autoindex_omit_index_of [on|off] default: off context: http, server, location variables: no version: >= 1.1.14 This directive determines whether the "Index of " string is displayed in the title and in the header of a directory listing. It takes one argument (a boolean flag), and is set to off by default, so "Index of " is displayed as usual. autoindex_dirname_to_display syntax: autoindex_dirname_to_display ; default: none context: http, server, location This directive determines the directory name to display next to the "Index of " string in the title and in the header of a directory listing. It takes one argument (a string), which can contain variables. This directive can also be used to remove trailing slashes from a directory pathname. location ~ ^/internal/([^/]+)(|/.*)$ { internal; set $dir "$1 :: $2"; autoindex on; autoindex_omit_index_of on; autoindex_dirname_to_display "-==[ NGiNX & Co :: $dir ]==-"; alias /mnt/nfs/server123/$1/$2; } The example above demonstrates how to display the directory contents of /mnt/nfs/server123/$1/$2 while omitting the "Index of " string, and displaying the following customized string instead of the "Index of /internal/$1/$2/" string in the title and in the header of the directory listing: "-==[ NGiNX & Co :: $1 :: $2 ]==-" location ~ ^/internal/~([^/]+)/(|.*)$ { internal; autoindex on; autoindex_dirname_to_display "nfs:/home/$1/public_html/$2"; alias /mnt/nfs/server123/$1/$2; } The example above demonstrates how to display the directory contents of /mnt/nfs/server123/$1/$2 using the following customized string instead of the "Index of /internal/~$1/$2" string in the title and in the header of the directory listing: "Index of nfs:/home/directory/$1/public_html/$2" -------------- next part -------------- A non-text attachment was scrubbed... Name: patch-dirname_to_display-omit_index_of-ngx_http_autoindex_module.c.20120206.diff Type: application/octet-stream Size: 7349 bytes Desc: not available URL: From nginx-forum at nginx.us Mon Feb 6 16:58:49 2012 From: nginx-forum at nginx.us (doktoreas) Date: Mon, 06 Feb 2012 11:58:49 -0500 Subject: Using alias with variables Message-ID: <4b198e231565f47c47c2c6dfc88ecd7a.NginxMailingListEnglish@forum.nginx.org> Hello everybody, I need to use the alias setting from a request made to /test/str1_str2/, so path become /var/www/data/str1_cache_str2/. I don't understand how to extract str1, str2 and convert to str1_cache_str2. Thank you very much L. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222048,222048#msg-222048 From mdounin at mdounin.ru Mon Feb 6 17:00:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Feb 2012 21:00:24 +0400 Subject: Autoindex module extension [patch] In-Reply-To: References: Message-ID: <20120206170024.GM67687@mdounin.ru> Hello! 
On Mon, Feb 06, 2012 at 08:15:59PM +0400, Max wrote: > > Hello, > > I have written an extension for the Autoindex module that adds > two new directives. I've included the patch against the latest > 1.1.14 version of nginx, as well as the documentation for the > Wiki. There are no plans to extend autoindex with various customizations. Instead, we'll likely provide xml index to make arbitrary customization via xslt possible. Maxim Dounin From kworthington at gmail.com Mon Feb 6 17:32:58 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 6 Feb 2012 12:32:58 -0500 Subject: nginx-1.0.12 In-Reply-To: <20120206144756.GF67687@mdounin.ru> References: <20120206144756.GF67687@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.0.12 For Windowshttp://goo.gl/vcXmb (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Thank you, Kevin -- Kevin Worthington kworthington *@~ #gmail} [dot) {com] http://www.kevinworthington.com/ On Mon, Feb 6, 2012 at 9:47 AM, Maxim Dounin wrote: > Changes with nginx 1.0.12 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?06 Feb 2012 > > ? ?*) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the > ? ? ? "ssl_protocols" directive. > > ? ?*) Feature: the "if" SSI command supports captures in regular > ? ? ? expressions. > > ? ?*) Bugfix: the "if" SSI command did not work inside the "block" command. > > ? ?*) Bugfix: in AIO error handling on FreeBSD. > > ? ?*) Bugfix: in the OpenSSL library initialization. > > ? ?*) Bugfix: the "worker_cpu_affinity" directive might not work. > > ? ?*) Bugfix: the "limit_conn_log_level" and "limit_req_log_level" > ? ? ? directives might not work. > > ? ?*) Bugfix: the "read_ahead" directive might not work combined with > ? ? ? "try_files" and "open_file_cache". > > ? ?*) Bugfix: the "proxy_cache_use_stale" directive with "error" parameter > ? ? ? did not return answer from cache if there were no live upstreams. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if small > ? ? ? time was used in the "inactive" parameter of the "proxy_cache_path" > ? ? ? directive. > > ? ?*) Bugfix: responses from cache might hang. > > ? ?*) Bugfix: in error handling while connecting to a backend. > ? ? ? Thanks to Piotr Sikora. > > ? ?*) Bugfix: in the "epoll" event method. > ? ? ? Thanks to Yichun Zhang. > > ? ?*) Bugfix: the $sent_http_cache_control variable might contain a wrong > ? ? ? value if the "expires" directive was used. > ? ? ? Thanks to Yichun Zhang. > > ? ?*) Bugfix: the "limit_rate" directive did not allow to use full > ? ? ? throughput, even if limit value was very high. > > ? ?*) Bugfix: the "sendfile_max_chunk" directive did not work, if the > ? ? ? "limit_rate" directive was used. > > ? ?*) Bugfix: nginx could not be built on Solaris; the bug had appeared in > ? ? ? 1.0.11. > > ? ?*) Bugfix: in the ngx_http_scgi_module. > > ? ?*) Bugfix: in the ngx_http_mp4_module. 
> > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx at nginxuser.net Mon Feb 6 18:11:46 2012 From: nginx at nginxuser.net (Nginx User) Date: Mon, 6 Feb 2012 21:11:46 +0300 Subject: Nginx 1.0.12 proxy_redirect Message-ID: I was really hoping this feature from the 1.1.11 realse would make it into the next stable release (1.0.12) *) Feature: the "proxy_redirect" directive supports variables in the first parameter. Looks like it didn't make the cut. Any chance of making the next release? The current limitation of variables to only the second parameter makes this feature a bit cumbersome. Because of a number of backend apps that try to redirect, I have to do: # proxy_redirect ? http://site_1:8080/ ? ?http://site_1/; # proxy_redirect ? http://site_2:8080/ ? ?http://site_2/; # proxy_redirect ? http://site_3:8080/ ? ?http://site_3/; ... # proxy_redirect ? http://site_n:8080/ ? ?http://site_n/; in my proxy params file instead of a simple: # proxy_redirect ? http://$host:8080/ ? ?http://$host/; Please consider adding it. Thanks From nginx at nginxuser.net Mon Feb 6 18:15:10 2012 From: nginx at nginxuser.net (Nginx User) Date: Mon, 6 Feb 2012 21:15:10 +0300 Subject: nginx-1.0.12 In-Reply-To: <20120206144756.GF67687@mdounin.ru> References: <20120206144756.GF67687@mdounin.ru> Message-ID: On 6 February 2012 17:47, Maxim Dounin wrote: > Changes with nginx 1.0.12 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?06 Feb 2012 > > ? ?*) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the > ? ? ? "ssl_protocols" directive. > > ? ?*) Feature: the "if" SSI command supports captures in regular > ? ? ? expressions. I was really hoping this feature from the 1.1.11 release would make it into the next stable release (1.0.12) *) Feature: the "proxy_redirect" directive supports variables in the first parameter. Looks like it didn't make the cut. Any chance of making the next release? The current limitation of variables to only the second parameter makes this feature a bit cumbersome. Because of a number of backend apps that try to redirect, I have to do: # proxy_redirect http://site_1:8080/ http://site_1/; # proxy_redirect http://site_2:8080/ http://site_2/; # proxy_redirect http://site_3:8080/ http://site_3/; ... # proxy_redirect http://site_n:8080/ http://site_n/; in my proxy params file instead of a simple: # proxy_redirect http://$host:8080/ http://$host/; Please consider adding it. Thanks From ne at vbart.ru Mon Feb 6 19:37:22 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 6 Feb 2012 23:37:22 +0400 Subject: nginx-1.0.12 In-Reply-To: References: <20120206144756.GF67687@mdounin.ru> Message-ID: <201202062337.23021.ne@vbart.ru> On Monday 06 February 2012 22:15:10 Nginx User wrote: > On 6 February 2012 17:47, Maxim Dounin wrote: > > Changes with nginx 1.0.12 06 Feb > > 2012 > > > > *) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the > > "ssl_protocols" directive. > > > > *) Feature: the "if" SSI command supports captures in regular > > expressions. > > I was really hoping this feature from the 1.1.11 release would make it > into the next stable release (1.0.12) > > *) Feature: the "proxy_redirect" directive supports variables in the > first parameter. > > Looks like it didn't make the cut. Any chance of making the next > release? The current limitation of variables to only the second > parameter makes this feature a bit cumbersome. > [...] 
If you need some feature from the development branch, then why not just use it? wbr, Valentin V. Bartenev From jim at ohlste.in Mon Feb 6 19:55:54 2012 From: jim at ohlste.in (Jim Ohlstein) Date: Mon, 06 Feb 2012 14:55:54 -0500 Subject: nginx-1.0.12 In-Reply-To: <201202062337.23021.ne@vbart.ru> References: <20120206144756.GF67687@mdounin.ru> <201202062337.23021.ne@vbart.ru> Message-ID: <4F30304A.2030909@ohlste.in> On 2/6/12 2:37 PM, Valentin V. Bartenev wrote: > On Monday 06 February 2012 22:15:10 Nginx User wrote: >> On 6 February 2012 17:47, Maxim Dounin wrote: >>> Changes with nginx 1.0.12 06 Feb >>> 2012 >>> >>> *) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the >>> "ssl_protocols" directive. >>> >>> *) Feature: the "if" SSI command supports captures in regular >>> expressions. >> >> I was really hoping this feature from the 1.1.11 release would make it >> into the next stable release (1.0.12) >> >> *) Feature: the "proxy_redirect" directive supports variables in the >> first parameter. >> >> Looks like it didn't make the cut. Any chance of making the next >> release? The current limitation of variables to only the second >> parameter makes this feature a bit cumbersome. >> > [...] > > If you need some feature from the development branch, then why not just use it? > Or wait about 5 weeks until 1.1 is "stable" (not that it isn't stable now). See http://trac.nginx.org/nginx/roadmap. -- Jim Ohlstein From nginx at nginxuser.net Tue Feb 7 04:14:45 2012 From: nginx at nginxuser.net (Nginx User) Date: Tue, 7 Feb 2012 07:14:45 +0300 Subject: nginx-1.0.12 In-Reply-To: <201202062337.23021.ne@vbart.ru> References: <20120206144756.GF67687@mdounin.ru> <201202062337.23021.ne@vbart.ru> Message-ID: On 6 February 2012 22:37, Valentin V. Bartenev wrote: > If you need some feature from the development branch, then why not just use it? I realise that the dev branch is rated OK for production but prefer to stay with the other because I don't want to break things. I hope this feature makes it into 1.0.13 due in four weeks as I'll need some time to evaluate the 1.1.x branch before deciding whether to move to it even if it does become stable. Cheers From suxsonic at gmail.com Mon Feb 6 13:28:47 2012 From: suxsonic at gmail.com (suxsonic suxsonic) Date: Mon, 6 Feb 2012 18:28:47 +0500 Subject: Undefined symbols for architecture i386 Message-ID: Hello, problems installing Mac OS X 10.6.8 arch i386 pcre 8.21 nginx 1.1.14 make - ok make install: objs/src/http/modules/ngx_http_browser_module.o \ objs/src/http/modules/ngx_http_upstream_ip_hash_module.o \ objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ objs/ngx_modules.o \ -lpcre -lssl -lcrypto -lz Undefined symbols for architecture i386: "_pcre_free_study", referenced from: _ngx_pcre_free_studies in ngx_regex.o ld: symbol(s) not found for architecture i386 collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make: *** [install] Error 2 nginx 1.0.11 - ok Thanx, Kakawki Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Tue Feb 7 11:18:40 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Tue, 7 Feb 2012 15:18:40 +0400 Subject: Undefined symbols for architecture i386 In-Reply-To: References: Message-ID: <201202071518.40633.ne@vbart.ru> On Monday 06 February 2012 17:28:47 suxsonic suxsonic wrote: > Hello, > problems installing > Mac OS X 10.6.8 arch i386 > pcre 8.21 > nginx 1.1.14 > make - ok > make install: > > > objs/src/http/modules/ngx_http_browser_module.o \ > objs/src/http/modules/ngx_http_upstream_ip_hash_module.o \ > objs/src/http/modules/ngx_http_upstream_keepalive_module.o \ > objs/ngx_modules.o \ > -lpcre -lssl -lcrypto -lz > Undefined symbols for architecture i386: > "_pcre_free_study", referenced from: > _ngx_pcre_free_studies in ngx_regex.o > ld: symbol(s) not found for architecture i386 > collect2: ld returned 1 exit status > make[1]: *** [objs/nginx] Error 1 > make: *** [install] Error 2 > > > nginx 1.0.11 - ok > I assume this is related to your problem: http://trac.nginx.org/nginx/ticket/94#comment:9 wbr, Valentin V. Bartenev From anmartin at admin-auf-zeit.de Tue Feb 7 14:34:19 2012 From: anmartin at admin-auf-zeit.de (Andreas Martin) Date: Tue, 07 Feb 2012 15:34:19 +0100 Subject: [SOLVED] status 0 on proxy MISS In-Reply-To: <4F2A4759.5080701@admin-auf-zeit.de> References: <4F28FA51.4090308@admin-auf-zeit.de> <4F2A4759.5080701@admin-auf-zeit.de> Message-ID: <4F31366B.60003@admin-auf-zeit.de> Hi. After increasing the open files limits on OS level and increasing the values for worker_rlimit_nofile and worker_connections in nginx.conf according to our needs, the error does not longer occur. Regards Andreas Am 02.02.2012 09:20, schrieb Andreas Martin: > Hello. > > I think I found the cause of this error. > In the error log, there are corresponding entries with "Too many open > files". I'll increase this limit and check if the status 0 occurs again > or (hopefully) not. > > > Andreas > > Am 01.02.2012 09:39, schrieb Andreas Martin: >> Hello. >> >> I'm using nginx as proxy for an apache. >> Somtimes, on a MISS, the logged status is 0, instead of 200 (or anything >> else). The log format is configured like this: >> >> log_format cache '[$time_local] $remote_addr - $request_time - ' >> '$upstream_cache_status ' >> 'Cache-Control: $upstream_http_cache_control ' >> 'Expires: $upstream_http_expires ' >> '"$request" ($status) ' >> '"$http_user_agent" '; >> >> The value 0 occurs for the variable $status >> >> Has anyone any idea, what the status 0 means or why it occurs? >> >> Kind regards >> >> >> Andreas >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From savannah_beckett30 at yahoo.com Tue Feb 7 15:57:57 2012 From: savannah_beckett30 at yahoo.com (Savannah Beckett) Date: Tue, 7 Feb 2012 07:57:57 -0800 (PST) Subject: Re2: Message-ID: <1328630277.7727.yint-ygo-j2me@web121505.mail.ne1.yahoo.com> Hi, ok, no problem! http://www.dekalb911.org/httphome76solutions37752.php?gysCID=14 Tue, 7 Feb 2012 16:57:56 _________________________________ "The girl sat down on the grassy bank, pulled Cora down beside her and in her gentle, kindly way, continued to draw Bill out." 
(c) Savhonna vpuivm From nginxyz at mail.ru Tue Feb 7 21:49:39 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 08 Feb 2012 01:49:39 +0400 Subject: Autoindex module extension [patch] In-Reply-To: <20120206170024.GM67687@mdounin.ru> References: <20120206170024.GM67687@mdounin.ru> Message-ID: 06 ??????? 2012, 21:00 ?? Maxim Dounin : > On Mon, Feb 06, 2012 at 08:15:59PM +0400, Max wrote: > > I have written an extension for the Autoindex module that adds > > two new directives. I've included the patch against the latest > > 1.1.14 version of nginx, as well as the documentation for the > > Wiki. > > There are no plans to extend autoindex with various > customizations. Instead, we'll likely provide xml index to make > arbitrary customization via xslt possible. Then could you maybe add my extension to the list of 3rd party modules and patches? You can find the patch and the documentation here: NGiNXYZ NGiNX Autoindex Module Extension http://nginxyzpro.berlios.de/nginxyz_nginx_autoindex_module_extension/ Max From nginx-forum at nginx.us Tue Feb 7 22:26:20 2012 From: nginx-forum at nginx.us (Rembo) Date: Tue, 07 Feb 2012 17:26:20 -0500 Subject: How to Autstart Nginx - php-cgi.exe Message-ID: Hey, I'm fairly new to this' and I really need help! My php-cgi.exe off automatically after 10-20 min. Does anyone have the energy and time to look at it? I will pay for it. I really hope someone will help me with this problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222084,222084#msg-222084 From andrew at andrewloe.com Tue Feb 7 23:54:08 2012 From: andrew at andrewloe.com (W. Andrew Loe III) Date: Tue, 7 Feb 2012 15:54:08 -0800 Subject: proxy_buffering off causes truncated responses when backend emits response in small chunks Message-ID: We use nginx as both a load-balancer and webserver. This issue is with the nginx functioning as a load-balancer. We reverse proxy to 6 nginx webservers running a number of Unicorn (Rails) application servers, these webserver nginx instances also run Evan Miller's mod_zip to assemble archives on the fly. We have discovered under certain circumstances the load-balancing nginx will "hang-up" on the webserver if the load-balancer is configured with proxy_buffering off, however proxy_buffering on seems to succeed. We would prefer to run without proxy_buffering to prevent the load-balancer's local storage from being overrun. Our default setup uses nginx 0.7.65 for both the load-balancer and the webserver, however switching to using 1.0.12 as the load-balancer has the same problem. We have experimented with different software doing the load-balancing and it does not exhibit this issue. I've have linked the nginx configuration file we're using on the load balancer, and debug logs for both 0.7.65 and 1.0.12. https://x.onehub.com/transfers/sg32zsar The buffering on log is very long, but it does show success of a 4.8GB response, the other responses always fail at the same point (826 MB). 
The client sees the following (in access.log): $ curl -b cookie.txt -o US.zip https://mydomain.com/folders/7816672/archive % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 16 4922M 16 826M 0 0 1906k 0 0:44:03 0:07:23 0:36:40 1075k curl: (18) transfer closed with 4294967296 bytes remaining to read The nginx instance serving as the webserver logs: Feb 07 15:03:39 ip-10-2-185-35 error.log: 2012/02/07 23:03:39 [info] 21425#0: *16656299 client closed prematurely connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream, client: 10.254.174.80, server: mydomain.com, request: "GET /folders/7816672/archive HTTP/1.0", subrequest: "/s3/asset-27235062", upstream: " http://72.21.215.100:80/bucket/asset-27235062?AWSAccessKeyId=key&Expires=1328741778&Signature=signature", host: "mydomain.com" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 8 00:23:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Feb 2012 04:23:58 +0400 Subject: proxy_buffering off causes truncated responses when backend emits response in small chunks In-Reply-To: References: Message-ID: <20120208002358.GV67687@mdounin.ru> Hello! On Tue, Feb 07, 2012 at 03:54:08PM -0800, W. Andrew Loe III wrote: > We use nginx as both a load-balancer and webserver. This issue is with the > nginx functioning as a load-balancer. > > We reverse proxy to 6 nginx webservers running a number of Unicorn (Rails) > application servers, these webserver nginx instances also run Evan Miller's > mod_zip to assemble archives on the fly. We have discovered under certain > circumstances the load-balancing nginx will "hang-up" on the webserver if > the load-balancer is configured with proxy_buffering off, however > proxy_buffering on seems to succeed. We would prefer to run without > proxy_buffering to prevent the load-balancer's local storage from being > overrun. If you want to disable disk buffering you don't need to disable buffering at all. Use proxy_max_temp_file_size 0; instead. The only valid reason to disable buffering completely with "proxy_buffering off" is when you need even single byte from backend to be immediately passed to client, i.e. as in some streaming / long polling cases. > Our default setup uses nginx 0.7.65 for both the load-balancer and the > webserver, however switching to using 1.0.12 as the load-balancer has the > same problem. We have experimented with different software doing the > load-balancing and it does not exhibit this issue. > > I've have linked the nginx configuration file we're using on the load > balancer, and debug logs for both 0.7.65 and 1.0.12. > > https://x.onehub.com/transfers/sg32zsar > > The buffering on log is very long, but it does show success of a 4.8GB > response, the other responses always fail at the same point (826 MB). > > The client sees the following (in access.log): > > $ curl -b cookie.txt -o US.zip https://mydomain.com/folders/7816672/archive > % Total % Received % Xferd Average Speed Time Time Time > Current > Dload Upload Total Spent Left > Speed > 16 4922M 16 826M 0 0 1906k 0 0:44:03 0:07:23 0:36:40 > 1075k > curl: (18) transfer closed with 4294967296 bytes remaining to read The response is over 4G, and this won't work with "proxy_buffering off" in 1.0.x on 32bit systems. The non-buffered mode was originally designed for small memcached responses and used to use size_t for length storage. 
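To illustrate the proxy_max_temp_file_size suggestion above, a minimal load-balancer location along those lines might look like the sketch below. This is only a sketch: the upstream name, backend addresses and buffer sizes are placeholders, not values taken from this thread.

    upstream backend {
        # Placeholder backends, not the real webservers from this setup.
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;

            # Buffering stays on (the default), but responses are never
            # spooled to temp files on the load balancer's disk:
            proxy_max_temp_file_size 0;

            # Memory use per request is still bounded by the ordinary
            # proxy buffer settings:
            proxy_buffer_size 16k;
            proxy_buffers 8 16k;
        }
    }
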
You have to upgrade to 1.1.x where it now uses off_t and will be able to handle large responses even on 32bit platforms. Alternatively, just forget about "proxy_buffering off" as you don't need it anyway, see above. Maxim Dounin From mdounin at mdounin.ru Wed Feb 8 00:31:04 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Feb 2012 04:31:04 +0400 Subject: Autoindex module extension [patch] In-Reply-To: References: <20120206170024.GM67687@mdounin.ru> Message-ID: <20120208003104.GW67687@mdounin.ru> Hello! On Wed, Feb 08, 2012 at 01:49:39AM +0400, Max wrote: > > 06 ??????? 2012, 21:00 ?? Maxim Dounin : > > On Mon, Feb 06, 2012 at 08:15:59PM +0400, Max wrote: > > > I have written an extension for the Autoindex module that adds > > > two new directives. I've included the patch against the latest > > > 1.1.14 version of nginx, as well as the documentation for the > > > Wiki. > > > > There are no plans to extend autoindex with various > > customizations. Instead, we'll likely provide xml index to make > > arbitrary customization via xslt possible. > > Then could you maybe add my extension to the list of 3rd party > modules and patches? > > You can find the patch and the documentation here: > > NGiNXYZ NGiNX Autoindex Module Extension > http://nginxyzpro.berlios.de/nginxyz_nginx_autoindex_module_extension/ The only list of 3rd party modules we have is in wiki, which you may update yourself if you feel it's usable for others. (You may also want to look at fancy index 3rd party module, which is expected to cover most of customization needs as of now. See http://wiki.nginx.org/NgxFancyIndex.) Maxim Dounin From nginx-forum at nginx.us Wed Feb 8 01:03:52 2012 From: nginx-forum at nginx.us (Langley) Date: Tue, 07 Feb 2012 20:03:52 -0500 Subject: HA cookie-based routing in the cloud Message-ID: <042d44f180056fea5e93107388d16caf.NginxMailingListEnglish@forum.nginx.org> We are looking for a solution for "sticky bit routing" based on cookies that will run on Amazon's EC2 cloud. I've looked at the sticky module (although not the source yet) and it ~may~ be capable of doing what we need, but I thought I'd ask the forum to see if anyone else has already tried this solution. (Without knowing the implementation, it's impossible to say if our requirements can be met by the sticky module) The challenge that we have is that unlike a traditional system where the sticky bit routing would be to one of a set of predefined servers, in our case, the servers will be created dynamically in the cloud. We can't "configure" them when we start the nginx routing layer. Although we may have some "back up" servers, that can be used if no cookie is in the request OR if the cookie specifies a server that has died, in general the servers that the cookie will be specifying will be dynamically created and we will assign them to the requests "ourselves" (not needing the nginx layer to round-robin assign them to one of a pool of fixed address servers). So my question may come down to: "Can the sticky module route to servers not predefined in the initial configuration"? I can easily imagine an implementation that could handle this, but wanted to ask if the sticky module already does this. Thanks in advance -- Langley Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222088,222088#msg-222088 From andrew at andrewloe.com Wed Feb 8 01:13:42 2012 From: andrew at andrewloe.com (W. 
Andrew Loe III) Date: Tue, 7 Feb 2012 17:13:42 -0800 Subject: proxy_buffering off causes truncated responses when backend emits response in small chunks In-Reply-To: <20120208002358.GV67687@mdounin.ru> References: <20120208002358.GV67687@mdounin.ru> Message-ID: Mystery solved. I will just proxy_max_temp_file_size 0, as the intention was to just avoid the disk. Thank you! On Tue, Feb 7, 2012 at 4:23 PM, Maxim Dounin wrote: > Hello! > > On Tue, Feb 07, 2012 at 03:54:08PM -0800, W. Andrew Loe III wrote: > > > We use nginx as both a load-balancer and webserver. This issue is with > the > > nginx functioning as a load-balancer. > > > > We reverse proxy to 6 nginx webservers running a number of Unicorn > (Rails) > > application servers, these webserver nginx instances also run Evan > Miller's > > mod_zip to assemble archives on the fly. We have discovered under certain > > circumstances the load-balancing nginx will "hang-up" on the webserver if > > the load-balancer is configured with proxy_buffering off, however > > proxy_buffering on seems to succeed. We would prefer to run without > > proxy_buffering to prevent the load-balancer's local storage from being > > overrun. > > If you want to disable disk buffering you don't need to disable > buffering at all. Use > > proxy_max_temp_file_size 0; > > instead. > > The only valid reason to disable buffering completely with > "proxy_buffering off" is when you need even single byte from > backend to be immediately passed to client, i.e. as in some > streaming / long polling cases. > > > Our default setup uses nginx 0.7.65 for both the load-balancer and the > > webserver, however switching to using 1.0.12 as the load-balancer has the > > same problem. We have experimented with different software doing the > > load-balancing and it does not exhibit this issue. > > > > I've have linked the nginx configuration file we're using on the load > > balancer, and debug logs for both 0.7.65 and 1.0.12. > > > > https://x.onehub.com/transfers/sg32zsar > > > > The buffering on log is very long, but it does show success of a 4.8GB > > response, the other responses always fail at the same point (826 MB). > > > > The client sees the following (in access.log): > > > > $ curl -b cookie.txt -o US.zip > https://mydomain.com/folders/7816672/archive > > % Total % Received % Xferd Average Speed Time Time Time > > Current > > Dload Upload Total Spent Left > > Speed > > 16 4922M 16 826M 0 0 1906k 0 0:44:03 0:07:23 0:36:40 > > 1075k > > curl: (18) transfer closed with 4294967296 bytes remaining to read > > The response is over 4G, and this won't work with "proxy_buffering > off" in 1.0.x on 32bit systems. The non-buffered mode was > originally designed for small memcached responses and used to use > size_t for length storage. > > You have to upgrade to 1.1.x where it now uses off_t and will be > able to handle large responses even on 32bit platforms. > Alternatively, just forget about "proxy_buffering off" as you > don't need it anyway, see above. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Feb 8 01:44:19 2012 From: nginx-forum at nginx.us (tyanhly) Date: Tue, 07 Feb 2012 20:44:19 -0500 Subject: Remove query string on rewrite In-Reply-To: <64b56cd03f477f04e4b3ed9ae4dead76.NginxMailingListEnglish@forum.nginx.org> References: <64b56cd03f477f04e4b3ed9ae4dead76.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6342302781bb2881ab64260c91af46f4.NginxMailingListEnglish@forum.nginx.org> Thank Mikhail Mazursky, I try now. tyanhly Posted at Nginx Forum: http://forum.nginx.org/read.php?2,118933,222091#msg-222091 From nginx-forum at nginx.us Wed Feb 8 01:54:39 2012 From: nginx-forum at nginx.us (tyanhly) Date: Tue, 07 Feb 2012 20:54:39 -0500 Subject: Remove query string on rewrite In-Reply-To: <6342302781bb2881ab64260c91af46f4.NginxMailingListEnglish@forum.nginx.org> References: <64b56cd03f477f04e4b3ed9ae4dead76.NginxMailingListEnglish@forum.nginx.org> <6342302781bb2881ab64260c91af46f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <59aa93b4de6b088b700bb7d2770d99a7.NginxMailingListEnglish@forum.nginx.org> Thank Maxim Dounin for take care, I noted, and try now tyanhly Posted at Nginx Forum: http://forum.nginx.org/read.php?2,118933,222092#msg-222092 From quintinpar at gmail.com Wed Feb 8 05:43:51 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 11:13:51 +0530 Subject: How do I cache a page for logged-in user? Message-ID: Hi all, I maintain sessions with cookie name as "company_sessionid" having the session info. Now if I need to cache for that user or per user, how should I construct my proxy_cache_key? -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Wed Feb 8 05:49:34 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 08 Feb 2012 09:49:34 +0400 Subject: Autoindex module extension [patch] In-Reply-To: <20120208003104.GW67687@mdounin.ru> References: <20120208003104.GW67687@mdounin.ru> Message-ID: 08 ??????? 2012, 04:31 ?? Maxim Dounin : > On Wed, Feb 08, 2012 at 01:49:39AM +0400, Max wrote: > > 06 ??????? 2012, 21:00 ?? Maxim Dounin : > > > On Mon, Feb 06, 2012 at 08:15:59PM +0400, Max wrote: > > > > I have written an extension for the Autoindex module that adds > > > > two new directives. I've included the patch against the latest > > > > 1.1.14 version of nginx, as well as the documentation for the > > > > Wiki. > > > > > > There are no plans to extend autoindex with various > > > customizations. Instead, we'll likely provide xml index to make > > > arbitrary customization via xslt possible. > > > > Then could you maybe add my extension to the list of 3rd party > > modules and patches? > > > > You can find the patch and the documentation here: > > > > NGiNXYZ NGiNX Autoindex Module Extension > > http://nginxyzpro.berlios.de/nginxyz_nginx_autoindex_module_extension/ > > The only list of 3rd party modules we have is in wiki, which you > may update yourself if you feel it's usable for others. I have updated the wiki to include my extension: http://wiki.nginx.org/3rdPartyModules#Third_party_patches > (You may also want to look at fancy index 3rd party module, which is > expected to cover most of customization needs as of now. See > http://wiki.nginx.org/NgxFancyIndex.) I'm familiar with that module, but it doesn't provide the features my extension provides, which is why I wrote my extension. 
While the fancy_index_header directive could be used to include a customized header file to omit the "Index of" string in the title and in the header (which is what my autoindex_omit_index_of directive does), there is no way to customize the displayed directory name with the fancy index module, it always displays the URI, unlike my autoindex_dirname_to_display directive, which can be set to display anything, including variables, instead of the standard URI-based "Index of /directory/subdirectory/" string. Max From nginxyz at mail.ru Wed Feb 8 06:06:03 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 08 Feb 2012 10:06:03 +0400 Subject: How do I cache a page for logged-in user? In-Reply-To: References: Message-ID: 08 ??????? 2012, 09:44 ?? Quintin Par : > I maintain sessions with cookie name as "company_sessionid" having the > session info. > > Now if I need to cache for that user or per user, how should I construct my > proxy_cache_key? proxy_cache_key "$scheme$host$request_uri$cookie_company_sessionid"; Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Wed Feb 8 06:38:51 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 12:08:51 +0530 Subject: How do I cache a page for logged-in user? In-Reply-To: References: Message-ID: Thanks a lot! -Cherian On Wed, Feb 8, 2012 at 11:36 AM, Max wrote: > 08 ??????? 2012, 09:44 ?? Quintin Par **: > I maintain sessions with > cookie name as "company_sessionid" having the > session info. > > Now if I > need to cache for that user or per user, how should I construct my > > proxy_cache_key? proxy_cache_key > "$scheme$host$request_uri$cookie_company_sessionid"; Max > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 8 06:45:43 2012 From: nginx-forum at nginx.us (zealot83) Date: Wed, 08 Feb 2012 01:45:43 -0500 Subject: Intermittent "504 SSL_do_handshake() failed" In-Reply-To: <59333ecc9df2dafd2f309856a7ad8566.NginxMailingListEnglish@forum.nginx.org> References: <20101024183103.GB19195@rambler-co.ru> <59333ecc9df2dafd2f309856a7ad8566.NginxMailingListEnglish@forum.nginx.org> Message-ID: A similar problem occurred in my case. Following is the ssl server configuration. At first I used AJP. But after I could not find a corresponding directive to proxy_ssl_session_reuse, I changed to proxy. upstream loadbalancer { server 127.0.0.1:8080; keepalive 100; } server { listen 443 default ssl; ssl on; ...... location / { #access_log off; #ajp_pass loadbalancer; proxy_pass http://loadbalancer; proxy_ssl_session_reuse off; } } Here's the error log: 2012/02/08 15:03:49 [info] 13273#0: *1 SSL_do_handshake() failed (SSL: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown) while SSL handshaking, Any help would be greatly appreciated! Thanks in advance! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,144108,222098#msg-222098 From quintinpar at gmail.com Wed Feb 8 07:44:04 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 13:14:04 +0530 Subject: Invalidate just one page under a location directive. Message-ID: Hi all, A section of my virtual(say /industry) is cached with proxy_cache_key $scheme$host$request_uri; key. This will cache all pages under this virtual. 
I do cache invalidation by firing a request with proxy_cache_bypass Now if I need to invalidate cache for a page under this virtual, how should I go about doing it? Say /industry is the location directive I need to invalidate just /industry/category/cars How do I do this? -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Feb 8 09:06:22 2012 From: ne at vbart.ru (=?utf-8?b?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Wed, 8 Feb 2012 13:06:22 +0400 Subject: Invalidate just one page under a location directive. In-Reply-To: References: Message-ID: <201202081306.22590.ne@vbart.ru> On Wednesday 08 February 2012 11:44:04 Quintin Par wrote: > Hi all, > > A section of my virtual(say /industry) is cached with > > proxy_cache_key $scheme$host$request_uri; > > key. This will cache all pages under this virtual. I do cache invalidation > by firing a request with > > proxy_cache_bypass > > Now if I need to invalidate cache for a page under this virtual, how should > I go about doing it? > > Say /industry is the location directive > > I need to invalidate just > > /industry/category/cars > > How do I do this? > The best way would be to create a location for this particular page: location /industry/category/cars { ... } http://nginx.org/en/docs/http/ngx_http_core_module.html#location also, you can utilize the map directive functionality, e.g: map $uri $is_cached_uri { default 1; /industry/category/cars 0; } proxy_cache_bypass $is_cached_uri; http://wiki.nginx.org/HttpMapModule wbr, Valentin V. Bartenev From quintinpar at gmail.com Wed Feb 8 09:36:31 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 15:06:31 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: <201202081306.22590.ne@vbart.ru> References: <201202081306.22590.ne@vbart.ru> Message-ID: Thanks would be difficult because my intention is to maintain per page cache. So we are talking about 1000+ pages and more being created dynamically. -Quintin 2012/2/8 ???????? ???????? > On Wednesday 08 February 2012 11:44:04 Quintin Par wrote: > > Hi all, > > > > A section of my virtual(say /industry) is cached with > > > > proxy_cache_key $scheme$host$request_uri; > > > > key. This will cache all pages under this virtual. I do cache > invalidation > > by firing a request with > > > > proxy_cache_bypass > > > > Now if I need to invalidate cache for a page under this virtual, how > should > > I go about doing it? > > > > Say /industry is the location directive > > > > I need to invalidate just > > > > /industry/category/cars > > > > How do I do this? > > > > The best way would be to create a location for this particular page: > > location /industry/category/cars { > ... > } > > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > > also, you can utilize the map directive functionality, e.g: > > map $uri $is_cached_uri { > default 1; > /industry/category/cars 0; > } > > proxy_cache_bypass $is_cached_uri; > > http://wiki.nginx.org/HttpMapModule > > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Feb 8 10:43:10 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 8 Feb 2012 14:43:10 +0400 Subject: Invalidate just one page under a location directive. 
In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: <201202081443.10136.ne@vbart.ru> On Wednesday 08 February 2012 13:36:31 Quintin Par wrote: > Thanks would be difficult because my intention is to maintain per page > cache. So we are talking about 1000+ pages and more being created > dynamically. > Probably, I misunderstood. My solution switches cache off completely for particular uri. Maybe this module is useful for you: http://labs.frickle.com/nginx_ngx_cache_purge/ wbr, Valentin V. Bartenev From quintinpar at gmail.com Wed Feb 8 11:08:31 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 16:38:31 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: <201202081443.10136.ne@vbart.ru> References: <201202081306.22590.ne@vbart.ru> <201202081443.10136.ne@vbart.ru> Message-ID: I was hoping for a solution from the vanilla builds On Wed, Feb 8, 2012 at 4:13 PM, Valentin V. Bartenev wrote: > On Wednesday 08 February 2012 13:36:31 Quintin Par wrote: > > Thanks would be difficult because my intention is to maintain per page > > cache. So we are talking about 1000+ pages and more being created > > dynamically. > > > > Probably, I misunderstood. My solution switches cache off completely for > particular uri. > > Maybe this module is useful for you: > http://labs.frickle.com/nginx_ngx_cache_purge/ > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From belloni at imavis.com Wed Feb 8 11:30:10 2012 From: belloni at imavis.com (Cristiano Belloni) Date: Wed, 08 Feb 2012 12:30:10 +0100 Subject: Beginner developer question: streaming mjpeg in nginx Message-ID: <4F325CC2.4070608@imavis.com> Hi to all, I'd like to stream live, continuous Motion JPEG throught nginx web server. Right now, I have a little program that captures the frames from my hardware, separates them with boundaries and send them to stdout. Here's, schematically, what my program does: // As soon as someone connects fprintf (stdout, "Connection: close"\ "\r\n" \ "Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0" \ "\r\n" \ "Pragma: no-cache" \ "\r\n" \ "Content-Type: multipart/x-mixed-replace;boundary=" BOUNDARY \ "\r\n" \ "\r\n" \ "--" BOUNDARY "\r\n"); [...] while (1) { //Magically take the frame from the shared memory fprintf(stdout, "Content-Type: image/jpeg\r\n" \ "Content-Length: %d\r\n" \ "\r\n", frame_length); DBG("Sending MJPEG frame\n"); if ((fwrite(frame_data, frame_length, sizeof (char) , stdout)) == 0 ) { ERR ("Error sending frame"); break; } DBG("Sending boundary\n"); fprintf(stdout, "\r\n--" BOUNDARY "\r\n"); } I've seen examples of nginx modules that stream h264 files, but this case is quite different: live streaming means a potentially infinite MJPEG stream and no buffering done by the webserver. I'm an nginx beginner, so I'm asking for some directions. How can I stream this (potentially infinite) series of frames? A CGI is sufficient o will I need to write a module? Is it even possible to do something like this with nginx? Thanks, Cristiano. -- Belloni Cristiano Imavis Srl. www.imavis.com belloni at imavis.com -------------- next part -------------- An HTML attachment was scrubbed... 
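A rough sketch of how such a live stream is often fronted: expose the capture program as a plain HTTP service and let nginx proxy it with response buffering switched off, so each JPEG frame is relayed as soon as it is written instead of being held in proxy buffers. The backend address 127.0.0.1:8081 and the /stream prefix below are assumptions made up for the illustration, not a tested setup:

    location /stream {
        # hypothetical capture program exposed as an HTTP service
        proxy_pass http://127.0.0.1:8081;

        # relay the multipart/x-mixed-replace frames as they arrive
        # instead of buffering the (potentially infinite) response
        proxy_buffering off;

        # keep the long-lived upstream connection open
        proxy_read_timeout 1h;
        proxy_send_timeout 1h;
    }

With buffering disabled nginx passes data to the client as soon as it receives it from the upstream, which is what a never-ending MJPEG response needs.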
URL: From nginxyz at mail.ru Wed Feb 8 12:02:16 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 08 Feb 2012 16:02:16 +0400 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: 08 ??????? 2012, 13:37 ?? Quintin Par : > Thanks would be difficult because my intention is to maintain per page > cache. So we are talking about 1000+ pages and more being created > dynamically. > > -Quintin > > 2012/2/8 ???????? ???????? > > > On Wednesday 08 February 2012 11:44:04 Quintin Par wrote: > > > Hi all, > > > > > > A section of my virtual(say /industry) is cached with > > > > > > proxy_cache_key $scheme$host$request_uri; > > > > > > key. This will cache all pages under this virtual. I do cache > > invalidation > > > by firing a request with > > > > > > proxy_cache_bypass > > > > > > Now if I need to invalidate cache for a page under this virtual, how > > should > > > I go about doing it? > > > > > > Say /industry is the location directive > > > > > > I need to invalidate just > > > > > > /industry/category/cars > > > > > > How do I do this? Do you want to do per-request cache invalidation based on client information (header values, cookies, originating IP etc.), so the cache is bypassed whenever a client sends a request that matches your cache invalidation criteria, or do you want to be able to invalidate a certain URI for all future clients and requests by sending some kind of "switch caching off for this URI" request? Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Wed Feb 8 12:19:04 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 17:49:04 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: Simples way to say this would be, invalidate proxy_cache_key but not all of the cache. On Wed, Feb 8, 2012 at 5:32 PM, Max wrote: > 08 ??????? 2012, 13:37 ?? Quintin Par **: > Thanks would be difficult > because my intention is to maintain per page > cache. So we are talking > about 1000+ pages and more being created > dynamically. > > -Quintin > > > 2012/2/8 ???????? ???????? ** > > > On Wednesday 08 February 2012 > 11:44:04 Quintin Par wrote: > > > Hi all, > > > > > > A section of my > virtual(say /industry) is cached with > > > > > > proxy_cache_key > $scheme$host$request_uri; > > > > > > key. This will cache all pages under > this virtual. I do cache > > invalidation > > > by firing a request with > > > > > > > proxy_cache_bypass > > > > > > Now if I need to invalidate cache > for a page under this virtual, how > > should > > > I go about doing it? > > > > > > > Say /industry is the location directive > > > > > > I need to > invalidate just > > > > > > /industry/category/cars > > > > > > How do I do > this? Do you want to do per-request cache invalidation based on client > information (header values, cookies, originating IP etc.), so the cache is > bypassed whenever a client sends a request that matches your cache > invalidation criteria, or do you want to be able to invalidate a certain > URI for all future clients and requests by sending some kind of "switch > caching off for this URI" request? Max > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
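For the single-page case, the third-party ngx_cache_purge module mentioned earlier in the thread removes exactly one entry by key. A sketch along the lines of the module's documented example, assuming a cache zone named "one", the proxy_cache_key $scheme$host$request_uri used above, and purge requests restricted to localhost:

    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny  all;
        proxy_cache_purge one "$scheme$host$1";
    }

A request for /purge/industry/category/cars on the cached host would then drop only that one entry and leave the rest of the cache untouched.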
URL: From max at mxcrypt.com Wed Feb 8 12:49:29 2012 From: max at mxcrypt.com (Maxim Khitrov) Date: Wed, 8 Feb 2012 07:49:29 -0500 Subject: Is sendfile_max_chunk needed when using aio on FreeBSD 9? Message-ID: Hi all, Following Igor's advice [1], I'm using the following configuration for file handling on FreeBSD 9.0 amd64: sendfile on; aio sendfile; tcp_nopush on; read_ahead 256K; My understanding of this setup is that sendfile, which is a synchronous operation, is restricted to sending only the bytes that are already buffered in memory. Once the data has to be read from disk, sendfile returns and nginx issues a 1-byte aio_read operation to buffer an additional 256 KB of data. The question is whether it is beneficial to use sendfile_max_chunk option is this configuration as well? Since sendfile is guaranteed to return as soon as it runs out of buffered data, is there any real advantage to further restricting how much it can send in a single call? By the way, is tcp_nopush needed here just to make sure that the first packet, which contains headers, doesn't get sent out without any data in it? I think this would also prevent transmission of partial packets when sendfile runs out of data to send and nginx has to wait for the aio_read to finish. Wouldn't it be better in this case to send the packet without waiting for disk I/O? - Max [1] http://mailman.nginx.org/pipermail/nginx/2011-May/026651.html From quintinpar at gmail.com Wed Feb 8 12:57:57 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 18:27:57 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: I don?t think I made that clear. To put it simply, Invalidate cache for a particular key entry but not all of the cache e.g for location = /vehicles { proxy_cache_key $request_uri; } Hashtable for cache: [ hittp://jj.com/vehicles/cars => hittp://jj.com/vehicles/cars hittp://jj.com/vehicles/bus => hittp://jj.com/vehicles/bus hittp://jj.com/vehicles/bike=> hittp://jj.com/vehicles/bike ] I should be able to invalidate just cars because my page for cars have changed but not others. Hope you got the point. Quintin On Wed, Feb 8, 2012 at 5:49 PM, Quintin Par wrote: > Simples way to say this would be, invalidate proxy_cache_key but not all > of the cache. > > On Wed, Feb 8, 2012 at 5:32 PM, Max wrote: > >> 08 ??????? 2012, 13:37 ?? Quintin Par **: > Thanks would be difficult >> because my intention is to maintain per page > cache. So we are talking >> about 1000+ pages and more being created > dynamically. > > -Quintin > > >> 2012/2/8 ???????? ???????? ** > > > On Wednesday 08 February 2012 >> 11:44:04 Quintin Par wrote: > > > Hi all, > > > > > > A section of my >> virtual(say /industry) is cached with > > > > > > proxy_cache_key >> $scheme$host$request_uri; > > > > > > key. This will cache all pages under >> this virtual. I do cache > > invalidation > > > by firing a request with > >> > > > > > proxy_cache_bypass > > > > > > Now if I need to invalidate cache >> for a page under this virtual, how > > should > > > I go about doing it? > >> > > > > > Say /industry is the location directive > > > > > > I need to >> invalidate just > > > > > > /industry/category/cars > > > > > > How do I do >> this? 
Do you want to do per-request cache invalidation based on client >> information (header values, cookies, originating IP etc.), so the cache is >> bypassed whenever a client sends a request that matches your cache >> invalidation criteria, or do you want to be able to invalidate a certain >> URI for all future clients and requests by sending some kind of "switch >> caching off for this URI" request? Max >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 8 13:46:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Feb 2012 17:46:18 +0400 Subject: Is sendfile_max_chunk needed when using aio on FreeBSD 9? In-Reply-To: References: Message-ID: <20120208134618.GY67687@mdounin.ru> Hello! On Wed, Feb 08, 2012 at 07:49:29AM -0500, Maxim Khitrov wrote: > Hi all, > > Following Igor's advice [1], I'm using the following configuration for > file handling on FreeBSD 9.0 amd64: > > sendfile on; > aio sendfile; > tcp_nopush on; > read_ahead 256K; > > My understanding of this setup is that sendfile, which is a > synchronous operation, is restricted to sending only the bytes that > are already buffered in memory. Once the data has to be read from > disk, sendfile returns and nginx issues a 1-byte aio_read operation to > buffer an additional 256 KB of data. > > The question is whether it is beneficial to use sendfile_max_chunk > option is this configuration as well? Since sendfile is guaranteed to > return as soon as it runs out of buffered data, is there any real > advantage to further restricting how much it can send in a single > call? It may make sense as in exreame conditions (i.e. if sendfile(NODISKIO) fails to send anything right after aio preread) nginx will fallback to normal sendfile() (without SF_NODISKIO). On the other hand, if the above happens - it means you have problem anyway. > By the way, is tcp_nopush needed here just to make sure that the first > packet, which contains headers, doesn't get sent out without any data > in it? I think this would also prevent transmission of partial packets > when sendfile runs out of data to send and nginx has to wait for the > aio_read to finish. Wouldn't it be better in this case to send the > packet without waiting for disk I/O? The tcp_nopush is expected to prevent transmission of incomplete packets. I see no problem here. Maxim Dounin From max at mxcrypt.com Wed Feb 8 14:48:24 2012 From: max at mxcrypt.com (Maxim Khitrov) Date: Wed, 8 Feb 2012 09:48:24 -0500 Subject: Is sendfile_max_chunk needed when using aio on FreeBSD 9? In-Reply-To: <20120208134618.GY67687@mdounin.ru> References: <20120208134618.GY67687@mdounin.ru> Message-ID: On Wed, Feb 8, 2012 at 8:46 AM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 08, 2012 at 07:49:29AM -0500, Maxim Khitrov wrote: > >> Hi all, >> >> Following Igor's advice [1], I'm using the following configuration for >> file handling on FreeBSD 9.0 amd64: >> >> sendfile on; >> aio sendfile; >> tcp_nopush on; >> read_ahead 256K; >> >> My understanding of this setup is that sendfile, which is a >> synchronous operation, is restricted to sending only the bytes that >> are already buffered in memory. Once the data has to be read from >> disk, sendfile returns and nginx issues a 1-byte aio_read operation to >> buffer an additional 256 KB of data. 
>> >> The question is whether it is beneficial to use sendfile_max_chunk >> option is this configuration as well? Since sendfile is guaranteed to >> return as soon as it runs out of buffered data, is there any real >> advantage to further restricting how much it can send in a single >> call? > > It may make sense as in exreame conditions (i.e. if > sendfile(NODISKIO) fails to send anything right after aio preread) > nginx will fallback to normal sendfile() (without SF_NODISKIO). > On the other hand, if the above happens - it means you have > problem anyway. I see. So there shouldn't be any harm in specifying something like 'sendfile_max_chunk 512K', since that limit would almost never come into play. Would I see anything in the log files when the fallback to the normal sendfile happens? >> By the way, is tcp_nopush needed here just to make sure that the first >> packet, which contains headers, doesn't get sent out without any data >> in it? I think this would also prevent transmission of partial packets >> when sendfile runs out of data to send and nginx has to wait for the >> aio_read to finish. Wouldn't it be better in this case to send the >> packet without waiting for disk I/O? > > The tcp_nopush is expected to prevent transmission of incomplete > packets. ?I see no problem here. The way I was thinking about it is that if the system has a packet that is half-full from the last sendfile call, and is now going to spend some number of ms buffering the next chunk, then for interactivity and throughput reasons it may be better to send the packet now. It would consume a bit more bandwidth for the TCP/IP headers, but as long as the entire packet is sent before the aio_read call is finished, you win on the throughput front. This might be completely insignificant, I'm not sure. - Max From mdounin at mdounin.ru Wed Feb 8 15:33:41 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Feb 2012 19:33:41 +0400 Subject: Is sendfile_max_chunk needed when using aio on FreeBSD 9? In-Reply-To: References: <20120208134618.GY67687@mdounin.ru> Message-ID: <20120208153340.GA67687@mdounin.ru> Hello! On Wed, Feb 08, 2012 at 09:48:24AM -0500, Maxim Khitrov wrote: > On Wed, Feb 8, 2012 at 8:46 AM, Maxim Dounin wrote: > > Hello! > > > > On Wed, Feb 08, 2012 at 07:49:29AM -0500, Maxim Khitrov wrote: > > > >> Hi all, > >> > >> Following Igor's advice [1], I'm using the following configuration for > >> file handling on FreeBSD 9.0 amd64: > >> > >> sendfile on; > >> aio sendfile; > >> tcp_nopush on; > >> read_ahead 256K; > >> > >> My understanding of this setup is that sendfile, which is a > >> synchronous operation, is restricted to sending only the bytes that > >> are already buffered in memory. Once the data has to be read from > >> disk, sendfile returns and nginx issues a 1-byte aio_read operation to > >> buffer an additional 256 KB of data. > >> > >> The question is whether it is beneficial to use sendfile_max_chunk > >> option is this configuration as well? Since sendfile is guaranteed to > >> return as soon as it runs out of buffered data, is there any real > >> advantage to further restricting how much it can send in a single > >> call? > > > > It may make sense as in exreame conditions (i.e. if > > sendfile(NODISKIO) fails to send anything right after aio preread) > > nginx will fallback to normal sendfile() (without SF_NODISKIO). > > On the other hand, if the above happens - it means you have > > problem anyway. > > I see. 
So there shouldn't be any harm in specifying something like > 'sendfile_max_chunk 512K', since that limit would almost never come > into play. > > Would I see anything in the log files when the fallback to the normal > sendfile happens? There will be alert about "sendfile(...) returned busy again". > >> By the way, is tcp_nopush needed here just to make sure that the first > >> packet, which contains headers, doesn't get sent out without any data > >> in it? I think this would also prevent transmission of partial packets > >> when sendfile runs out of data to send and nginx has to wait for the > >> aio_read to finish. Wouldn't it be better in this case to send the > >> packet without waiting for disk I/O? > > > > The tcp_nopush is expected to prevent transmission of incomplete > > packets. ?I see no problem here. > > The way I was thinking about it is that if the system has a packet > that is half-full from the last sendfile call, and is now going to > spend some number of ms buffering the next chunk, then for > interactivity and throughput reasons it may be better to send the > packet now. > > It would consume a bit more bandwidth for the TCP/IP headers, but as > long as the entire packet is sent before the aio_read call is > finished, you win on the throughput front. This might be completely > insignificant, I'm not sure. If you don't care about pps, you'll probably won't set tcp_nopush at all. Maxim Dounin From quintinpar at gmail.com Wed Feb 8 15:59:21 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 21:29:21 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: Can someone help? My small site on a tiny VM is being bombarded and the cache invalidation is clearing up everything. Which means warming up the caching takes a lot of expensive queries. -Quintin On Wed, Feb 8, 2012 at 6:27 PM, Quintin Par wrote: > I don?t think I made that clear. > > To put it simply, Invalidate cache for a particular key entry but not all > of the cache > > e.g for > > location = /vehicles { > > proxy_cache_key $request_uri; > > } > > Hashtable for cache: > > [ > > hittp://jj.com/vehicles/cars => hittp://jj.com/vehicles/cars > > hittp://jj.com/vehicles/bus => hittp://jj.com/vehicles/bus > > hittp://jj.com/vehicles/bike=> hittp://jj.com/vehicles/bike > > ] > > I should be able to invalidate just cars because my page for cars have > changed but not others. > > Hope you got the point. > > Quintin > > > > On Wed, Feb 8, 2012 at 5:49 PM, Quintin Par wrote: > >> Simples way to say this would be, invalidate proxy_cache_key but not all >> of the cache. >> >> On Wed, Feb 8, 2012 at 5:32 PM, Max wrote: >> >>> 08 ??????? 2012, 13:37 ?? Quintin Par **: > Thanks would be difficult >>> because my intention is to maintain per page > cache. So we are talking >>> about 1000+ pages and more being created > dynamically. > > -Quintin > > >>> 2012/2/8 ???????? ???????? ** > > > On Wednesday 08 February 2012 >>> 11:44:04 Quintin Par wrote: > > > Hi all, > > > > > > A section of my >>> virtual(say /industry) is cached with > > > > > > proxy_cache_key >>> $scheme$host$request_uri; > > > > > > key. This will cache all pages under >>> this virtual. I do cache > > invalidation > > > by firing a request with > >>> > > > > > proxy_cache_bypass > > > > > > Now if I need to invalidate cache >>> for a page under this virtual, how > > should > > > I go about doing it? 
> >>> > > > > > Say /industry is the location directive > > > > > > I need to >>> invalidate just > > > > > > /industry/category/cars > > > > > > How do I do >>> this? Do you want to do per-request cache invalidation based on client >>> information (header values, cookies, originating IP etc.), so the cache is >>> bypassed whenever a client sends a request that matches your cache >>> invalidation criteria, or do you want to be able to invalidate a certain >>> URI for all future clients and requests by sending some kind of "switch >>> caching off for this URI" request? Max >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Feb 8 16:15:02 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 08 Feb 2012 16:15:02 +0000 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> Message-ID: <871uq5ti7t.wl%appa@perusio.net> On 8 Fev 2012 15h59 WET, quintinpar at gmail.com wrote: > Can someone help? My small site on a tiny VM is being bombarded and > the cache invalidation is clearing up everything. Which means > warming up the caching takes a lot of expensive queries. What cache invalidation? What caching strategy are you using? Are working with a content based expiration logic or from a microcaching perspective? You can inspect the cache using a small shell script: https://github.com/perusio/nginx-cache-inspector --- appa From quintinpar at gmail.com Wed Feb 8 16:22:43 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 8 Feb 2012 21:52:43 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: <871uq5ti7t.wl%appa@perusio.net> References: <201202081306.22590.ne@vbart.ru> <871uq5ti7t.wl%appa@perusio.net> Message-ID: Invalidate a cache page using the bypass mechanism proxy_cache_bypass, not expiration To put it simply, Invalidate cache for a particular key entry but not all of the cache e.g for location = /vehicles { proxy_cache_key $request_uri; } Hashtable for cache: [ http://jj.com/vehicles/cars => cached page for cars http://jj.com/vehicles/bus => cached page for bus http://jj.com/vehicles/bike=> cached page for bike + 10,000 key, values ] I should be able to invalidate just "cars" because my page for cars have changed but not others. Say a person added a comment in the cars page. Another analogy is to invalidate cache for a single blog post when all the blog posts have been cached under a single location directive. -Quintin On Wed, Feb 8, 2012 at 9:45 PM, Ant?nio P. P. Almeida wrote: > On 8 Fev 2012 15h59 WET, quintinpar at gmail.com wrote: > > > Can someone help? My small site on a tiny VM is being bombarded and > > the cache invalidation is clearing up everything. Which means > > warming up the caching takes a lot of expensive queries. > > What cache invalidation? What caching strategy are you using? > Are working with a content based expiration logic or from a > microcaching perspective? > > You can inspect the cache using a small shell script: > > https://github.com/perusio/nginx-cache-inspector > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
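With vanilla nginx, the bypass mechanism already in use here can also refresh a single entry: a request that matches proxy_cache_bypass is fetched from the backend, and, provided proxy_no_cache does not also match, the fresh response replaces the stored copy for that key only. A sketch in which the X-Refresh-Cache header name, the "one" zone and the "backend" upstream are all assumptions:

    location /industry/ {
        proxy_cache      one;
        proxy_cache_key  $scheme$host$request_uri;

        # a request carrying this hypothetical header skips the cached copy,
        # and the response it fetches overwrites the entry for this key only
        proxy_cache_bypass $http_x_refresh_cache;

        proxy_pass http://backend;
    }

Firing something like curl -H 'X-Refresh-Cache: 1' at /industry/category/cars after that page changes re-primes just that page while the rest of the cache stays warm.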
URL: From nginx-forum at nginx.us Wed Feb 8 17:03:12 2012 From: nginx-forum at nginx.us (Langley) Date: Wed, 08 Feb 2012 12:03:12 -0500 Subject: HA cookie-based routing in the cloud In-Reply-To: <042d44f180056fea5e93107388d16caf.NginxMailingListEnglish@forum.nginx.org> References: <042d44f180056fea5e93107388d16caf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <73b3ad009f2ed800947c97c91d47bb85.NginxMailingListEnglish@forum.nginx.org> Is this the right forum to ask questions about the nginx-sticky-module? I checked here: http://code.google.com/p/nginx-sticky-module/ and didn't see any mailing list, forum or google group. TIA -- Langley Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222088,222120#msg-222120 From appa at perusio.net Wed Feb 8 17:10:34 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 08 Feb 2012 17:10:34 +0000 Subject: Invalidate just one page under a location directive. In-Reply-To: References: <201202081306.22590.ne@vbart.ru> <871uq5ti7t.wl%appa@perusio.net> Message-ID: <87zkcts12t.wl%appa@perusio.net> On 8 Fev 2012 16h22 WET, quintinpar at gmail.com wrote: > Invalidate a cache page using the bypass mechanism > proxy_cache_bypass, not expiration IMHO the best option would be from the application side send a 'X-Accel-Expires: 0' header. But this needs to be done at the *app level*. Unless you place something in the headers that the application returns Nginx there's no way of Nginx to know what is going on. > To put it simply, Invalidate cache for a particular key entry but > not all of the cache > e.g for > > location = /vehicles { > > proxy_cache_key $request_uri; > > } > > Hashtable for cache: > > [ > > http://jj.com/vehicles/cars => cached page for cars > > http://jj.com/vehicles/bus => cached page for bus > > http://jj.com/vehicles/bike=> cached page for bike > + 10,000 key, values > ] > I should be able to invalidate just "cars" because my page for cars > have changed but not others. Say a person added a comment in the > cars page. Another analogy is to invalidate cache for a single blog > post when all the blog posts have been cached under a single > location directive. -Quintin Send a 'X-Accel-Expires: 0' header from the application at: http://jj.com/vehicles/cars --- appa From reallfqq-nginx at yahoo.fr Thu Feb 9 00:02:33 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 Feb 2012 19:02:33 -0500 Subject: Unique server for all domains or a server per domain? Message-ID: Hello, I am currently using a unique server conf for all my domains. When I wanna restrain certain activity to certain domains (subdirectories, URL rewriting, etc.) I do not have other choice than using 'if' on the $host variable leading to some complications due to the unreliable nature of the 'if' clause. The directory from which the content is served is determined by the hostname. On the other side, is using several servers to listen on several domains the best solution? Since NGinx is event-based and not client-based maybe that's not a problem anymore... But not so long ago I was stuck with Apache. I still need to get used to that (great!) change. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Thu Feb 9 00:58:37 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 09 Feb 2012 04:58:37 +0400 Subject: Unique server for all domains or a server per domain? In-Reply-To: References: Message-ID: 09 ??????? 2012, 04:03 ?? "B.R." 
: > I am currently using a unique server conf for all my domains. > When I wanna restrain certain activity to certain domains (subdirectories, > URL rewriting, etc.) I do not have other choice than using 'if' on the > $host variable leading to some complications due to the unreliable nature > of the 'if' clause. > The directory from which the content is served is determined by the > hostname. > > On the other side, is using several servers to listen on several domains > the best solution? Using separate per-domain server configuration blocks is both more efficient and easier to configure and maintain. Using a single server configuration block for many domains requires many "if" blocks, which are computationally intensive to evaluate, so you should avoid using them whenever possible. Most of the time you'll be using "if" blocks for rewrites, so it's better to just use separate server configuration blocks with direct rewrites. Max From reallfqq-nginx at yahoo.fr Thu Feb 9 01:00:29 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 Feb 2012 20:00:29 -0500 Subject: Unique server for all domains or a server per domain? In-Reply-To: References: Message-ID: OK, received loud and clear! Many thanks, --- *B. R.* On Wed, Feb 8, 2012 at 19:58, Max wrote: > > 09 ??????? 2012, 04:03 ?? "B.R." : > > > I am currently using a unique server conf for all my domains. > > When I wanna restrain certain activity to certain domains > (subdirectories, > > URL rewriting, etc.) I do not have other choice than using 'if' on the > > $host variable leading to some complications due to the unreliable nature > > of the 'if' clause. > > The directory from which the content is served is determined by the > > hostname. > > > > On the other side, is using several servers to listen on several domains > > the best solution? > > Using separate per-domain server configuration blocks is both > more efficient and easier to configure and maintain. Using a > single server configuration block for many domains requires > many "if" blocks, which are computationally intensive to > evaluate, so you should avoid using them whenever possible. > Most of the time you'll be using "if" blocks for rewrites, > so it's better to just use separate server configuration > blocks with direct rewrites. > > Max > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 9 01:06:11 2012 From: nginx-forum at nginx.us (zealot83) Date: Wed, 08 Feb 2012 20:06:11 -0500 Subject: 504 SSL_do_handshake() failed Message-ID: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org> A similar problem to below case occurred in mine. http://forum.nginx.org/read.php?2,144108,222098#msg-222098 Following is the ssl server configuration. At first I used AJP. But after I could not find a corresponding directive to proxy_ssl_session_reuse, I changed to proxy. upstream loadbalancer { server 127.0.0.1:8080; keepalive 100; } server { listen 443 default ssl; ssl on; ...... location / { #access_log off; #ajp_pass loadbalancer; proxy_pass http://loadbalancer; proxy_ssl_session_reuse off; } } Here's the error log: 2012/02/08 15:03:49 [info] 13273#0: *1 SSL_do_handshake() failed (SSL: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown) while SSL handshaking, Any help would be greatly appreciated! Thanks in advance! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222128,222128#msg-222128 From nginxyz at mail.ru Thu Feb 9 01:48:11 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 09 Feb 2012 05:48:11 +0400 Subject: 504 SSL_do_handshake() failed In-Reply-To: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org> References: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org> Message-ID: 09 ??????? 2012, 05:06 ?? "zealot83" : > A similar problem to below case occurred in mine. > http://forum.nginx.org/read.php?2,144108,222098#msg-222098 > > Following is the ssl server configuration. > At first I used AJP. > But after I could not find a corresponding directive to > proxy_ssl_session_reuse, I changed to proxy. > > upstream loadbalancer { > server 127.0.0.1:8080; > > keepalive 100; > } > > server { > listen 443 default ssl; > ssl on; > > ...... > > location / { > #access_log off; > #ajp_pass loadbalancer; > proxy_pass http://loadbalancer; > proxy_ssl_session_reuse off; > } > } > > Here's the error log: > 2012/02/08 15:03:49 [info] 13273#0: *1 SSL_do_handshake() failed (SSL: > error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate > unknown) while SSL handshaking, You're probably using a self-signed certificate? First sync the time on all the servers and clients using NTP, then try using different browsers (other than Firefox and IE) and curl. If the problem persists, sync the time on the server again and regenerate the certificate. If that doesn't help, post your complete nginx.conf, the output of "nginx -V" and "uname -a", and the the version of OpenSSL on the server where you compiled nginx. Max From quintinpar at gmail.com Thu Feb 9 03:06:16 2012 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 9 Feb 2012 08:36:16 +0530 Subject: HA cookie-based routing in the cloud In-Reply-To: <73b3ad009f2ed800947c97c91d47bb85.NginxMailingListEnglish@forum.nginx.org> References: <042d44f180056fea5e93107388d16caf.NginxMailingListEnglish@forum.nginx.org> <73b3ad009f2ed800947c97c91d47bb85.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://notmysock.org/blog/hacks/haproxy-user-repinning.html On Wed, Feb 8, 2012 at 10:33 PM, Langley wrote: > Is this the right forum to ask questions about the nginx-sticky-module? > I checked here: http://code.google.com/p/nginx-sticky-module/ and didn't > see any mailing list, forum or google group. > > TIA > > -- Langley > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222088,222120#msg-222120 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Thu Feb 9 03:28:57 2012 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 9 Feb 2012 08:58:57 +0530 Subject: Invalidate just one page under a location directive. In-Reply-To: <87zkcts12t.wl%appa@perusio.net> References: <201202081306.22590.ne@vbart.ru> <871uq5ti7t.wl%appa@perusio.net> <87zkcts12t.wl%appa@perusio.net> Message-ID: Thanks Antonio. Great deal of help. Looking into the wiki I see ?The following response headers flag a response as uncacheable unless they are ignored : Set-Cookie Cache-Control containing "no-cache", "no-store", "private", or a "max-age" with a non-numeric or 0 value Expires with a time in the past X-Accel-Expires: 0? 
This also means that if I send a curl call from my backend with "Cache-Control: max-age=1", then the corresponding request/page in the cache will get invalidated the next second. X-Accel-Expires also does the same thing, I suppose. That solves my dilemma.

-Quintin

On Wed, Feb 8, 2012 at 10:40 PM, António P. P. Almeida wrote:

> On 8 Fev 2012 16h22 WET, quintinpar at gmail.com wrote:
>
> > Invalidate a cache page using the bypass mechanism
> > proxy_cache_bypass, not expiration
>
> IMHO the best option would be from the application side send a
> 'X-Accel-Expires: 0' header. But this needs to be done at the *app
> level*.
>
> Unless you place something in the headers that the application returns
> Nginx there's no way of Nginx to know what is going on.
>
> > To put it simply, Invalidate cache for a particular key entry but
> > not all of the cache
> > e.g for
> >
> > location = /vehicles {
> >
> > proxy_cache_key $request_uri;
> >
> > }
> >
> > Hashtable for cache:
> >
> > [
> > http://jj.com/vehicles/cars => cached page for cars
> > http://jj.com/vehicles/bus => cached page for bus
> > http://jj.com/vehicles/bike => cached page for bike
> > + 10,000 key, values
> > ]
> >
> > I should be able to invalidate just "cars" because my page for cars
> > have changed but not others. Say a person added a comment in the
> > cars page. Another analogy is to invalidate cache for a single blog
> > post when all the blog posts have been cached under a single
> > location directive. -Quintin
>
> Send a 'X-Accel-Expires: 0' header from the application at:
>
> http://jj.com/vehicles/cars
>
> --- appa
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Thu Feb 9 06:10:04 2012
From: nginx-forum at nginx.us (zealot83)
Date: Thu, 09 Feb 2012 01:10:04 -0500
Subject: 504 SSL_do_handshake() failed
In-Reply-To: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org>
References: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <38358fccf41f6376d3da1883cf020b66.NginxMailingListEnglish@forum.nginx.org>

Does [OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008] have any problem with nginx-0.8.54?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222128,222133#msg-222133

From quintinpar at gmail.com Thu Feb 9 07:04:33 2012
From: quintinpar at gmail.com (Quintin Par)
Date: Thu, 9 Feb 2012 12:34:33 +0530
Subject: Old thread: Cache for non-cookie users and fresh for cookie users
Message-ID:

Picking up an old thread for caching: http://nginx.2469901.n2.nabble.com/Help-cache-or-not-by-cookie-td3124462.html

Igor talks about caching by:

"No, currently the single way is:

1) add the cookie in proxy_cache_key

proxy_cache_key "http://cacheserver$request_uri $cookie_name";

2) add "X-Accel-Expires: 0" in response with the cookie."

But from my understanding of "X-Accel-Expires", it expires the cache in the cache repository, as described below:

"Sets when to expire the file in the internal Nginx cache, if one is used."

Does this not mean that when I set the cookie and pass "X-Accel-Expires: 0" it expires the cache for the non-logged-in user too, for that cache key? A new cache entry will then have to be created, right?

Should I go with the "Cache-Control: max-age=0" approach?

-------------- next part --------------
An HTML attachment was scrubbed...
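A compact sketch of the cookie-based split being asked about, serving the cache only to visitors without the session cookie; the zone name "one" and the upstream name "backend" are placeholders, and the cookie name is the one mentioned earlier in the thread:

    location / {
        proxy_cache      one;
        proxy_cache_key  "$scheme$host$request_uri";

        # visitors with the session cookie neither read from nor write to the cache
        proxy_cache_bypass $cookie_company_sessionid;
        proxy_no_cache     $cookie_company_sessionid;

        proxy_pass http://backend;
    }

Anonymous traffic keeps hitting the cached copy, while logged-in users always go straight to the backend.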
URL: From emakyol at gmail.com Thu Feb 9 08:30:36 2012 From: emakyol at gmail.com (Engin Akyol) Date: Thu, 9 Feb 2012 02:30:36 -0600 Subject: Perl interaction with HTTP Proxy: Message-ID: Hey guys, I'm trying to write a couple of perl modules for my nginx server that is acting as an HTTP Proxy. What I'd like to do is use Perl to perform some (quick) checks and then return an error code, but I'm noticing that if I execute any perl code before the config block that engages the http_proxy, the perl error code is ignored. If I execute the perl block after the http_proxy, then the connection isn't proxy passed at all. Is this a config issue on my part or is this normal behavior with regards to perl. If it's normal behavior, how can I use perl modules to perform checks with http_proxy features enabled? /Engin From mdounin at mdounin.ru Thu Feb 9 08:37:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Feb 2012 12:37:35 +0400 Subject: Perl interaction with HTTP Proxy: In-Reply-To: References: Message-ID: <20120209083735.GC67687@mdounin.ru> Hello! On Thu, Feb 09, 2012 at 02:30:36AM -0600, Engin Akyol wrote: > Hey guys, I'm trying to write a couple of perl modules for my nginx > server that is acting as an HTTP Proxy. > > What I'd like to do is use Perl to perform some (quick) checks and > then return an error code, but I'm noticing that if I execute any perl > code before the config block that engages the http_proxy, the perl > error code is ignored. If I execute the perl block after the > http_proxy, then the connection isn't proxy passed at all. Is this a > config issue on my part or is this normal behavior with regards to > perl. If it's normal behavior, how can I use perl modules to perform > checks with http_proxy features enabled? Both perl and proxy set exclusive location handlers, and they can't be used together in the same location. If you need to check requests with perl and then proxy_pass somewhere, you have to use $r->internal_redirect(uri) in perl to pass processing to another location with proxy_pass configured. Maxim Dounin From emakyol at gmail.com Thu Feb 9 08:40:20 2012 From: emakyol at gmail.com (Engin Akyol) Date: Thu, 9 Feb 2012 02:40:20 -0600 Subject: Perl interaction with HTTP Proxy: In-Reply-To: <20120209083735.GC67687@mdounin.ru> References: <20120209083735.GC67687@mdounin.ru> Message-ID: Awesome! Thank you. Going to experiment for a few more hours with this. /Engin On Thu, Feb 9, 2012 at 2:37 AM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 09, 2012 at 02:30:36AM -0600, Engin Akyol wrote: > >> Hey guys, I'm trying to write a couple of perl modules for my nginx >> server that is acting as an HTTP Proxy. >> >> What I'd like to do is use Perl to perform some (quick) checks and >> then return an error code, but I'm noticing that if I execute any perl >> code before the config block that engages the http_proxy, the perl >> error code is ignored. ?If I execute the perl block after the >> http_proxy, then the connection isn't proxy passed at all. ?Is this a >> config issue on my part or is this normal behavior with regards to >> perl. ?If it's normal behavior, how can I use perl modules to perform >> checks with http_proxy features enabled? > > Both perl and proxy set exclusive location handlers, and they > can't be used together in the same location. > > If you need to check requests with perl and then proxy_pass > somewhere, you have to use $r->internal_redirect(uri) in perl to > pass processing to another location with proxy_pass configured. 
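A minimal sketch of that internal_redirect pattern; the checker.pm module name, the token check, the /proxied prefix and the 127.0.0.1:8080 backend are all assumptions made up for the example:

    # perl/lib/checker.pm
    package checker;
    use nginx;

    sub handler {
        my $r = shift;

        # hypothetical quick check: refuse requests without a "token" argument
        return HTTP_FORBIDDEN unless defined $r->args && $r->args =~ /token=/;

        # hand the request to the proxying location; the redirect is
        # performed after this handler returns
        $r->internal_redirect("/proxied" . $r->uri);
        return OK;
    }

    1;
    __END__

    # nginx.conf (relevant parts)
    perl_modules perl/lib;
    perl_require checker.pm;

    server {
        location / {
            perl checker::handler;
        }

        location /proxied/ {
            internal;
            proxy_pass http://127.0.0.1:8080/;
        }
    }

The perl handler only makes the decision; the actual proxying is done by the location the request is internally redirected to.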
> > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Feb 9 08:49:50 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Feb 2012 12:49:50 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: Message-ID: <20120209084950.GD67687@mdounin.ru> Hello! On Thu, Feb 09, 2012 at 12:34:33PM +0530, Quintin Par wrote: > Picking up an old thread for caching > > http://nginx.2469901.n2.nabble.com/Help-cache-or-not-by-cookie-td3124462.html > > Igor talks about caching by > > ?No, currently the single way is: > > 1) add the cookie in proxy_cache_key > > proxy_cache_key "http://cacheserver$request_uri $cookie_name"; > > 2) add "X-Accel-Expires: 0" in response with the cookie.? > > But from my understanding of ?*X-Accel-Expires? *it expires the cache in > the cache repository as given below > > ?Sets when to expire the file in the internal Nginx cache, if one is used.? > > Does this not mean that when I set cookie and pass ?X-Accel-Expires: 0? it > expires the cache for the non logged in user too, for that cache key? A new > cache entry will then have to be created, right? No. X-Accel-Expires will prevent the particular response from being cached, but won't delete existing cache entry. > > Should I go with ?Cache-Control: max-age=0? approach? The only difference between "X-Accel-Expires: 0" and "Cache-Contro: max-age=0" is that the former won't be passed to client. As for the use-case in general (i.e. only use cache for users without cookie), in recent versions it is enough to do proxy_cache_bypass $cookie_name; proxy_no_cache $cookie_name; I.e.: don't respond from cache to users with cookie (proxy_cache_bypass), don't store to cache responses for users with cookie (proxy_no_cache). Moreover, responses with Set-Cookie won't be cached by default, too. So basically just placing the above into config is enough, no further changes to a backend code required. Maxim Dounin From nginxyz at mail.ru Thu Feb 9 09:15:04 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 09 Feb 2012 13:15:04 +0400 Subject: 504 SSL_do_handshake() failed In-Reply-To: <38358fccf41f6376d3da1883cf020b66.NginxMailingListEnglish@forum.nginx.org> References: <91ab5977ecf38fd8a5d7cbeace409050.NginxMailingListEnglish@forum.nginx.org> <38358fccf41f6376d3da1883cf020b66.NginxMailingListEnglish@forum.nginx.org> Message-ID: 09 ??????? 2012, 10:10 ?? "zealot83" : > Does [OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008] have any problem with > nginx-0.8.54? There have been a couple of OpenSSL-related bugfixes since, so try upgrading both OpenSSL and nginx first. You should definitely upgrade OpenSSL because the version you're using has a serious vulnerability: http://www.cvedetails.com/cve/CVE-2010-4180/ Max From nginx-forum at nginx.us Thu Feb 9 11:55:51 2012 From: nginx-forum at nginx.us (keef) Date: Thu, 09 Feb 2012 06:55:51 -0500 Subject: Unable log out of some applications. Message-ID: <3a8c5e247cd99c27f9bc657ae5dec59d.NginxMailingListEnglish@forum.nginx.org> We have a at least a two web applications (Zenoss & Sharepoint) behind nginx that we are unable to log out off! We have many other websites (>200) also behind nginx that don't have this issue. Before posting the configuration I was wondering if anyone could take a guess at what the problem might be ? 
Thanks Keith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222147,222147#msg-222147 From nginxyz at mail.ru Thu Feb 9 14:34:03 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 09 Feb 2012 18:34:03 +0400 Subject: Unable log out of some applications. In-Reply-To: <3a8c5e247cd99c27f9bc657ae5dec59d.NginxMailingListEnglish@forum.nginx.org> References: <3a8c5e247cd99c27f9bc657ae5dec59d.NginxMailingListEnglish@forum.nginx.org> Message-ID: 09 ??????? 2012, 15:56 ?? "keef" : > We have a at least a two web applications (Zenoss & Sharepoint) behind > nginx that we are unable to log out off! > > We have many other websites (>200) also behind nginx that don't have > this issue. Before posting the configuration I was wondering if anyone > could take a guess at what the problem might be ? Your nginx configuration is probably missing the following directives: proxy_cache_bypass $http_authorization; proxy_no_cache $http_authorization; If you have those set, clear your browser's cache and try using different browsers. If that doesn't help, post your complete nginx.conf. Max From nginx-forum at nginx.us Thu Feb 9 16:28:49 2012 From: nginx-forum at nginx.us (keef) Date: Thu, 09 Feb 2012 11:28:49 -0500 Subject: Unable log out of some applications. In-Reply-To: References: Message-ID: Thanks for the Info Max. We're stuck with a nginx v0.8..x.x that doesn't includes the"proxy_cache_bypass" directive so are not able to test your solution justnow. I have a workaround just now and will try the directive again once we upgrade nginx. Thanks Keith Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222147,222158#msg-222158 From nginxyz at mail.ru Thu Feb 9 17:40:23 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 09 Feb 2012 21:40:23 +0400 Subject: Unable log out of some applications. In-Reply-To: References: Message-ID: 09 ??????? 2012, 20:29 ?? "keef" : > Thanks for the Info Max. We're stuck with a nginx v0.8..x.x that doesn't > includes the"proxy_cache_bypass" directive so are not able to test your > solution justnow. I have a workaround just now and will try the > directive again once we upgrade nginx. You're welcome. You can try the following workaround in the meantime: location / { recursive_error_pages on; error_page 409 = @no_cache; if ($http_authorization) { return 409; } proxy_cache zone; proxy_pass $scheme://backend; } location @no_cache { proxy_pass $scheme://backend; } Max From baishen.lists at gmail.com Thu Feb 9 17:42:47 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 12:42:47 -0500 Subject: Binding nginx to a single interface Message-ID: I have two nics on my server. I have nginx set to listen on one of them using "listen 10.1.2.3 80" However, it keeps listening on the other ip as well. So i'm unable to start anything else on port 80 on that interface. Any ideas what could be wrong? Everything I'm finding says that the listen command is all I needed to do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian at navarro.at Thu Feb 9 17:45:02 2012 From: adrian at navarro.at (=?utf-8?Q?Adri=C3=A1n_Navarro?=) Date: Thu, 9 Feb 2012 18:45:02 +0100 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: listen ip:port; or listen ip port;? actually in that case it would be listening on default for the ip AND for the port as an address (weird scenario). version? -a. On Thursday 9 de February de 2012 at 18:42, Bai Shen wrote: > I have two nics on my server. 
I have nginx set to listen on one of them using "listen 10.1.2.3 80" However, it keeps listening on the other ip as well. So i'm unable to start anything else on port 80 on that interface. > > Any ideas what could be wrong? Everything I'm finding says that the listen command is all I needed to do. > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baishen.lists at gmail.com Thu Feb 9 17:49:16 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 12:49:16 -0500 Subject: Config file order Message-ID: I know the server order in the config file makes a difference. Does the order of the statements inside the location tags make a difference? Do I need to set the headers before doing a proxy_pass or does it matter? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From baishen.lists at gmail.com Thu Feb 9 17:50:13 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 12:50:13 -0500 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: listen ip:port; I realized after I sent the email that I'd typoed that. It's the newest version, 1.0.11 On Thu, Feb 9, 2012 at 12:45 PM, Adri?n Navarro wrote: > listen ip:port; or listen ip port;? > > actually in that case it would be listening on default for the ip AND for > the port as an address (weird scenario). > > version? > > -a. > > On Thursday 9 de February de 2012 at 18:42, Bai Shen wrote: > > I have two nics on my server. I have nginx set to listen on one of them > using "listen 10.1.2.3 80" However, it keeps listening on the other ip as > well. So i'm unable to start anything else on port 80 on that interface. > > Any ideas what could be wrong? Everything I'm finding says that the > listen command is all I needed to do. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian at navarro.at Thu Feb 9 17:51:40 2012 From: adrian at navarro.at (=?utf-8?Q?Adri=C3=A1n_Navarro?=) Date: Thu, 9 Feb 2012 18:51:40 +0100 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: then grep your whole config and included files/config folders for any listen statements, there must be something in there. i'm currently using the same version with multiple interfaces and different servers on each one and it's working fine. -a. On Thursday 9 de February de 2012 at 18:50, Bai Shen wrote: > listen ip:port; > > I realized after I sent the email that I'd typoed that. It's the newest version, 1.0.11 > > On Thu, Feb 9, 2012 at 12:45 PM, Adri?n Navarro wrote: > > listen ip:port; or listen ip port;? > > > > actually in that case it would be listening on default for the ip AND for the port as an address (weird scenario). > > > > version? > > > > -a. > > > > On Thursday 9 de February de 2012 at 18:42, Bai Shen wrote: > > > > > > > I have two nics on my server. I have nginx set to listen on one of them using "listen 10.1.2.3 80" However, it keeps listening on the other ip as well. 
So i'm unable to start anything else on port 80 on that interface. > > > > > > Any ideas what could be wrong? Everything I'm finding says that the listen command is all I needed to do. > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org (mailto:nginx at nginx.org) > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org (mailto:nginx at nginx.org) > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Feb 9 17:54:33 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Feb 2012 21:54:33 +0400 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: <20120209175433.GO67687@mdounin.ru> Hello! On Thu, Feb 09, 2012 at 12:42:47PM -0500, Bai Shen wrote: > I have two nics on my server. I have nginx set to listen on one of them > using "listen 10.1.2.3 80" However, it keeps listening on the other ip as > well. So i'm unable to start anything else on port 80 on that interface. > > Any ideas what could be wrong? Everything I'm finding says that the listen > command is all I needed to do. Make sure all server{} blocks in your config have the above listen explicitly specified. If server{} block have no listen directives at all, nginx will use "listen 80" by default, and this may be a culprit. Maxim Dounin From mdounin at mdounin.ru Thu Feb 9 17:59:02 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Feb 2012 21:59:02 +0400 Subject: Config file order In-Reply-To: References: Message-ID: <20120209175902.GP67687@mdounin.ru> Hello! On Thu, Feb 09, 2012 at 12:49:16PM -0500, Bai Shen wrote: > I know the server order in the config file makes a difference. Does the > order of the statements inside the location tags make a difference? > > Do I need to set the headers before doing a proxy_pass or does it matter? In most cases order doesn't matter. There are some special cases though, most notably regexp locations and rewrite module directives, where order matters. In such special cases order of processing is usually explicitly documented. As for proxy_pass and proxy_set_header - order doesn't matter. Maxim Dounin From baishen.lists at gmail.com Thu Feb 9 18:03:13 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 13:03:13 -0500 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: I'm using the default nginx.conf file with no changes. I've added my own config file in the conf.d directory. Inside that I have two server {} blocks. They both have "listen 10.1.2.3:80" and different server_names. Yet when I telnet to 10.1.2.4 on port 80, I get a response from nginx. On Thu, Feb 9, 2012 at 12:51 PM, Adri?n Navarro wrote: > then grep your whole config and included files/config folders for any > listen statements, there must be something in there. i'm currently using > the same version with multiple interfaces and different servers on each one > and it's working fine. > > -a. > > On Thursday 9 de February de 2012 at 18:50, Bai Shen wrote: > > listen ip:port; > > I realized after I sent the email that I'd typoed that. 
It's the newest > version, 1.0.11 > > On Thu, Feb 9, 2012 at 12:45 PM, Adri?n Navarro wrote: > > listen ip:port; or listen ip port;? > > actually in that case it would be listening on default for the ip AND for > the port as an address (weird scenario). > > version? > > -a. > > On Thursday 9 de February de 2012 at 18:42, Bai Shen wrote: > > I have two nics on my server. I have nginx set to listen on one of them > using "listen 10.1.2.3 80" However, it keeps listening on the other ip as > well. So i'm unable to start anything else on port 80 on that interface. > > Any ideas what could be wrong? Everything I'm finding says that the > listen command is all I needed to do. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baishen.lists at gmail.com Thu Feb 9 18:06:14 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 13:06:14 -0500 Subject: Binding nginx to a single interface In-Reply-To: <20120209175433.GO67687@mdounin.ru> References: <20120209175433.GO67687@mdounin.ru> Message-ID: They do. However, I do have some weird behaviour. I have the server_name set to www.example.com and that correctly connects me to my web server. But if I type in 10.1.2.3, that connects me to my web server as well, even though I don't have a default rule setup. When I go to 10.1.2.4 I get a "Welcome to nginx!" page. On Thu, Feb 9, 2012 at 12:54 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 09, 2012 at 12:42:47PM -0500, Bai Shen wrote: > > > I have two nics on my server. I have nginx set to listen on one of them > > using "listen 10.1.2.3 80" However, it keeps listening on the other ip > as > > well. So i'm unable to start anything else on port 80 on that interface. > > > > Any ideas what could be wrong? Everything I'm finding says that the > listen > > command is all I needed to do. > > Make sure all server{} blocks in your config have the above listen > explicitly specified. If server{} block have no listen directives > at all, nginx will use "listen 80" by default, and this may be a > culprit. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baishen.lists at gmail.com Thu Feb 9 18:07:17 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 13:07:17 -0500 Subject: Config file order In-Reply-To: <20120209175902.GP67687@mdounin.ru> References: <20120209175902.GP67687@mdounin.ru> Message-ID: That's what I thought, but I wasn't sure. I'm trying to get the remote IP to show up in my web server logs and wanted to make sure nginx was sending it. On Thu, Feb 9, 2012 at 12:59 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 09, 2012 at 12:49:16PM -0500, Bai Shen wrote: > > > I know the server order in the config file makes a difference. 
Does the > > order of the statements inside the location tags make a difference? > > > > Do I need to set the headers before doing a proxy_pass or does it matter? > > In most cases order doesn't matter. There are some special cases > though, most notably regexp locations and rewrite module > directives, where order matters. In such special cases order of > processing is usually explicitly documented. > > As for proxy_pass and proxy_set_header - order doesn't matter. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baishen.lists at gmail.com Thu Feb 9 18:12:57 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Thu, 9 Feb 2012 13:12:57 -0500 Subject: Binding nginx to a single interface In-Reply-To: References: Message-ID: Found it. There was a default.conf file in there that had "listen 80" Thanks. On Thu, Feb 9, 2012 at 12:51 PM, Adri?n Navarro wrote: > then grep your whole config and included files/config folders for any > listen statements, there must be something in there. i'm currently using > the same version with multiple interfaces and different servers on each one > and it's working fine. > > -a. > > On Thursday 9 de February de 2012 at 18:50, Bai Shen wrote: > > listen ip:port; > > I realized after I sent the email that I'd typoed that. It's the newest > version, 1.0.11 > > On Thu, Feb 9, 2012 at 12:45 PM, Adri?n Navarro wrote: > > listen ip:port; or listen ip port;? > > actually in that case it would be listening on default for the ip AND for > the port as an address (weird scenario). > > version? > > -a. > > On Thursday 9 de February de 2012 at 18:42, Bai Shen wrote: > > I have two nics on my server. I have nginx set to listen on one of them > using "listen 10.1.2.3 80" However, it keeps listening on the other ip as > well. So i'm unable to start anything else on port 80 on that interface. > > Any ideas what could be wrong? Everything I'm finding says that the > listen command is all I needed to do. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 9 22:43:48 2012 From: nginx-forum at nginx.us (jwilson) Date: Thu, 09 Feb 2012 17:43:48 -0500 Subject: Adding "Link" Header Message-ID: <57ccc441f060818b40825d7690fc7dd2.NginxMailingListEnglish@forum.nginx.org> To avoid SEO issues on my subdomains, I need to add a "Link" header to support Google's canonical header. The format is as follows (from http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html): Link: ; rel="canonical" So, the add_header command would need to reference $uri, but it also needs a semi-colon and double quotes. How can I add this header to my config? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222185,222185#msg-222185 From appa at perusio.net Thu Feb 9 23:27:30 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Thu, 09 Feb 2012 23:27:30 +0000 Subject: Adding "Link" Header In-Reply-To: <57ccc441f060818b40825d7690fc7dd2.NginxMailingListEnglish@forum.nginx.org> References: <57ccc441f060818b40825d7690fc7dd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87hayzsi3h.wl%appa@perusio.net> On 9 Fev 2012 22h43 WET, nginx-forum at nginx.us wrote: > To avoid SEO issues on my subdomains, I need to add a "Link" header > to support Google's canonical header. The format is as follows > (from > http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html): > > Link: ; rel="canonical" > > So, the add_header command would need to reference $uri, but it also > needs a semi-colon and double quotes. How can I add this header to > my config? add_header Link "<$scheme://$http_host$request_uri>; rel=\"canonical\""; --- appa From nginxyz at mail.ru Thu Feb 9 23:31:46 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 10 Feb 2012 03:31:46 +0400 Subject: Adding "Link" Header In-Reply-To: <57ccc441f060818b40825d7690fc7dd2.NginxMailingListEnglish@forum.nginx.org> References: <57ccc441f060818b40825d7690fc7dd2.NginxMailingListEnglish@forum.nginx.org> Message-ID: 10 ??????? 2012, 02:44 ?? "jwilson" : > To avoid SEO issues on my subdomains, I need to add a "Link" header to > support Google's canonical header. The format is as follows (from > http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html): > > Link: ; rel="canonical" > > So, the add_header command would need to reference $uri, but it also > needs a semi-colon and double quotes. How can I add this header to my > config? location ~* ^/(.*)(\.pdf)$ { add_header Link "<$scheme://$host:$server_port/$1.html>; rel=\"canonical\""; } You can use $http_host instead of $host:$server_port, but if you get a request without the Host header, $http_host will be empty. Max From mdounin at mdounin.ru Thu Feb 9 23:34:20 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 03:34:20 +0400 Subject: Binding nginx to a single interface In-Reply-To: References: <20120209175433.GO67687@mdounin.ru> Message-ID: <20120209233420.GT67687@mdounin.ru> Hello! On Thu, Feb 09, 2012 at 01:06:14PM -0500, Bai Shen wrote: > They do. > > However, I do have some weird behaviour. I have the server_name set to > www.example.com and that correctly connects me to my web server. But if I > type in 10.1.2.3, that connects me to my web server as well, even though I > don't have a default rule setup. > > When I go to 10.1.2.4 I get a "Welcome to nginx!" page. When selecting server{} based on server_name nginx will look only through server{} blocks which have the listen socket defined. That is, if you have server { listen 80; server_name default; } server { listen 10.1.2.3:80; server_name example.com; } nginx will never consider "default" server if connection comes to 10.1.2.3:80. All requests to 10.1.2.3:80 will end up in "example.com" server as it's the only server defined for the listen socket in question. More details may be found here: http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers and in docs. 
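For illustration only (this block is not part of Maxim's reply): if requests that arrive by bare IP should not fall into the example.com server, the same listen socket needs its own explicitly defined default server. A minimal sketch, using the addresses from this thread and a placeholder document root:

server {
    listen 10.1.2.3:80 default_server;
    server_name _;
    return 444;    # close the connection without sending a response
}

server {
    listen 10.1.2.3:80;
    server_name example.com;
    root /var/www/example.com;    # placeholder document root
}

With such a catch-all in place, a request for http://10.1.2.3/ is answered by the first block, and only requests carrying Host: example.com reach the second one.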
Maxim Dounin From mikemc-nginx at terabytemedia.com Thu Feb 9 23:37:19 2012 From: mikemc-nginx at terabytemedia.com (Michael McCallister) Date: Thu, 09 Feb 2012 16:37:19 -0700 Subject: filter out headers for fastcgi cache Message-ID: <4F3458AF.9080507@terabytemedia.com> Greetings, I would like to propose that an additional feature for nginx be considered related to fastcgi caching: the ability to specify headers that will not be stored in cache, but will be sent to the client when the cached version is created (sent to the client responsible for creating the cached copy). If some solution already exists providing this functionality, my apologies, I was not able to track one down - currently assuming one does not exist. Here is one scenario where such an option would be useful (I am sure there are others): A typical scenario where fastcgi caching can be employed for performance benefits is when the default version of a page is loaded. By "default", I mean the client has no prior session data which might result in unique session specific request elements. In the case of PHP, the presence of session data is typically determined by checking for the presence of a "PHPSESSID" cookie. So if this cookie does not exist, then it can be assumed there is no session - an optimal time to create a cached version of the page in many scenarios. However, many PHP apps/sites/etc. also create a session in the event one does not exist (a behavior I assume is not specific to PHP) - meaning the the response typically contains a Set-Cookie: PHPSESSID.... header. Nginx's default behavior is not to cache any page with a Set-Cookie header, and that makes sense as a default - but lets assume for this example that fastcgi_ignore_headers Set-Cookie; is in effect and the cached version of the default version of the page gets created. The problem here is that the cached version created also has the Set-Cookie header cached as well - which causes problems for obvious reasons (hands out the same session ID/cookie for all users). Ideally, we could specify to cache everything except the specified headers - Set-Cookie in this example. Am I the only one who would find this option useful or are there others? If this would be found to be useful by others and there is consensus that such an addition is called for by those with the resources to implement it, I am not sure if it makes sense to strip the headers prior storage in cache or when reading from cache - probably whichever is easier. Thoughts? Michael From appa at perusio.net Thu Feb 9 23:45:57 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Thu, 09 Feb 2012 23:45:57 +0000 Subject: filter out headers for fastcgi cache In-Reply-To: <4F3458AF.9080507@terabytemedia.com> References: <4F3458AF.9080507@terabytemedia.com> Message-ID: <87fwejsh8q.wl%appa@perusio.net> On 9 Fev 2012 23h37 WET, mikemc-nginx at terabytemedia.com wrote: > Greetings, > > I would like to propose that an additional feature for nginx be > considered related to fastcgi caching: the ability to specify > headers that will not be stored in cache, but will be sent to the > client when the cached version is created (sent to the client > responsible for creating the cached copy). If some solution already > exists providing this functionality, my apologies, I was not able to > track one down - currently assuming one does not exist. 
> > Here is one scenario where such an option would be useful (I am sure > there are others): > > A typical scenario where fastcgi caching can be employed for > performance benefits is when the default version of a page is > loaded. By "default", I mean the client has no prior session data > which might result in unique session specific request elements. In > the case of PHP, the presence of session data is typically > determined by checking for the presence of a "PHPSESSID" cookie. So > if this cookie does not exist, then it can be assumed there is no > session - an optimal time to create a cached version of the page in > many scenarios. However, many PHP apps/sites/etc. also create a > session in the event one does not exist (a behavior I assume is not > specific to PHP) - meaning the the response typically contains a > Set-Cookie: PHPSESSID.... header. Nginx's default behavior is not > to cache any page with a Set-Cookie header, and that makes sense as > a default - but lets assume for this example that > fastcgi_ignore_headers Set-Cookie; is in effect and the cached > version of the default version of the page gets created. The > problem here is that the cached version created also has the > Set-Cookie header cached as well - which causes problems for obvious > reasons (hands out the same session ID/cookie for all users). > Ideally, we could specify to cache everything except the specified > headers - Set-Cookie in this example. > > Am I the only one who would find this option useful or are there > others? > > If this would be found to be useful by others and there is consensus > that such an addition is called for by those with the resources to > implement it, I am not sure if it makes sense to strip the headers > prior storage in cache or when reading from cache - probably > whichever is easier. > > Thoughts? > Michael http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_pass_header Also, you can capture the session ID and use it in fastcgi_cache_key. Like this, for example: ## Set a cache_uid variable for authenticated users. map $http_cookie $cache_uid { default nil; # hommage to Lisp :) ~SESS[[:alnum:]]+=(?[[:alnum:]]+) $session_id; } fastcgi_cache_key $cache_uid@$host$request_uri; --- appa From nginx-forum at nginx.us Fri Feb 10 04:25:37 2012 From: nginx-forum at nginx.us (rishabh) Date: Thu, 09 Feb 2012 23:25:37 -0500 Subject: Sending Traffic to another Server/Port/IP asynchronously Message-ID: <99a0ae2be9961e64f44ac416ec1ede38.NginxMailingListEnglish@forum.nginx.org> I have nginx as the HTTP server. I wanted to write a module which asynchronously send all the traffic to another IP or PORT where my analytics server will be running. I dont want to create any lag in serving HTTP requests. Is there any existing module which I can use as a base and modify to meet my requirements. Any help would be greatly appreciated. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222196,222196#msg-222196 From zzz at zzz.org.ua Fri Feb 10 04:51:01 2012 From: zzz at zzz.org.ua (Alexandr Gomoliako) Date: Fri, 10 Feb 2012 06:51:01 +0200 Subject: Sending Traffic to another Server/Port/IP asynchronously In-Reply-To: <99a0ae2be9961e64f44ac416ec1ede38.NginxMailingListEnglish@forum.nginx.org> References: <99a0ae2be9961e64f44ac416ec1ede38.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Feb 10, 2012 at 6:25 AM, rishabh wrote: > I have nginx as the HTTP server. 
> > I wanted to write a module which asynchronously send all the traffic to > another IP or PORT where my analytics server will be running. I dont > want to create any lag in serving HTTP requests. > > Is there any existing module which I can use as a base and modify to > meet my requirements. > > Any help would be greatly appreciated. > > Thanks Try post_action From mikemc-nginx at terabytemedia.com Fri Feb 10 05:20:26 2012 From: mikemc-nginx at terabytemedia.com (Michael McCallister) Date: Thu, 09 Feb 2012 22:20:26 -0700 Subject: filter out headers for fastcgi cache In-Reply-To: <87fwejsh8q.wl%appa@perusio.net> References: <4F3458AF.9080507@terabytemedia.com> <87fwejsh8q.wl%appa@perusio.net> Message-ID: <4F34A91A.9030402@terabytemedia.com> Ant?nio P. P. Almeida wrote, On 02/09/2012 04:45 PM: > On 9 Fev 2012 23h37 WET, mikemc-nginx at terabytemedia.com wrote: > >> Greetings, >> >> I would like to propose that an additional feature for nginx be >> considered related to fastcgi caching: the ability to specify >> headers that will not be stored in cache, but will be sent to the >> client when the cached version is created (sent to the client >> responsible for creating the cached copy). If some solution already >> exists providing this functionality, my apologies, I was not able to >> track one down - currently assuming one does not exist. >> >> Here is one scenario where such an option would be useful (I am sure >> there are others): >> >> A typical scenario where fastcgi caching can be employed for >> performance benefits is when the default version of a page is >> loaded. By "default", I mean the client has no prior session data >> which might result in unique session specific request elements. In >> the case of PHP, the presence of session data is typically >> determined by checking for the presence of a "PHPSESSID" cookie. So >> if this cookie does not exist, then it can be assumed there is no >> session - an optimal time to create a cached version of the page in >> many scenarios. However, many PHP apps/sites/etc. also create a >> session in the event one does not exist (a behavior I assume is not >> specific to PHP) - meaning the the response typically contains a >> Set-Cookie: PHPSESSID.... header. Nginx's default behavior is not >> to cache any page with a Set-Cookie header, and that makes sense as >> a default - but lets assume for this example that >> fastcgi_ignore_headers Set-Cookie; is in effect and the cached >> version of the default version of the page gets created. The >> problem here is that the cached version created also has the >> Set-Cookie header cached as well - which causes problems for obvious >> reasons (hands out the same session ID/cookie for all users). >> Ideally, we could specify to cache everything except the specified >> headers - Set-Cookie in this example. >> >> Am I the only one who would find this option useful or are there >> others? >> >> If this would be found to be useful by others and there is consensus >> that such an addition is called for by those with the resources to >> implement it, I am not sure if it makes sense to strip the headers >> prior storage in cache or when reading from cache - probably >> whichever is easier. >> >> Thoughts? >> Michael > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_pass_header > > Also, you can capture the session ID and use it in > fastcgi_cache_key. Like this, for example: > > ## Set a cache_uid variable for authenticated users. 
> map $http_cookie $cache_uid { > default nil; # hommage to Lisp :) > ~SESS[[:alnum:]]+=(?[[:alnum:]]+) $session_id; > } > > fastcgi_cache_key $cache_uid@$host$request_uri; > > --- appa How does fastcgi_pass_header help in this instance? From the docs, that directive "explicitly allows to pass named headers to the client." In the example, I need to drop the Set-Cookie header from the cached copy. As far as setting up cache files for each session, that is less useful for me (but a cool feature). The main benefit in caching for me is for users that are not logged in (or doing anything that modifies site content) since they represent around 75% of page loads and there is no user specific variance in the pages themselves. I would suspect many other sites follow a similar usage pattern. From nginx-forum at nginx.us Fri Feb 10 06:20:15 2012 From: nginx-forum at nginx.us (rishabh) Date: Fri, 10 Feb 2012 01:20:15 -0500 Subject: Sending Traffic to another Server/Port/IP asynchronously In-Reply-To: <99a0ae2be9961e64f44ac416ec1ede38.NginxMailingListEnglish@forum.nginx.org> References: <99a0ae2be9961e64f44ac416ec1ede38.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks, post_action works like a charm. Just one problem. Nothing is getting logged access_log if i use post_action ! whats the co-relation ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222196,222200#msg-222200 From nginx-forum at nginx.us Fri Feb 10 06:29:26 2012 From: nginx-forum at nginx.us (rishabh) Date: Fri, 10 Feb 2012 01:29:26 -0500 Subject: nginx ignores access_log directive when post_action specifie In-Reply-To: <4634c1b132a50aae01b4a3d85d7e0a19@ruby-forum.com> References: <4634c1b132a50aae01b4a3d85d7e0a19@ruby-forum.com> Message-ID: <23a541feafb0510a8becf4598d1a8003.NginxMailingListEnglish@forum.nginx.org> I am facing the same problem, is there any update on this issue ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,92464,222201#msg-222201 From mdounin at mdounin.ru Fri Feb 10 07:31:21 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 11:31:21 +0400 Subject: nginx ignores access_log directive when post_action specifie In-Reply-To: <23a541feafb0510a8becf4598d1a8003.NginxMailingListEnglish@forum.nginx.org> References: <4634c1b132a50aae01b4a3d85d7e0a19@ruby-forum.com> <23a541feafb0510a8becf4598d1a8003.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120210073121.GU67687@mdounin.ru> Hello! On Fri, Feb 10, 2012 at 01:29:26AM -0500, rishabh wrote: > I am facing the same problem, is there any update on this issue ? Logging happens in a location where request completes, and with post_action it's the location where post_action processed. So the problem looks like configuration one. Maxim Dounin From quintinpar at gmail.com Fri Feb 10 08:34:39 2012 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 10 Feb 2012 14:04:39 +0530 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <20120209084950.GD67687@mdounin.ru> References: <20120209084950.GD67687@mdounin.ru> Message-ID: On Thu, Feb 9, 2012 at 2:19 PM, Maxim Dounin wrote: > Hello! 
> > On Thu, Feb 09, 2012 at 12:34:33PM +0530, Quintin Par wrote: > > > Picking up an old thread for caching > > > > > http://nginx.2469901.n2.nabble.com/Help-cache-or-not-by-cookie-td3124462.html > > > > Igor talks about caching by > > > > ?No, currently the single way is: > > > > 1) add the cookie in proxy_cache_key > > > > proxy_cache_key "http://cacheserver$request_uri $cookie_name"; > > > > 2) add "X-Accel-Expires: 0" in response with the cookie.? > > > > But from my understanding of ?*X-Accel-Expires? *it expires the cache in > > the cache repository as given below > > > > ?Sets when to expire the file in the internal Nginx cache, if one is > used.? > > > > Does this not mean that when I set cookie and pass ?X-Accel-Expires: 0? > it > > expires the cache for the non logged in user too, for that cache key? A > new > > cache entry will then have to be created, right? > > No. X-Accel-Expires will prevent the particular response from > being cached, but won't delete existing cache entry. > So what should I do to delete a particular cache entry? > > > > > Should I go with ?Cache-Control: max-age=0? approach? > > The only difference between "X-Accel-Expires: 0" and > "Cache-Contro: max-age=0" is that the former won't be passed to > client. > > As for the use-case in general (i.e. only use cache for users > without cookie), in recent versions it is enough to do > > proxy_cache_bypass $cookie_name; > proxy_no_cache $cookie_name; > > I.e.: don't respond from cache to users with cookie > (proxy_cache_bypass), don't store to cache responses for users > with cookie (proxy_no_cache). > > Moreover, responses with Set-Cookie won't be cached by default, > too. So basically just placing the above into config is enough, > no further changes to a backend code required. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr.sikora at frickle.com Fri Feb 10 08:49:32 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Fri, 10 Feb 2012 09:49:32 +0100 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <20120209084950.GD67687@mdounin.ru> Message-ID: <27F4AD6A344C4B7193174121EF830336@Desktop> Hi, > So what should I do to delete a particular cache entry? Valentin already pointed you to the solution 2 days ago: http://labs.frickle.com/nginx_ngx_cache_purge/ Best regards, Piotr Sikora < piotr.sikora at frickle.com > From nginx-forum at nginx.us Fri Feb 10 09:01:16 2012 From: nginx-forum at nginx.us (trojan2748) Date: Fri, 10 Feb 2012 04:01:16 -0500 Subject: 400 response HTTP code logged when using Haproxy 'option ssl-hello-chk' Message-ID: <2528168156978e31362c6e20fbed9f54.NginxMailingListEnglish@forum.nginx.org> Hello, We're using haproxy to load balance SSL between two nginx servers. We're using 'option ssl-hello-chk' for health monitoring in haproxy's config. The problem is it's flooding our logs with 400 errors every 2 seconds. Is there a way to stop this? 
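A minimal sketch (not part of the original post) of one way to keep such health checks out of the access log; Maxim Dounin suggests essentially this later in the archive, and the location name here is arbitrary, chosen only so the error_page target and the location match:

error_page 400 /nolog;

location = /nolog {
    access_log off;
    return 400;
}

Placed in the default server{} block that the haproxy checks reach, the 400 is finalized in a location where access logging is switched off, so the checks no longer show up in the log.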
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222212,222212#msg-222212 From nginx-forum at nginx.us Fri Feb 10 09:33:14 2012 From: nginx-forum at nginx.us (cyberchriss) Date: Fri, 10 Feb 2012 04:33:14 -0500 Subject: Problem with Client SSL certificates Message-ID: <1fa717830c343681076f37debf4182a8.NginxMailingListEnglish@forum.nginx.org> I tried to configure nginx with client certificates, but only get 400 Bad Request (No required SSL certificate was sent) Here is my Setup: Nginx 0.7.65 on Ubuntu 10.4.3 with php5-fmp 5.3.2-1 I set up a vhost configuration for testing these client certificates: server { listen 443; ssl on; ssl_session_timeout 30m; server_name test.myserver.lan; error_log /var/log/nginx/debug.log debug; ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; ssl_client_certificate /etc/nginx/certs/ca.crt; ssl_verify_client on; location / { root /var/www/test; fastcgi_pass unix:/tmp/php.sock; fastcgi_param SCRIPT_FILENAME /var/www/test/test.php; fastcgi_param VERIFIED $ssl_client_verify; fastcgi_param DN $ssl_client_s_dn; include fastcgi_params; } } For testing I generated a selfsigned server key and server cert. Later in production this server certificate should be changed to a trusted certificate from an official CA-Authority. This part is working fine. The Problem began with the client certificates. Here are the steps I did: 1. Generate a root ca (only for the client certificates) > openssl genrsa -des3 -out ca.key 4096 > openssl req -new -x509 -days 365 -key ca.key -out ca.crt 2. Generate the self signed client certificate >openssl genrsa -des3 -out client.key 4096 >openssl req -new -key client.key -out client.csr >openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt 3. 
Convert to PKCS >openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12 4.Import the client.p12 to Firefox I got 400 Bad Request (No required SSL certificate was sent) Serverlog says: 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_do_handshake: -1 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_get_error: 2 2012/02/10 10:13:23 [debug] 30297#0: *8819 post event 08D3FE40 2012/02/10 10:13:23 [debug] 30297#0: *8819 delete posted event 08D3FE40 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL handshake handler: 0 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_do_handshake: 1 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL: TLSv1, cipher: "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http process request line 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_read: -1 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_get_error: 2 2012/02/10 10:13:23 [debug] 30297#0: *8819 post event 08D3FE40 2012/02/10 10:13:23 [debug] 30297#0: *8819 delete posted event 08D3FE40 2012/02/10 10:13:23 [debug] 30297#0: *8819 http process request line 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_read: 434 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_read: -1 2012/02/10 10:13:23 [debug] 30297#0: *8819 SSL_get_error: 2 2012/02/10 10:13:23 [debug] 30297#0: *8819 http request line: "GET / HTTP/1.1" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http uri: "/" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http args: "" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http exten: "" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http process request header line 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Host: test.myserver.lan" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Accept-Encoding: gzip, deflate" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Connection: keep-alive" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Cookie: PHPSESSID=5nn4bei3plftd5r12790kk12n1" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header: "Cache-Control: max-age=0" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http header done 2012/02/10 10:13:23 [info] 30297#0: *8819 client sent no required SSL certificate while reading client request headers, client: 150.102.1.193, server: test.myserver.lan, request: "GET / HTTP/1.1", host: "test.myserver.lan" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http finalize request: 496, "/?" 1 2012/02/10 10:13:23 [debug] 30297#0: *8819 event timer del: 12: 1720368829 2012/02/10 10:13:23 [debug] 30297#0: *8819 http special response: 496, "/?" 2012/02/10 10:13:23 [debug] 30297#0: *8819 http set discard body 2012/02/10 10:13:23 [debug] 30297#0: *8819 HTTP/1.1 400 Bad Request Server: nginx/0.7.65 Date: Fri, 10 Feb 2012 09:13:23 GMT Content-Type: text/html Content-Length: 253 Connection: close To see a little more output from client side: >curl -v -s -k https://test.myserver.lan * About to connect() to port 443 (#0) * Trying 150.102.5.20... 
connected * Connected to test.myserver.lan (150.102.5.20) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using DHE-RSA-AES256-SHA * Server certificate: * subject: C=DE; ST=RLP; L=MyCity; O=My company; OU=My Company; CN=test.myserver.lan; emailAddress=admin at myserver.lan * start date: 2012-02-06 10:15:29 GMT * expire date: 2013-02-05 10:15:29 GMT * common name: test.myserver.lan * issuer: C=DE; ST=RLP; L=MyCity; O=My Company; OU=My Company; CN=test.myserver.lan; emailAddress=admin at myserver.lan * SSL certificate verify result: self signed certificate (18), continuing anyway. > GET / HTTP/1.1 > User-Agent: curl/7.19.7 (i486-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: test.myserver.lan > Accept: */* > < HTTP/1.1 400 Bad Request < Server: nginx/0.7.65 < Date: Fri, 10 Feb 2012 09:19:00 GMT < Content-Type: text/html < Content-Length: 253 < Connection: close < 400 No required SSL certificate was sent

400 Bad Request
No required SSL certificate was sent
nginx/0.7.65
* Closing connection #0 * SSLv3, TLS alert, Client hello (1): When I interprete the log files right, there is only a SSL handshake for the server cert authentication?!?!? Has anybody a hint where is the mistake? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222213,222213#msg-222213 From nginx-forum at nginx.us Fri Feb 10 10:05:45 2012 From: nginx-forum at nginx.us (radarek) Date: Fri, 10 Feb 2012 05:05:45 -0500 Subject: File is not refreshed when using proxy_cache_use_stale Message-ID: <42c616fc650b9d3f228f378a7fb89e68.NginxMailingListEnglish@forum.nginx.org> I have following config: location /images { proxy_pass http://my.host; proxy_redirect off; proxy_cache static-cache; proxy_cache_valid any 5m; proxy_cache_valid 404 5m; proxy_cache_use_stale updating invalid_header error timeout http_502; } The idea is that if backend server reply with error/timeout/invalid header it will return stale version and cache it for 5m. Generally it works ok but sometimes I have situation when one of my proxy cache servers (I have 3 of them) returns stale version and will not refresh file until I delete nginx cache files. Here is output from nginx -V: $ nginx -V nginx version: nginx/0.7.65 TLS SNI support enabled configure arguments: --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug --with-http_stub_status_module --with-http_flv_module --with-http_ssl_module --with-http_dav_module --with-http_gzip_static_module --with-http_realip_module --with-mail --with-mail_ssl_module --with-ipv6 --add-module=/build/buildd/nginx-0.7.65/modules/nginx-upstream-fair Is it known problem or I am doing something wrong? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222215,222215#msg-222215 From nginx-forum at nginx.us Fri Feb 10 13:32:13 2012 From: nginx-forum at nginx.us (zzor123) Date: Fri, 10 Feb 2012 08:32:13 -0500 Subject: How to Autstart Nginx - php-cgi.exe In-Reply-To: References: Message-ID: <63c0d95ad60b5b3273f503800a3c3c3b.NginxMailingListEnglish@forum.nginx.org> NGXMP^^ http://ncafe.kr/ngxmp.php Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222084,222224#msg-222224 From mdounin at mdounin.ru Fri Feb 10 13:45:28 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 17:45:28 +0400 Subject: File is not refreshed when using proxy_cache_use_stale In-Reply-To: <42c616fc650b9d3f228f378a7fb89e68.NginxMailingListEnglish@forum.nginx.org> References: <42c616fc650b9d3f228f378a7fb89e68.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120210134528.GX67687@mdounin.ru> Hello! On Fri, Feb 10, 2012 at 05:05:45AM -0500, radarek wrote: > I have following config: > > location /images { > proxy_pass http://my.host; > proxy_redirect off; > proxy_cache static-cache; > proxy_cache_valid any 5m; > proxy_cache_valid 404 5m; > proxy_cache_use_stale updating invalid_header error timeout http_502; > } > > The idea is that if backend server reply with error/timeout/invalid > header it will return stale version and cache it for 5m. Generally it > works ok but sometimes I have situation when one of my proxy cache > servers (I have 3 of them) returns stale version and will not refresh > file until I delete nginx cache files. 
Here is output from nginx -V: > > $ nginx -V > nginx version: nginx/0.7.65 > TLS SNI support enabled > configure arguments: --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/nginx.lock > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug > --with-http_stub_status_module --with-http_flv_module > --with-http_ssl_module --with-http_dav_module > --with-http_gzip_static_module --with-http_realip_module --with-mail > --with-mail_ssl_module --with-ipv6 > --add-module=/build/buildd/nginx-0.7.65/modules/nginx-upstream-fair > > Is it known problem or I am doing something wrong? The above may happen if nginx worker process dies, leaving cache node marked as being updated. Try looking into your logs to see if this is the case. That is: if you have alerts about "worker process exited on signal ..." - you have a problem, and this is expected result. In any case it's good idea to upgrade from long unsupported 0.7.x branch to at least stable 1.0.x. Maxim Dounin From mdounin at mdounin.ru Fri Feb 10 13:48:41 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 17:48:41 +0400 Subject: 400 response HTTP code logged when using Haproxy 'option ssl-hello-chk' In-Reply-To: <2528168156978e31362c6e20fbed9f54.NginxMailingListEnglish@forum.nginx.org> References: <2528168156978e31362c6e20fbed9f54.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120210134841.GY67687@mdounin.ru> Hello! On Fri, Feb 10, 2012 at 04:01:16AM -0500, trojan2748 wrote: > Hello, > We're using haproxy to load balance SSL between two nginx servers. > We're using 'option ssl-hello-chk' for health monitoring in haproxy's > config. The problem is it's flooding our logs with 400 errors every 2 > seconds. Is there a way to stop this? If you don't want to see 400 errors, you may try something like this (in a default server block): error_page 400 /nolog; location = /error_400 { access_log off; return 400; } Maxim Dounin From appa at perusio.net Fri Feb 10 14:16:42 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 10 Feb 2012 14:16:42 +0000 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <20120209084950.GD67687@mdounin.ru> Message-ID: <87ehu2srhx.wl%appa@perusio.net> On 10 Fev 2012 08h34 WET, quintinpar at gmail.com wrote: > On Thu, Feb 9, 2012 at 2:19 PM, Maxim Dounin > wrote: > >> Hello! >> >> On Thu, Feb 09, 2012 at 12:34:33PM +0530, Quintin Par wrote: >> >>> Picking up an old thread for caching >>> >>> >> http://nginx.2469901.n2.nabble.com/Help-cache-or-not-by-cookie-td3124462.html >>> >>> Igor talks about caching by >>> >>> ?No, currently the single way is: >>> >>> 1) add the cookie in proxy_cache_key >>> >>> proxy_cache_key "http://cacheserver$request_uri $cookie_name"; >>> >>> 2) add "X-Accel-Expires: 0" in response with the cookie.? >>> >>> But from my understanding of ?*X-Accel-Expires? *it expires the >>> cache in the cache repository as given below >>> >>> ?Sets when to expire the file in the internal Nginx cache, if one >>> is >> used.? >>> >>> Does this not mean that when I set cookie and pass >>> ?X-Accel-Expires: 0? >> it >>> expires the cache for the non logged in user too, for that cache >>> key? A >> new >>> cache entry will then have to be created, right? >> >> No. 
X-Accel-Expires will prevent the particular response from >> being cached, but won't delete existing cache entry. >> > > So what should I do to delete a particular cache entry? The easiest and fastest way is to delete the file. You need to build a function in a programming language (PHP, Lua, Ruby) that computes the key: In PHP: $filename = md5('example.com/foobar'); This is for a key like: $host$request_uri Then depending on the structure if your cache directory each directory is named by taking a number of characters from the end of the string: For the above: $filename is 536d75ab92a8e8778916a971cb1fb4e0 If the cache dir structure is something like: proxy_cache_path /var/cache/nginx/microcache levels=1:2 keys_zone=microcache:5M max_size=1G; ^^^ This means that the path to the file containing the cached page will be: /var/cache/nginx/microcache/0/4e/536d75ab92a8e8778916a971cb1fb4e0 where: 0 is substr($filename, -1, 1) ^ 1 from levels=1:2 4e is substr($filename, -3, 2) ^ 2 from levels=1:2 HTH, --- appa From appa at perusio.net Fri Feb 10 14:36:18 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 10 Feb 2012 14:36:18 +0000 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <87ehu2srhx.wl%appa@perusio.net> References: <20120209084950.GD67687@mdounin.ru> <87ehu2srhx.wl%appa@perusio.net> Message-ID: <87d39msql9.wl%appa@perusio.net> On 10 Fev 2012 14h16 WET, appa at perusio.net wrote: Replying to myself I just remembered that having an embedded variable holding the value of fastcgi_cache_key/proxy_cache_key would be awesome. Since that way using, for example, the embedded Lua module we could easily manage the cache from within Nginx without having to do additional HTTP requests. The Nginx cache is simple: just a bunch of files. So IMHO the cache purging should be as simple as possible and not having to issue additional requests for purging. I guess this is a feature request :) Thx, --- appa From nginx-forum at nginx.us Fri Feb 10 14:45:40 2012 From: nginx-forum at nginx.us (radarek) Date: Fri, 10 Feb 2012 09:45:40 -0500 Subject: File is not refreshed when using proxy_cache_use_stale In-Reply-To: <20120210134528.GX67687@mdounin.ru> References: <20120210134528.GX67687@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > The above may happen if nginx worker process dies, > leaving cache > node marked as being updated. Try looking into > your logs to see > if this is the case. That is: if you have alerts > about "worker > process exited on signal ..." - you have a > problem, and this is > expected result. I looked in the error log and found exactly what you suggested: 2011/08/09 08:50:15 [alert] 25485#0: cache loader process 25488 exited on signal 9 2011/10/16 17:16:36 [alert] 25509#0: worker process 25510 exited on signal 6 2012/01/16 12:48:13 [alert] 9183#0: worker process 9184 exited on signal 6 2012/01/16 14:19:24 [alert] 9183#0: worker process 11897 exited on signal 6 Shouldn't it be treated as a bug? Do you know what thing could caused sending signal 6 to worker process? > > In any case it's good idea to upgrade from long > unsupported 0.7.x > branch to at least stable 1.0.x. > Yeah, I will probably do it in near future. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222215,222234#msg-222234 From nginxyz at mail.ru Fri Feb 10 14:45:41 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 10 Feb 2012 18:45:41 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <20120209084950.GD67687@mdounin.ru> Message-ID: 10 ??????? 2012, 12:35 ?? Quintin Par : > On Thu, Feb 9, 2012 at 2:19 PM, Maxim Dounin wrote: > > On Thu, Feb 09, 2012 at 12:34:33PM +0530, Quintin Par wrote: > > > Picking up an old thread for caching > > > > > > > > > http://nginx.2469901.n2.nabble.com/Help-cache-or-not-by-cookie-td3124462.html > > > > > > Igor talks about caching by > > > > > > ?No, currently the single way is: > > > > > > 1) add the cookie in proxy_cache_key > > > > > > proxy_cache_key "http://cacheserver$request_uri $cookie_name"; > > > > > > 2) add "X-Accel-Expires: 0" in response with the cookie.? > > > > > > But from my understanding of ?*X-Accel-Expires? *it expires the cache in > > > the cache repository as given below > > > > > > ?Sets when to expire the file in the internal Nginx cache, if one is > > used.? > > > > > > Does this not mean that when I set cookie and pass ?X-Accel-Expires: 0? > > it > > > expires the cache for the non logged in user too, for that cache key? A > > new > > > cache entry will then have to be created, right? > > > > No. X-Accel-Expires will prevent the particular response from > > being cached, but won't delete existing cache entry. > > > > So what should I do to delete a particular cache entry? If you really want to avoid nginx_ngx_cache_purge at all costs, you'll have to use something like this to force every POST method request to refresh the cache: proxy_cache zone; # Responses for "/blog" and "/blog?action=edit" requests # are cached under the SAME key proxy_cache_key $scheme$host$uri; # Turn caching on for POST method request responses as well proxy_cache_methods GET HEAD POST; location / { recursive_error_pages on; error_page 409 = @post_and_refresh_cache; # Redirect POST method requests to @post_and_refresh_cache if ($request_method = POST) { return 409; } # Process GET and HEAD method requests by first checking # for a match in the cache, and if that fails, by passing # the request to the backend proxy_pass $scheme://backend; } location @post_and_refresh_cache { proxy_cache_bypass "Never check the cache!"; # Pass the POST method request directly to the backend # and store the response in the cache proxy_pass $scheme://backend; } This generic approach is based on the assumptions that the content on the backend is posted/modified through the same frontend that is proxying and caching it, and that you know how to prevent session-specific information from being leaked through the cache. You've been going on about this for two days now, but you still haven't managed to explain HOW you check whether the content has changed. It's obvious you do know the content has changed, but without sharing the details on how and where the content on the backend is updated, you can't expect a more specific solution. Max From appa at perusio.net Fri Feb 10 15:05:57 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Fri, 10 Feb 2012 15:05:57 +0000 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <20120209084950.GD67687@mdounin.ru> Message-ID: <87bop6sp7u.wl%appa@perusio.net> On 10 Fev 2012 14h45 WET, nginxyz at mail.ru wrote: > If you really want to avoid nginx_ngx_cache_purge at all costs, > you'll have to use something like this to force every POST > method request to refresh the cache: > > proxy_cache zone; > > # Responses for "/blog" and "/blog?action=edit" requests > # are cached under the SAME key > proxy_cache_key $scheme$host$uri; > > # Turn caching on for POST method request responses as well > proxy_cache_methods GET HEAD POST; > > location / { > recursive_error_pages on; > error_page 409 = @post_and_refresh_cache; > > # Redirect POST method requests to @post_and_refresh_cache > if ($request_method = POST) { return 409; } > > # Process GET and HEAD method requests by first checking > # for a match in the cache, and if that fails, by passing > # the request to the backend > proxy_pass $scheme://backend; > } > > location @post_and_refresh_cache { > > proxy_cache_bypass "Never check the cache!"; > > # Pass the POST method request directly to the backend > # and store the response in the cache > proxy_pass $scheme://backend; > } > > This generic approach is based on the assumptions that the > content on the backend is posted/modified through the same > frontend that is proxying and caching it, and that you know > how to prevent session-specific information from being leaked > through the cache. > > You've been going on about this for two days now, but you still > haven't managed to explain HOW you check whether the > content has changed. It's obvious you do know the content > has changed, but without sharing the details on how and > where the content on the backend is updated, you can't > expect a more specific solution. > Why all this? The default behavior is for POSTed requests never to be cached. http://wiki.nginx.org/HttpProxyModule#proxy_cache_methods http://wiki.nginx.org/HttpFcgiModule#fastcgi_cache_methods BTW: Valentin, Maxim, Igor, Andrei, et al The proxy_cache_methods and fastcgi_cache_methods are missing on the official docs at http://nginx.org/en/docs/http/ngx_http_proxy_module.html and http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html respectively. --- appa From andrew at nginx.com Fri Feb 10 15:27:36 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 10 Feb 2012 19:27:36 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <87bop6sp7u.wl%appa@perusio.net> References: <20120209084950.GD67687@mdounin.ru> <87bop6sp7u.wl%appa@perusio.net> Message-ID: <43E0B9CE-8EBB-48B5-8B5A-5F27A2CF44BC@nginx.com> On Feb 10, 2012, at 7:05 PM, Ant?nio P. P. Almeida wrote: > On 10 Fev 2012 14h45 WET, nginxyz at mail.ru wrote: > >> [..] >> You've been going on about this for two days now, but you still >> haven't managed to explain HOW you check whether the >> content has changed. It's obvious you do know the content >> has changed, but without sharing the details on how and >> where the content on the backend is updated, you can't >> expect a more specific solution. >> > > Why all this? The default behavior is for POSTed requests never to be > cached. > > http://wiki.nginx.org/HttpProxyModule#proxy_cache_methods > > http://wiki.nginx.org/HttpFcgiModule#fastcgi_cache_methods > > BTW: Valentin, Maxim, Igor, Andrei, et al Right, thanks for spotting this one. 
We're currently working on sync'ing it up it all up and making the entire thing less confusing. > The proxy_cache_methods and fastcgi_cache_methods are missing on the > official docs at http://nginx.org/en/docs/http/ngx_http_proxy_module.html > and http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html > respectively. > > --- appa From nginx-forum at nginx.us Fri Feb 10 15:31:33 2012 From: nginx-forum at nginx.us (LetsPlay) Date: Fri, 10 Feb 2012 10:31:33 -0500 Subject: Help with debugging - stall on response size is a symptom but what is the cause? Message-ID: <8758fff093b4402245650ec5f4d3c453.NginxMailingListEnglish@forum.nginx.org> Hi, I'm new to nginx and am trying to use it as a front end web server to a java-based Play framework web app, via reverse proxy. But I'm getting strange timeout behaviour and am not sure how to proceed with debugging. The full details are at http://groups.google.com/group/play-framework/browse_thread/thread/c853a579d6015d28/e6cf2a53333aa921?lnk=gst&q=nginx#e6cf2a53333aa921 I've been comparing debug-mode access logs for pass and fail cases, but cannot fathom what nginx is doing. How could I proceed with debugging? I'm working on the assumption that its my system configuration which is causing the problem - I haven't seen any other reference to this kind of problem. I'm running a box with Ubuntu 11.10 x86_64, and nginx was installed from the Ubuntu repo. More than happy to provide logs. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222240,222240#msg-222240 From mdounin at mdounin.ru Fri Feb 10 15:33:49 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 19:33:49 +0400 Subject: File is not refreshed when using proxy_cache_use_stale In-Reply-To: References: <20120210134528.GX67687@mdounin.ru> Message-ID: <20120210153349.GB67687@mdounin.ru> Hello! On Fri, Feb 10, 2012 at 09:45:40AM -0500, radarek wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > The above may happen if nginx worker process dies, > > leaving cache > > node marked as being updated. Try looking into > > your logs to see > > if this is the case. That is: if you have alerts > > about "worker > > process exited on signal ..." - you have a > > problem, and this is > > expected result. > > I looked in the error log and found exactly what you suggested: > 2011/08/09 08:50:15 [alert] 25485#0: cache loader process 25488 exited > on signal 9 > 2011/10/16 17:16:36 [alert] 25509#0: worker process 25510 exited on > signal 6 > 2012/01/16 12:48:13 [alert] 9183#0: worker process 9184 exited on signal > 6 > 2012/01/16 14:19:24 [alert] 9183#0: worker process 11897 exited on > signal 6 > > Shouldn't it be treated as a bug? Do you know what thing could caused > sending signal 6 to worker process? There are lots of bugs fixed since 0.7.65 which may cause this. Signal 6 is likely generated by libc due to detected memory corruption. > > In any case it's good idea to upgrade from long > > unsupported 0.7.x > > branch to at least stable 1.0.x. > > > > Yeah, I will probably do it in near future. Yes, please. Maxim Dounin From nginxyz at mail.ru Fri Feb 10 15:41:17 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 10 Feb 2012 19:41:17 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <87bop6sp7u.wlappa@perusio.net> References: <20120209084950.GD67687@mdounin.ru> <87bop6sp7u.wlappa@perusio.net> Message-ID: 10 ??????? 2012, 19:06 ?? Ant?nio P. P. 
Almeida : > On 10 Fev 2012 14h45 WET, nginxyz at mail.ru wrote: > > > If you really want to avoid nginx_ngx_cache_purge at all costs, > > you'll have to use something like this to force every POST > > method request to refresh the cache: > > > > proxy_cache zone; > > > > # Responses for "/blog" and "/blog?action=edit" requests > > # are cached under the SAME key > > proxy_cache_key $scheme$host$uri; > > > > # Turn caching on for POST method request responses as well > > proxy_cache_methods GET HEAD POST; > > > > location / { > > recursive_error_pages on; > > error_page 409 = @post_and_refresh_cache; > > > > # Redirect POST method requests to @post_and_refresh_cache > > if ($request_method = POST) { return 409; } > > > > # Process GET and HEAD method requests by first checking > > # for a match in the cache, and if that fails, by passing > > # the request to the backend > > proxy_pass $scheme://backend; > > } > > > > location @post_and_refresh_cache { > > > > proxy_cache_bypass "Never check the cache!"; > > > > # Pass the POST method request directly to the backend > > # and store the response in the cache > > proxy_pass $scheme://backend; > > } > > > > This generic approach is based on the assumptions that the > > content on the backend is posted/modified through the same > > frontend that is proxying and caching it, and that you know > > how to prevent session-specific information from being leaked > > through the cache. > > > > You've been going on about this for two days now, but you still > > haven't managed to explain HOW you check whether the > > content has changed. It's obvious you do know the content > > has changed, but without sharing the details on how and > > where the content on the backend is updated, you can't > > expect a more specific solution. > > > > Why all this? The default behavior is for POSTed requests never to be > cached. That's exactly why all that is necessary - to force each POST method request response to be cached, and to refresh the cache immediately. Let's say a blog entry is updated (through a POST method request to "blog?edit=action") - then all those clients who are reading the blog (through GET method requests) will be getting the old, stale version of "/blog" from the cache until the cache validity expires. With my approach, on the other hand, the cache is refreshed immediately on every POST method request (because GET method requests for "/blog" and POST method requests for "/blog?action=edit" are cached under the same key), so all clients get the latest version without the cache validity having to be reduced. Max From mdounin at mdounin.ru Fri Feb 10 15:49:42 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2012 19:49:42 +0400 Subject: Help with debugging - stall on response size is a symptom but what is the cause? In-Reply-To: <8758fff093b4402245650ec5f4d3c453.NginxMailingListEnglish@forum.nginx.org> References: <8758fff093b4402245650ec5f4d3c453.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120210154942.GC67687@mdounin.ru> Hello! On Fri, Feb 10, 2012 at 10:31:33AM -0500, LetsPlay wrote: > Hi, I'm new to nginx and am trying to use it as a front end web server > to a java-based Play framework web app, via reverse proxy. But I'm > getting strange timeout behaviour and am not sure how to proceed with > debugging. 
> > The full details are at > http://groups.google.com/group/play-framework/browse_thread/thread/c853a579d6015d28/e6cf2a53333aa921?lnk=gst&q=nginx#e6cf2a53333aa921 > > I've been comparing debug-mode access logs for pass and fail cases, but > cannot fathom what nginx is doing. > > How could I proceed with debugging? I'm working on the assumption that > its my system configuration which is causing the problem - I haven't > seen any other reference to this kind of problem. I'm running a box > with Ubuntu 11.10 x86_64, and nginx was installed from the Ubuntu repo. > More than happy to provide logs. Please provide debug log and nginx -V output. Maxim Dounin From appa at perusio.net Fri Feb 10 16:42:51 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 10 Feb 2012 16:42:51 +0000 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <20120209084950.GD67687@mdounin.ru> <87bop6sp7u.wlappa@perusio.net> Message-ID: <87aa4qskqc.wl%appa@perusio.net> On 10 Fev 2012 15h41 WET, nginxyz at mail.ru wrote: > > 10 ??????? 2012, 19:06 ?? Ant?nio P. P. Almeida : >> On 10 Fev 2012 14h45 WET, nginxyz at mail.ru wrote: >> >>> If you really want to avoid nginx_ngx_cache_purge at all costs, >>> you'll have to use something like this to force every POST >>> method request to refresh the cache: >>> >>> proxy_cache zone; >>> >>> # Responses for "/blog" and "/blog?action=edit" requests >>> # are cached under the SAME key >>> proxy_cache_key $scheme$host$uri; >>> >>> # Turn caching on for POST method request responses as well >>> proxy_cache_methods GET HEAD POST; You're saying the POST is a valid method for caching, i.e., POSTed requests get cached. >>> location / { >>> recursive_error_pages on; >>> error_page 409 = @post_and_refresh_cache; >>> >>> # Redirect POST method requests to @post_and_refresh_cache >>> if ($request_method = POST) { return 409; } >>> >>> # Process GET and HEAD method requests by first checking >>> # for a match in the cache, and if that fails, by passing >>> # the request to the backend >>> proxy_pass $scheme://backend; >>> } Here you do an internal redirect to @post_and_refresh_cache via error_page when the request method is a POST. >>> location @post_and_refresh_cache { >>> >>> proxy_cache_bypass "Never check the cache!"; >>> >>> # Pass the POST method request directly to the backend >>> # and store the response in the cache >>> proxy_pass $scheme://backend; >>> } Here you bypass the cache and proxy_pass the request to a backend. AFAICT you're replicating the *default* behaviour which is to not cache in the case of POST requests. Is it not? --- appa From guilherme.e at gmail.com Fri Feb 10 17:08:24 2012 From: guilherme.e at gmail.com (Guilherme) Date: Fri, 10 Feb 2012 15:08:24 -0200 Subject: .htaccess issues Message-ID: I'm starting to use nginx as a proxy/cache (apache back-end), and I've a problem regarding directories that uses .htaccess to restrict access by ip (allow,deny). The first time when a allowed IP access this area (i.e. /downloads), the object is cached, but when a unauthorized IP access the same dir, it gets the object from cache. Is there a way to deal with that? Regards, Guilherme -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Fri Feb 10 17:32:37 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Fri, 10 Feb 2012 17:32:37 +0000 Subject: .htaccess issues In-Reply-To: References: Message-ID: <877gzusife.wl%appa@perusio.net> On 10 Fev 2012 17h08 WET, guilherme.e at gmail.com wrote: > I'm starting to use nginx as a proxy/cache (apache back-end), and > I've a problem regarding directories that uses .htaccess to restrict > access by ip (allow,deny). > > The first time when a allowed IP access this area (i.e. /downloads), > the object is cached, but when a unauthorized IP access the same > dir, it gets the object from cache. > > Is there a way to deal with that? At the http level: geo $not_allowed { default 1; 127.0.0.1 0; 192.168.45.0/24 0; } Then add in the cache config: proxy_cache_bypass $not_allowed; --- appa From nginxyz at mail.ru Fri Feb 10 17:47:49 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 10 Feb 2012 21:47:49 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <87aa4qskqc.wlappa@perusio.net> References: <87aa4qskqc.wlappa@perusio.net> Message-ID: 10 ??????? 2012, 20:43 ?? Ant?nio P. P. Almeida : > On 10 Fev 2012 15h41 WET, nginxyz at mail.ru wrote: > > > > > 10 ??????? 2012, 19:06 ?? Ant?nio P. P. Almeida : > >> On 10 Fev 2012 14h45 WET, nginxyz at mail.ru wrote: > >>> # Turn caching on for POST method request responses as well > >>> proxy_cache_methods GET HEAD POST; > > You're saying the POST is a valid method for caching, i.e., POSTed > requests get cached. The proxy_cache_methods directive is used to select the request methods that will have their responses cached. By default, nginx caches only GET and HEAD request method responses, which is why I added the POST request method to the list - so that POST method request responses would be cached as well. This is necessary if you want to make sure the cache is instantly refreshed whenever you update something through a POST method request. Without this directive here, you would update the content on the backend, but the frontend would hold and serve the stale content from the cache until its validity expired instead of the new, updated content that's on the backend. > >>> location / { > >>> recursive_error_pages on; > >>> error_page 409 = @post_and_refresh_cache; > >>> > >>> # Redirect POST method requests to @post_and_refresh_cache > >>> if ($request_method = POST) { return 409; } > >>> > >>> # Process GET and HEAD method requests by first checking > >>> # for a match in the cache, and if that fails, by passing > >>> # the request to the backend > >>> proxy_pass $scheme://backend; > >>> } > > Here you do an internal redirect to @post_and_refresh_cache via > error_page when the request method is a POST. Exactly. > >>> location @post_and_refresh_cache { > >>> > >>> proxy_cache_bypass "Never check the cache!"; > >>> > >>> # Pass the POST method request directly to the backend > >>> # and store the response in the cache > >>> proxy_pass $scheme://backend; > >>> } > > Here you bypass the cache and proxy_pass the request to a backend. Exactly. > AFAICT you're replicating the *default* behaviour which is to not > cache in the case of POST requests. Is it not? The default behaviour is not to cache POST method request responses, but I turned caching of POST method request responses ON, so I had to make sure the cache is bypassed for POST method requests (but not for GET or HEAD method requests!). 
All POST method requests are passed on to the backend without checking for a match in the cache, but - CONTRARY to the default behavior - all POST method request responses are cached. Without the @post_and_refresh_cache location block and without the proxy_cache_bypass directive, nginx would check the cache and return the content from the cache (put there by a previous GET request response, for example) and would not pass the POST method request on to the backend, which is definitely not what you want in this case. Max From guilherme.e at gmail.com Fri Feb 10 17:57:31 2012 From: guilherme.e at gmail.com (Guilherme) Date: Fri, 10 Feb 2012 15:57:31 -0200 Subject: .htaccess issues In-Reply-To: <877gzusife.wl%appa@perusio.net> References: <877gzusife.wl%appa@perusio.net> Message-ID: Antonio, I'm using apache as a back-end, and I need to keep .htaccess files in apache, because this is a shared web hosting server and it's hosting ~ thousand websites. The problem is that nginx is serving content allowed just for some IPs, for everyone, after this content is cached. On Fri, Feb 10, 2012 at 3:32 PM, Ant?nio P. P. Almeida wrote: > On 10 Fev 2012 17h08 WET, guilherme.e at gmail.com wrote: > > > I'm starting to use nginx as a proxy/cache (apache back-end), and > > I've a problem regarding directories that uses .htaccess to restrict > > access by ip (allow,deny). > > > > The first time when a allowed IP access this area (i.e. /downloads), > > the object is cached, but when a unauthorized IP access the same > > dir, it gets the object from cache. > > > > Is there a way to deal with that? > > At the http level: > > geo $not_allowed { > default 1; > 127.0.0.1 0; > 192.168.45.0/24 0; > } > > Then add in the cache config: > > proxy_cache_bypass $not_allowed; > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Fri Feb 10 18:07:54 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 10 Feb 2012 18:07:54 +0000 Subject: .htaccess issues In-Reply-To: References: <877gzusife.wl%appa@perusio.net> Message-ID: <874nuysgsl.wl%appa@perusio.net> On 10 Fev 2012 17h57 WET, guilherme.e at gmail.com wrote: > Antonio, > > I'm using apache as a back-end, and I need to keep .htaccess files > in apache, because this is a shared web hosting server and it's > hosting ~ thousand websites. > > The problem is that nginx is serving content allowed just for some > IPs, for everyone, after this content is cached. Using the suggested config will fix that. It's backend agnostic. It supports Apache or whatever upstream you're passing to. Post your full config if you want a more specific suggestion. --- appa From adrian at navarro.at Fri Feb 10 18:08:52 2012 From: adrian at navarro.at (=?utf-8?B?QWRyacOhbiBOYXZhcnJv?=) Date: Fri, 10 Feb 2012 18:08:52 +0000 Subject: .htaccess issues In-Reply-To: References: <877gzusife.wl%appa@perusio.net> Message-ID: <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Then there is no way you can cache. Pass everything for these dirs (/downloads) and filter htaccess with x-forwarded-for (make sure its being sent by nginx). By pure logic, the only way there would be to set the IP whitelist in the nginx config and check it even for cached items hit. 
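For illustration, a minimal sketch of that idea in nginx terms - the whitelist lives in the nginx config and is evaluated before the cache is consulted, so it also covers cache hits. The cache zone name, the upstream name and the addresses below are placeholders, not taken from the original posts:

    geo $restricted_denied {
        default          1;
        192.0.2.10       0;   # allowed client (placeholder)
        198.51.100.0/24  0;   # allowed range (placeholder)
    }

    location /downloads/ {
        if ($restricted_denied) { return 403; }
        proxy_cache  zone;               # hypothetical cache zone
        proxy_pass   http://backend;     # hypothetical upstream
    }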
Sent from my BlackBerry -- Adri?n Navarro / (+34) 608 831 094 -----Original Message----- From: Guilherme Sender: nginx-bounces at nginx.orgDate: Fri, 10 Feb 2012 15:57:31 To: Reply-To: nginx at nginx.org Subject: Re: .htaccess issues _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From guilherme.e at gmail.com Fri Feb 10 19:40:25 2012 From: guilherme.e at gmail.com (Guilherme) Date: Fri, 10 Feb 2012 17:40:25 -0200 Subject: .htaccess issues In-Reply-To: <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> References: <877gzusife.wl%appa@perusio.net> <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Message-ID: Adri?n, This would fix the problem, but I don't know the directories that has a .htaccess file with allow/deny. Example: Scenario: nginx (cache/proxy) + back-end apache root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ ./ ../ .htaccess image.jpg root at srv1 [~]# cat /home/domain/public_html/restrictedimages/.htaccess allow from x.x.x.x deny from all In the first access (source IP: x.x.x.x) to http://domain.com/restrictedimages/image.jpg, nginx proxy request to apache and cache response. The problem comes in other request from other IP address different from x.x.x.x. Nginx deliver the objects from cache, even if the ip address is not authorized, because nginx doesn't understand .htaccess. I would like to bypass cache in this cases, maybe using proxy_cache_bypass, but I don't know how. Any idea? On Fri, Feb 10, 2012 at 4:08 PM, Adri?n Navarro wrote: > Then there is no way you can cache. Pass everything for these dirs > (/downloads) and filter htaccess with x-forwarded-for (make sure its being > sent by nginx). > > By pure logic, the only way there would be to set the IP whitelist in the > nginx config and check it even for cached items hit. > > > Sent from my BlackBerry > > -- > Adri?n Navarro / (+34) 608 831 094 > > -----Original Message----- > From: Guilherme > Sender: nginx-bounces at nginx.orgDate: Fri, 10 Feb 2012 15:57:31 > To: > Reply-To: nginx at nginx.org > Subject: Re: .htaccess issues > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslan at rockiesoft.com Fri Feb 10 19:51:22 2012 From: ruslan at rockiesoft.com (Ruslan Dautkhanov) Date: Fri, 10 Feb 2012 12:51:22 -0700 Subject: tomcat load balancing Message-ID: Hello, I didn't find nginx module for "fair" Tomcat load balancing. My understanding, that by default nginx load balancing round robins through a list of upstream servers. Is any way for load balance to take into account also current cpu load of that app.server? Thank you, Ruslan -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Fri Feb 10 19:58:55 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Fri, 10 Feb 2012 19:58:55 +0000 Subject: .htaccess issues In-Reply-To: References: <877gzusife.wl%appa@perusio.net> <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Message-ID: <8739aisbnk.wl%appa@perusio.net> On 10 Fev 2012 19h40 WET, guilherme.e at gmail.com wrote: > Adri?n, > > This would fix the problem, but I don't know the directories that > has a .htaccess file with allow/deny. > > Example: > > Scenario: nginx (cache/proxy) + back-end apache > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ ./ > ../ .htaccess image.jpg root at srv1 [~]# cat > /home/domain/public_html/restrictedimages/.htaccess allow from > x.x.x.x deny from all > > In the first access (source IP: x.x.x.x) to > http://domain.com/restrictedimages/image.jpg, nginx proxy request to > apache and cache response. The problem comes in other request from > other IP address different from x.x.x.x. Nginx deliver the objects > from cache, even if the ip address is not authorized, because nginx > doesn't understand .htaccess. > > I would like to bypass cache in this cases, maybe using > proxy_cache_bypass, but I don't know how. Any idea? I already gave you a suggestion. You just need to use a geo directive where you enumerate all the IPs that can **access**. AFAICT this foots the bill. No need to complicate it with headers being passed to the backend. --- appa From nginxyz at mail.ru Fri Feb 10 20:08:45 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 00:08:45 +0400 Subject: .htaccess issues In-Reply-To: References: <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Message-ID: 10 ??????? 2012, 23:40 ?? Guilherme : > This would fix the problem, but I don't know the directories that has a > .htaccess file with allow/deny. > > Example: > > Scenario: nginx (cache/proxy) + back-end apache > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ > ./ ../ .htaccess image.jpg > root at srv1 [~]# cat /home/domain/public_html/restrictedimages/.htaccess > allow from x.x.x.x > deny from all > > In the first access (source IP: x.x.x.x) to > http://domain.com/restrictedimages/image.jpg, nginx proxy request to apache > and cache response. The problem comes in other request from other IP > address different from x.x.x.x. Nginx deliver the objects from cache, even > if the ip address is not authorized, because nginx doesn't understand > .htaccess. > > I would like to bypass cache in this cases, maybe using proxy_cache_bypass, > but I don't know how. Any idea? You could use this: proxy_cache_key $scheme$remote_addr$host$$server_port$request_uri; This would make originating IP addresses ($remote_addr) part of the cache key, so different clients would get the correct responses from the cache just as if they were accessing the backend directly, there's no need to bypass the cache at all. Max From appa at perusio.net Fri Feb 10 20:15:45 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 10 Feb 2012 20:15:45 +0000 Subject: tomcat load balancing In-Reply-To: References: Message-ID: <871uq2savi.wl%appa@perusio.net> On 10 Fev 2012 19h51 WET, ruslan at rockiesoft.com wrote: > Hello, > > I didn't find nginx module for "fair" Tomcat load balancing. My > understanding, that by default nginx load balancing round robins > through a list of upstream servers. Is any way for load balance to > take into account also current cpu load of that app.server? 
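For context, the reply that follows points to the third-party upstream_fair module. Once that module has been compiled in, it is typically enabled with a single directive in the upstream block; a sketch only, with a placeholder upstream name and server addresses (the "fair" directive is provided by the module, not by stock nginx):

    upstream tomcat_backend {
        fair;                        # provided by the upstream_fair module
        server 192.0.2.11:8080;
        server 192.0.2.12:8080;
    }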
Not directly related to CPU load, AFAIK the closest to that is: http://nginx.localdomain.pl/wiki/UpstreamFair --- appa From nginx-forum at nginx.us Fri Feb 10 21:56:34 2012 From: nginx-forum at nginx.us (exvance) Date: Fri, 10 Feb 2012 16:56:34 -0500 Subject: problem compiling nginx from svn source with http_push_module on windows In-Reply-To: <100a114c6d3621c94384efbc6e545ee0.NginxMailingListEnglish@forum.nginx.org> References: <686703d8ec53728350cab4bb65bbfd20.NginxMailingListEnglish@forum.nginx.org> <100a114c6d3621c94384efbc6e545ee0.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sorry for posting on this old thread but I am really hoping that someone was able to get nginx and http_push_module compiled and running on Windows. If you did please let me know. If you didn't, what route did you go instead. I'm very interested! Thanks, Eric Posted at Nginx Forum: http://forum.nginx.org/read.php?2,212047,222266#msg-222266 From nginx at nginxuser.net Fri Feb 10 22:24:58 2012 From: nginx at nginxuser.net (Nginx User) Date: Sat, 11 Feb 2012 01:24:58 +0300 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <87aa4qskqc.wlappa@perusio.net> Message-ID: On 10 February 2012 20:47, Max wrote: > The default behaviour is not to cache POST method request responses, > but I turned caching of POST method request responses ON, so I had > to make sure the cache is bypassed for POST method requests (but > not for GET or HEAD method requests!). All POST method requests > are passed on to the backend without checking for a match in the > cache, but - CONTRARY to the default behavior - all POST method > request responses are cached. > > Without the @post_and_refresh_cache location block and without > the proxy_cache_bypass directive, nginx would check the cache > and return the content from the cache (put there by a previous > GET request response, for example) and would not pass the POST > method request on to the backend, which is definitely not what > you want in this case. Your config would do what the OP wanted but it would be nicer, I think, if the POST request simply invalidated the existing cached content and then for the content to be cached only if and when there is a GET request for that item. I.E., for the cache validity to start when there is a request to view the item. Also avoids using $uri as key which can lead to cache pollution with frontend controllers etc. An internal call to a proxy_purge location could do this ... maybe as a post_action. There will be no need for the proxy_bypass From nginxyz at mail.ru Fri Feb 10 22:54:42 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 02:54:42 +0400 Subject: filter out headers for fastcgi cache In-Reply-To: <4F3458AF.9080507@terabytemedia.com> References: <4F3458AF.9080507@terabytemedia.com> Message-ID: 10 ??????? 2012, 03:39 ?? Michael McCallister : > Greetings, > > I would like to propose that an additional feature for nginx be > considered related to fastcgi caching: the ability to specify headers > that will not be stored in cache, but will be sent to the client when > the cached version is created (sent to the client responsible for > creating the cached copy). If some solution already exists providing > this functionality, my apologies, I was not able to track one down - > currently assuming one does not exist. 
> > Here is one scenario where such an option would be useful (I am sure > there are others): > > A typical scenario where fastcgi caching can be employed for performance > benefits is when the default version of a page is loaded. By "default", > I mean the client has no prior session data which might result in unique > session specific request elements. In the case of PHP, the presence of > session data is typically determined by checking for the presence of a > "PHPSESSID" cookie. So if this cookie does not exist, then it can be > assumed there is no session - an optimal time to create a cached version > of the page in many scenarios. However, many PHP apps/sites/etc. also > create a session in the event one does not exist (a behavior I assume is > not specific to PHP) - meaning the the response typically contains a > Set-Cookie: PHPSESSID.... header. Nginx's default behavior is not to > cache any page with a Set-Cookie header, and that makes sense as a > default - but lets assume for this example that fastcgi_ignore_headers > Set-Cookie; is in effect and the cached version of the default version > of the page gets created. The problem here is that the cached version > created also has the Set-Cookie header cached as well - which causes > problems for obvious reasons (hands out the same session ID/cookie for > all users). Ideally, we could specify to cache everything except the > specified headers - Set-Cookie in this example. > > Am I the only one who would find this option useful or are there others? > > If this would be found to be useful by others and there is consensus > that such an addition is called for by those with the resources to > implement it, I am not sure if it makes sense to strip the headers prior > storage in cache or when reading from cache - probably whichever is easier. > > Thoughts? http://wiki.nginx.org/NginxHttpProxyModule#proxy_hide_header The proxy_hide_header directive does exactly what you described, the cookies get stored in the cache, but they are not passed back to the client, so this is all you'd need: # Cache responses containing the Set-Cookie header as well fastcgi_ignore_headers Set-Cookie; # Strip the Set-Cookie header from cached content # when passing cached content back to clients proxy_hide_header Set-Cookie; You could also include session cookies in the fastcgi_cache_key to make sure new users get the default cached content, while everyone else gets their session-specific cached content: fastcgi_cache_key "$cookie_PHPSESSID$scheme$request_method$host$server_port$request_uri"; Max From appa at perusio.net Sat Feb 11 00:06:49 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sat, 11 Feb 2012 00:06:49 +0000 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <87aa4qskqc.wlappa@perusio.net> Message-ID: <87vcneqlly.wl%appa@perusio.net> On 10 Fev 2012 17h47 WET, nginxyz at mail.ru wrote: > > The default behaviour is not to cache POST method request responses, > but I turned caching of POST method request responses ON, so I had > to make sure the cache is bypassed for POST method requests (but > not for GET or HEAD method requests!). All POST method requests > are passed on to the backend without checking for a match in the > cache, but - CONTRARY to the default behavior - all POST method > request responses are cached. 
> > Without the @post_and_refresh_cache location block and without > the proxy_cache_bypass directive, nginx would check the cache > and return the content from the cache (put there by a previous > GET request response, for example) and would not pass the POST > method request on to the backend, which is definitely not what > you want in this case. If what the OP wanted was to distinguish between cached POST and GET request responses then just add $request_method to the cache key. --- appa From quintinpar at gmail.com Sat Feb 11 14:49:06 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sat, 11 Feb 2012 20:19:06 +0530 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: <87vcneqlly.wl%appa@perusio.net> References: <87aa4qskqc.wlappa@perusio.net> <87vcneqlly.wl%appa@perusio.net> Message-ID: Sorry for being late to respond. There is so much that?s being discussed that does not reflect in the wiki ? people like me think the wiki is the canonical document. I like max?s approach but need to cache only in the next GET. Mostly because some XMLHTTP post request under the same location directive will invalidate and recache in this context. But that might not be a candidate to recache. E.g. storing page performance counters after a page?s been loaded. Can that be done with your approach? Just to invalidate? I might sound a bit na?ve here, but all the different proxy_cache mechanisms seems to get a bit confusing. To the reason why I don?t want nginx_ngx_cache_purge: Recompiling and delivering through the yum repo of a large organization is a cumbersome process and raises many flags. -Q On Sat, Feb 11, 2012 at 5:36 AM, Ant?nio P. P. Almeida wrote: > On 10 Fev 2012 17h47 WET, nginxyz at mail.ru wrote: > > > > > The default behaviour is not to cache POST method request responses, > > but I turned caching of POST method request responses ON, so I had > > to make sure the cache is bypassed for POST method requests (but > > not for GET or HEAD method requests!). All POST method requests > > are passed on to the backend without checking for a match in the > > cache, but - CONTRARY to the default behavior - all POST method > > request responses are cached. > > > > Without the @post_and_refresh_cache location block and without > > the proxy_cache_bypass directive, nginx would check the cache > > and return the content from the cache (put there by a previous > > GET request response, for example) and would not pass the POST > > method request on to the backend, which is definitely not what > > you want in this case. > > If what the OP wanted was to distinguish between cached POST and GET > request responses then just add $request_method to the cache key. > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Sat Feb 11 17:51:24 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 21:51:24 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: <87vcneqlly.wl%appa@perusio.net> Message-ID: 11 ??????? 2012, 18:49 ?? Quintin Par : > Sorry for being late to respond. > > There is so much that?s being discussed that does not reflect in the wiki ? > people like me think the wiki is the canonical document. > > I like max?s approach but need to cache only in the next GET. 
Mostly > because some XMLHTTP post request under the same location directive will > invalidate and recache in this context. But that might not be a candidate > to recache. E.g. storing page performance counters after a page?s been > loaded. > > Can that be done with your approach? Just to invalidate? No. AFAIK, there is no way to cause forced cache invalidation that would remove specific cache entries without using 3rd party modules, such as Piotr Sikora's excellent ngx_cache_purge. You should definitely include that module in your next scheduled nginx upgrade. In the meantime, you could use something like this to force the cache contents for "/furniture/desks/" to be refreshed by sending a request for "/refresh_cache/furniture/desks": # Responses for "/blog" and "/blog?action=edit" requests # are cached under the SAME key proxy_cache_key $scheme$host$uri; location ~ ^/refresh_cache/(.*)$ { # Change the key to match the existing key # of the cache entry you want to refresh proxy_cache_key $scheme$host/$1; proxy_cache_bypass "Never check the cache!"; # Pass the request directly to the backend # and store the response in the cache proxy_pass $scheme://backend/$1; } This is just meant to demonstrate the general approach. Max From nginxyz at mail.ru Sat Feb 11 18:12:13 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 22:12:13 +0400 Subject: Old thread: Cache for non-cookie users and fresh for cookie users In-Reply-To: References: Message-ID: 11 ??????? 2012, 02:25 ?? Nginx User : > On 10 February 2012 20:47, Max wrote: > > The default behaviour is not to cache POST method request responses, > > but I turned caching of POST method request responses ON, so I had > > to make sure the cache is bypassed for POST method requests (but > > not for GET or HEAD method requests!). All POST method requests > > are passed on to the backend without checking for a match in the > > cache, but - CONTRARY to the default behavior - all POST method > > request responses are cached. > > > > Without the @post_and_refresh_cache location block and without > > the proxy_cache_bypass directive, nginx would check the cache > > and return the content from the cache (put there by a previous > > GET request response, for example) and would not pass the POST > > method request on to the backend, which is definitely not what > > you want in this case. > > Your config would do what the OP wanted but it would be nicer, I > think, if the POST request simply invalidated the existing cached > content and then for the content to be cached only if and when there > is a GET request for that item. I.E., for the cache validity to start > when there is a request to view the item. > Also avoids using $uri as key which can lead to cache pollution with > frontend controllers etc. > > An internal call to a proxy_purge location could do this ... maybe as > a post_action. There will be no need for the proxy_bypass > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginxyz at mail.ru Sat Feb 11 18:15:35 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 22:15:35 +0400 Subject: Old thread: Cache for non-cookie users and fresh forcookie users In-Reply-To: References: Message-ID: 11 ??????? 2012, 02:25 ?? 
Nginx User : > On 10 February 2012 20:47, Max wrote: > > The default behaviour is not to cache POST method request responses, > > but I turned caching of POST method request responses ON, so I had > > to make sure the cache is bypassed for POST method requests (but > > not for GET or HEAD method requests!). All POST method requests > > are passed on to the backend without checking for a match in the > > cache, but - CONTRARY to the default behavior - all POST method > > request responses are cached. > > > > Without the @post_and_refresh_cache location block and without > > the proxy_cache_bypass directive, nginx would check the cache > > and return the content from the cache (put there by a previous > > GET request response, for example) and would not pass the POST > > method request on to the backend, which is definitely not what > > you want in this case. > > Your config would do what the OP wanted but it would be nicer, I > think, if the POST request simply invalidated the existing cached > content and then for the content to be cached only if and when there > is a GET request for that item. I.E., for the cache validity to start > when there is a request to view the item. > Also avoids using $uri as key which can lead to cache pollution with > frontend controllers etc. > > An internal call to a proxy_purge location could do this ... maybe as > a post_action. There will be no need for the proxy_bypass Show us a working configuration that does what you described using stock nginx (version 1.*.*) - without any 3rd party modules. Max From nginxyz at mail.ru Sat Feb 11 18:23:48 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sat, 11 Feb 2012 22:23:48 +0400 Subject: Old thread: Cache for non-cookie users andfresh for cookie users In-Reply-To: <87vcneqlly.wlappa@perusio.net> References: <87vcneqlly.wlappa@perusio.net> Message-ID: 11 ??????? 2012, 04:07 ?? Ant?nio P. P. Almeida : > On 10 Fev 2012 17h47 WET, nginxyz at mail.ru wrote: > > > > > The default behaviour is not to cache POST method request responses, > > but I turned caching of POST method request responses ON, so I had > > to make sure the cache is bypassed for POST method requests (but > > not for GET or HEAD method requests!). All POST method requests > > are passed on to the backend without checking for a match in the > > cache, but - CONTRARY to the default behavior - all POST method > > request responses are cached. > > > > Without the @post_and_refresh_cache location block and without > > the proxy_cache_bypass directive, nginx would check the cache > > and return the content from the cache (put there by a previous > > GET request response, for example) and would not pass the POST > > method request on to the backend, which is definitely not what > > you want in this case. > > If what the OP wanted was to distinguish between cached POST and GET > request responses then just add $request_method to the cache key. That's not what the OP wanted, and that's not what the approach I described does. The OP wants to be able to invalidate cache entries on demand without using 3rd party modules. Since, AFAIK, there's no way to do that without using 3rd party modules, the alternative is to make sure the cache is as fresh as possible. This can be done by making sure POST method requests refresh the appropriate cache entries automatically and/or by having special location blocks for refreshing specific cache entries on demand. 
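For completeness, how long entries stay fresh in stock nginx is otherwise governed by proxy_cache_valid (or by Cache-Control/Expires headers sent by the upstream), so shorter validity times bound how stale the cache can get even without on-demand invalidation. A sketch with arbitrary example times:

    proxy_cache_valid  200 301 302  10m;   # successful and redirect responses
    proxy_cache_valid  any          1m;    # everything else expires quickly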
Max From dimentiy2k at gmail.com Sun Feb 12 10:59:43 2012 From: dimentiy2k at gmail.com (Dmitry Timoshenko) Date: Sun, 12 Feb 2012 14:59:43 +0400 Subject: How to setup nginx to make php works in site subdirectories Message-ID: <4F379B9F.5090707@gmail.com> Hello, I'm nuewbie in nginx, I've installed and setup nginx & php, everything is fine except .php files located in site's subdirectories are not processed at all. i.e. example.com/download.php works fine, but example.com/stuff/dosomething.php is sent to client as plain text. Please, would any kind soul tell me what should I change to resolve the problem. Thank you. I use those settings. # # example.com # server { listen 80; server_name example.com; access_log /var/log/nginx/example.com.access.log; location / { root /var/www/nginx-default/example.com; index index.html index.htm index.php; } ## Parse all .php file in the /var/www directory location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/nginx-default/example.com$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } } upstream backend { server 127.0.0.1:9000; } From frumentius at gmail.com Sun Feb 12 11:11:15 2012 From: frumentius at gmail.com (Joe) Date: Sun, 12 Feb 2012 18:11:15 +0700 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <4F379B9F.5090707@gmail.com> References: <4F379B9F.5090707@gmail.com> Message-ID: Hello, Maybe you could use virtual conf, ie: location ~ \.php$ { root /home/example/public_html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/example/public_html$fastcgi_script_name; include fastcgi_params; } Regards, Joe On Sun, Feb 12, 2012 at 5:59 PM, Dmitry Timoshenko wrote: > Hello, > > I'm nuewbie in nginx, I've installed and setup nginx & php, > everything is fine except .php files located in site's subdirectories are > not processed at all. > > i.e. example.com/download.php works fine, but > example.com/stuff/dosomething.**phpis sent to client as plain text. > > Please, would any kind soul tell me what should I change to resolve the > problem. > Thank you. > > I use those settings. 
> > # > # example.com > # > > server { > listen 80; > server_name example.com; > > access_log /var/log/nginx/example.com.**access.log; > > location / { > root /var/www/nginx-default/example**.com ; > index index.html index.htm index.php; > } > > ## Parse all .php file in the /var/www directory > location ~ .php$ { > fastcgi_split_path_info ^(.+\.php)(.*)$; > fastcgi_pass backend; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME /var/www/nginx-default/example** > .com $fastcgi_script_name; > include fastcgi_params; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_intercept_errors on; > fastcgi_ignore_client_abort off; > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > } > > ## Disable viewing .htaccess & .htpassword > location ~ /\.ht { > deny all; > } > } > > upstream backend { > server 127.0.0.1:9000; > } > > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Feb 12 11:33:09 2012 From: nginx-forum at nginx.us (dimentiy) Date: Sun, 12 Feb 2012 06:33:09 -0500 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: References: Message-ID: <31397c4fc28b0905977fcdabea6bd974.NginxMailingListEnglish@forum.nginx.org> Hello, Joe, I've tried to use settings from your post, unfortunately nothing changed. Scripts in site's root are processed, but in subdirectories are not ( Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222301,222303#msg-222303 From frumentius at gmail.com Sun Feb 12 12:21:35 2012 From: frumentius at gmail.com (Joe) Date: Sun, 12 Feb 2012 19:21:35 +0700 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <31397c4fc28b0905977fcdabea6bd974.NginxMailingListEnglish@forum.nginx.org> References: <31397c4fc28b0905977fcdabea6bd974.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello Dmitry, Please try to put the subdirectory name behind / Regards, Joe On Sun, Feb 12, 2012 at 6:33 PM, dimentiy wrote: > Hello, Joe, I've tried to use settings from your post, unfortunately > nothing changed. > Scripts in site's root are processed, but in subdirectories are not ( > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222301,222303#msg-222303 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Sun Feb 12 12:53:10 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sun, 12 Feb 2012 16:53:10 +0400 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <4F379B9F.5090707@gmail.com> References: <4F379B9F.5090707@gmail.com> Message-ID: 12 ??????? 2012, 15:00 ?? Dmitry Timoshenko : > Hello, > > I'm nuewbie in nginx, I've installed and setup nginx & php, > everything is fine except .php files located in site's subdirectories > are not processed at all. > > i.e. example.com/download.php works fine, but > example.com/stuff/dosomething.php is sent to client as plain text. 
> > Please, would any kind soul tell me what should I change to resolve the > problem. > Thank you. > > I use those settings. > > # > # example.com > # > > server { > listen 80; > server_name example.com; > > access_log /var/log/nginx/example.com.access.log; > > location / { > root /var/www/nginx-default/example.com; > index index.html index.htm index.php; > } > > ## Parse all .php file in the /var/www directory > location ~ .php$ { > fastcgi_split_path_info ^(.+\.php)(.*)$; > fastcgi_pass backend; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /var/www/nginx-default/example.com$fastcgi_script_name; > include fastcgi_params; > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > fastcgi_intercept_errors on; > fastcgi_ignore_client_abort off; > fastcgi_connect_timeout 60; > fastcgi_send_timeout 180; > fastcgi_read_timeout 180; > fastcgi_buffer_size 128k; > fastcgi_buffers 4 256k; > fastcgi_busy_buffers_size 256k; > fastcgi_temp_file_write_size 256k; > } > > ## Disable viewing .htaccess & .htpassword > location ~ /\.ht { > deny all; > } > } > > upstream backend { > server 127.0.0.1:9000; > } That isn't your complete configuration now, is it? Another location block (which you haven't posted) seems to be matching your subdirectory requests. Add this to your server config and then check your error log to see what's matching your requests - look for log entries that look like "using configuration" to find the matching location block. error_log /var/log/nginx/example.com.error.log debug; root /var/www/nginx-default/example.com; Always set the root directory inside the server configuration block, otherwise it will be reset to the --prefix configuration argument that nginx was compiled with (run "nginx -V" to find out yours). Max From francis at daoine.org Sun Feb 12 12:59:35 2012 From: francis at daoine.org (Francis Daly) Date: Sun, 12 Feb 2012 12:59:35 +0000 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <31397c4fc28b0905977fcdabea6bd974.NginxMailingListEnglish@forum.nginx.org> References: <31397c4fc28b0905977fcdabea6bd974.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120212125935.GC22076@craic.sysops.org> On Sun, Feb 12, 2012 at 06:33:09AM -0500, dimentiy wrote: Hi there, > Scripts in site's root are processed, but in subdirectories are not ( Using your configuration, I'm unable to reproduce your reported output. There are some tidy-up fixes that could be done to your configuration, but what you have should be working already. I suspect that either it *is* working but your browser cache is hiding that from you; or you have an extra location{} block that means that your "\.php$" one isn't being used. Suggestion: create a file /var/www/nginx-default/example.com/stuff/new.php with the contents: == stuff/new.php == and look at the result of getting example.com/stuff/new.php. That should remove the "browser cache" possibility. If it is still not what you expect, then something like the debug log should show you which location{} is being used for this request. 
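A minimal sketch of how such a debug log is usually enabled - this assumes an nginx binary built with --with-debug; the log path and the test client address are placeholders:

    # globally:
    error_log  /var/log/nginx/debug.log  debug;

    # or, to limit the very verbose output to a single test client:
    events {
        debug_connection  192.0.2.1;
    }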
Good luck with it, f -- Francis Daly francis at daoine.org From guilherme.e at gmail.com Sun Feb 12 15:32:51 2012 From: guilherme.e at gmail.com (Guilherme) Date: Sun, 12 Feb 2012 13:32:51 -0200 Subject: .htaccess issues In-Reply-To: <8739aisbnk.wl%appa@perusio.net> References: <877gzusife.wl%appa@perusio.net> <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> <8739aisbnk.wl%appa@perusio.net> Message-ID: On Fri, Feb 10, 2012 at 5:58 PM, Ant?nio P. P. Almeida wrote: > On 10 Fev 2012 19h40 WET, guilherme.e at gmail.com wrote: > > > Adri?n, > > > > This would fix the problem, but I don't know the directories that > > has a .htaccess file with allow/deny. > > > > Example: > > > > Scenario: nginx (cache/proxy) + back-end apache > > > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ ./ > > ../ .htaccess image.jpg root at srv1 [~]# cat > > /home/domain/public_html/restrictedimages/.htaccess allow from > > x.x.x.x deny from all > > > > In the first access (source IP: x.x.x.x) to > > http://domain.com/restrictedimages/image.jpg, nginx proxy request to > > apache and cache response. The problem comes in other request from > > other IP address different from x.x.x.x. Nginx deliver the objects > > from cache, even if the ip address is not authorized, because nginx > > doesn't understand .htaccess. > > > > I would like to bypass cache in this cases, maybe using > > proxy_cache_bypass, but I don't know how. Any idea? > > I already gave you a suggestion. You just need to use a geo directive > where you enumerate all the IPs that can **access**. > > AFAICT this foots the bill. No need to complicate it with headers > being passed to the backend. > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Antonio, geo directive would be a great idea if I know the IPs that can access the website (or directory), but the application is not mine, and the customer can change this list (in .htaccess). In this case the ip list in nginx would be outdated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guilherme.e at gmail.com Sun Feb 12 15:37:33 2012 From: guilherme.e at gmail.com (Guilherme) Date: Sun, 12 Feb 2012 13:37:33 -0200 Subject: .htaccess issues In-Reply-To: References: <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Message-ID: On Fri, Feb 10, 2012 at 6:08 PM, Max wrote: > > 10 ??????? 2012, 23:40 ?? Guilherme : > > This would fix the problem, but I don't know the directories that has a > > .htaccess file with allow/deny. > > > > Example: > > > > Scenario: nginx (cache/proxy) + back-end apache > > > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ > > ./ ../ .htaccess image.jpg > > root at srv1 [~]# cat /home/domain/public_html/restrictedimages/.htaccess > > allow from x.x.x.x > > deny from all > > > > In the first access (source IP: x.x.x.x) to > > http://domain.com/restrictedimages/image.jpg, nginx proxy request to > apache > > and cache response. The problem comes in other request from other IP > > address different from x.x.x.x. Nginx deliver the objects from cache, > even > > if the ip address is not authorized, because nginx doesn't understand > > .htaccess. > > > > I would like to bypass cache in this cases, maybe using > proxy_cache_bypass, > > but I don't know how. Any idea? 
> > You could use this: > > proxy_cache_key $scheme$remote_addr$host$$server_port$request_uri; > > This would make originating IP addresses ($remote_addr) part of > the cache key, so different clients would get the correct responses > from the cache just as if they were accessing the backend directly, > there's no need to bypass the cache at all. > > Max > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Max, good idea, but in the other requests, that I want to cache responses, the cache size will grow too fast, because the same object will be cached a lot of times, cause the ip adress is in the cache key (one cache entry per IP). -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Feb 12 16:49:24 2012 From: francis at daoine.org (Francis Daly) Date: Sun, 12 Feb 2012 16:49:24 +0000 Subject: .htaccess issues In-Reply-To: References: Message-ID: <20120212164924.GD22076@craic.sysops.org> On Fri, Feb 10, 2012 at 03:08:24PM -0200, Guilherme wrote: Hi there, > The first time when a allowed IP access this area (i.e. /downloads), the > object is cached, but when a unauthorized IP access the same dir, it gets > the object from cache. > > Is there a way to deal with that? Unfortunately, the only answer is "fix your application". If you (apache) want the content not to be cached, you must set the "please do not cache" http headers. Any proxy between the client and the server can cache the content, and serve it to other clients, unless the origin server marks it uncacheable. This isn't nginx-specific. See, for example, http://httpd.apache.org/docs/2.2/mod/mod_cache.html and http://httpd.apache.org/docs/2.2/caching.html#security for apache's notes on the same topic. If you can't configure apache to correctly declare what is and isn't cacheable, then you must decide yourself which responses nginx should (or should not) cache. After you've decided which they are, you can configure nginx to match. If you can't reliably tell nginx what is cacheable, the only safe option is to cache nothing in nginx. But you'll (probably) have to address the same issue for any proxy between the client and the server. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Feb 12 17:48:42 2012 From: nginx-forum at nginx.us (dimentiy) Date: Sun, 12 Feb 2012 12:48:42 -0500 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: References: Message-ID: <155ea3f032f0e97e97464ef86ca73197.NginxMailingListEnglish@forum.nginx.org> Hello, Joe, behind what /? I've tried different variants. But I really misunderstand how it should be. The goal is to make nginx process .php files whereever they located begining from site's root folder. This settings works fine with scripts in site's root. location ~ \.php$ { root /var/www/nginx-default/example.com; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/nginx-default/example.com$fastcgi_script_name; include fastcgi_params; } This redirects 'example.com/popups/popup_on_enter.php' to php-fpm but it returns 'File not found'. 
location /popups/popup_on_enter.php { root /var/www/nginx-default/example.com/popups; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/nginx-default/example.com/popups$fastcgi_script_name; include fastcgi_params; } I suppose there should be a variant with one 'location' directive processing all the files in /var/www/nginx-default/example.com. But due to my limited experience I can't work out how to achieve this. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222301,222314#msg-222314 From nginx-forum at nginx.us Sun Feb 12 18:04:30 2012 From: nginx-forum at nginx.us (dimentiy) Date: Sun, 12 Feb 2012 13:04:30 -0500 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <20120212125935.GC22076@craic.sysops.org> References: <20120212125935.GC22076@craic.sysops.org> Message-ID: <876303434145f7287b7fe3538a47c6d1.NginxMailingListEnglish@forum.nginx.org> Hello, Francis, Yes, I have another 'location', but within another 'server'. To be exact, I have four 'servers' with different roots.
Like > 'one.example.com', 'two.example.com', 'three.example.com', > 'four.example.com'. > Each of them are similar and looks like I posted above as I said the > difference is only in root (location / { root xxx; }). When I use a config like what you posted, I see no problem. (Generally, you don't put "root" inside a location{} block unless you know why you are doing it. Leave it inside the server{} block and things are clearer.) Can you provide an exact config file that shows the problem, that I can copy-paste and demonstrate the problem myself? Usually, if you begin with a simple config file and just add a few lines at a time, it will become clear at what point things stopped working as you expect. > You say that verything is fine and should already work. Sounds cool. > I'm dummy with nginx and php-fpm. Using this config file === events { worker_connections 1024; debug_connection 127.0.0.1; } http { server { listen 8000; server_name one.example.com; root /usr/local/nginx/one; index index.html index.htm index.php; location / { } location ~ php { fastcgi_pass backend; include fastcgi.conf; } } server { listen 8000; server_name two.example.com; root /usr/local/nginx/two; index index.html index.htm index.php; location ~ php { fastcgi_pass backend; include fastcgi.conf; } } upstream backend { server unix:php.sock; } } === I get the expected results from each of curl -i -H Host:\ one.example.com http://localhost:8000/ curl -i -H Host:\ one.example.com http://localhost:8000/get.php curl -i -H Host:\ one.example.com http://localhost:8000/sub/get.php curl -i -H Host:\ two.example.com http://localhost:8000/ curl -i -H Host:\ two.example.com http://localhost:8000/sub/ curl -i -H Host:\ two.example.com http://localhost:8000/sub/index.php curl -i -H Host:\ two.example.com http://localhost:8000/sub/get.php where I see (respectively) the content of /usr/local/nginx/one/index.htm, and then the php-processed output of /usr/local/nginx/one/get.php, /usr/local/nginx/one/sub/get.php, /usr/local/nginx/two/index.php, /usr/local/nginx/two/sub/index.php (twice), and /usr/local/nginx/two/sub/get.php, The only difference between include fastcgi.conf; and include fastcgi_params; is that the former includes the line fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; so that it doesn't have to be added separately. (Note also that the "~ php" locations are probably not exactly what you want on a live site.) If you still have a problem with your configuration, please include the usual debug details: * what did you do * what did you see * what did you expect to see and someone may have a better chance of reproducing what you are seeing. > Could you tell, please, how to enable debug log? http://nginx.org/en/docs/debugging_log.html http://wiki.nginx.org/Debugging Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Feb 12 20:28:03 2012 From: nginx-forum at nginx.us (dimentiy) Date: Sun, 12 Feb 2012 15:28:03 -0500 Subject: How to setup nginx to make php works in site subdirectories In-Reply-To: <0f6b460f6db4715b84ee32bbdffa27ae.NginxMailingListEnglish@forum.nginx.org> References: <20120212125935.GC22076@craic.sysops.org> <876303434145f7287b7fe3538a47c6d1.NginxMailingListEnglish@forum.nginx.org> <0f6b460f6db4715b84ee32bbdffa27ae.NginxMailingListEnglish@forum.nginx.org> Message-ID: <26df77d5a11b02b0b077288d2cb9c515.NginxMailingListEnglish@forum.nginx.org> Thank you every body, people! I'm quite stupid... I entirly can't think... 
There was short tags disabled, all the scripts use short tags, but only the one I tested locating in root directory had References: <20120212125935.GC22076@craic.sysops.org> <876303434145f7287b7fe3538a47c6d1.NginxMailingListEnglish@forum.nginx.org> <0f6b460f6db4715b84ee32bbdffa27ae.NginxMailingListEnglish@forum.nginx.org> <26df77d5a11b02b0b077288d2cb9c515.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120212215232.GF22076@craic.sysops.org> On Sun, Feb 12, 2012 at 03:28:03PM -0500, dimentiy wrote: Hi there, > I'm quite stupid... I entirly can't think... There was short tags > disabled, all the scripts use short tags, but only the one I tested > locating in root directory had 
From nginxyz at mail.ru Mon Feb 13 01:08:04 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 13 Feb 2012 05:08:04 +0400 Subject: .htaccess issues In-Reply-To: References: <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> Message-ID: 12 ??????? 2012, 19:37 ?? Guilherme : > On Fri, Feb 10, 2012 at 6:08 PM, Max wrote: > > > > > 10 ??????? 2012, 23:40 ?? Guilherme : > > > This would fix the problem, but I don't know the directories that has a > > > .htaccess file with allow/deny.
> > > > > > Example: > > > > > > Scenario: nginx (cache/proxy) + back-end apache > > > > > > root at srv1 [~]# ls -a /home/domain/public_html/restrictedimages/ > > > ./ ../ .htaccess image.jpg > > > root at srv1 [~]# cat /home/domain/public_html/restrictedimages/.htaccess > > > allow from x.x.x.x > > > deny from all > > > > > > In the first access (source IP: x.x.x.x) to > > > http://domain.com/restrictedimages/image.jpg, nginx proxy request to > > apache > > > and cache response. The problem comes in other request from other IP > > > address different from x.x.x.x. Nginx deliver the objects from cache, > > even > > > if the ip address is not authorized, because nginx doesn't understand > > > .htaccess. > > > > > > I would like to bypass cache in this cases, maybe using > > proxy_cache_bypass, > > > but I don't know how. Any idea? > > > > You could use this: > > > > proxy_cache_key $scheme$remote_addr$host$$server_port$request_uri; > > > > This would make originating IP addresses ($remote_addr) part of > > the cache key, so different clients would get the correct responses > > from the cache just as if they were accessing the backend directly, > > there's no need to bypass the cache at all. > > > > Max > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > Max, good idea, but in the other requests, that I want to cache responses, > the cache size will grow too fast, because the same object will be cached a > lot of times, cause the ip adress is in the cache key (one cache entry per > IP). I suggest you recompile your nginx with the Lua module included: http://wiki.nginx.org/HttpLuaModule Then you could use something like this: proxy_cache_key $scheme$host$server_port$uri; location / { access_by_lua ' local res = ngx.location.capture("/test_access" .. ngx.var.request_uri) if res.status == ngx.HTTP_OK then return end if res.status == ngx.HTTP_FORBIDDEN then ngx.exit(res.status) end ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR) '; proxy_set_header Host $host:$proxy_port; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://backend/; } location /test_access/ { internal; proxy_method HEAD; proxy_set_header X-Forwarded-For $remote_addr; proxy_cache_bypass "Always bypass the cache!"; proxy_no_cache "Never store the response in the cache!"; proxy_pass http://backend/; } The access_by_lua block initiates a local non-blocking subrequest for "/test_access/$request_uri", which is handled by the /test_access/ location block as follows: the request method is set to HEAD instead of the original POST or GET request in order to find out whether the original request would be allowed or denied without the overhead of having to transfer any files. The X-Forwarded-For header is also reset to the originating IP address. Any X-Forwarded-For headers set by clients are removed and replaced, so the backend server can rely on this header for IP-based access control. The Apache mod_remoteip module can be configured to make sure Apache always uses the originating IP address from the X-Forwarded-For header: http://httpd.apache.org/docs/trunk/mod/mod_remoteip.html The next two directives make sure that the cache is always bypassed and that no HEAD request responses are cached because you want to make sure you have the latest access control information. 
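One detail worth noting about the two directives just mentioned: for proxy_cache_bypass and proxy_no_cache, any parameter value that is non-empty and not "0" counts as true, so the string literals in the example are simply self-documenting truthy values. A minimal equivalent sketch would be:

    proxy_cache_bypass 1;
    proxy_no_cache 1;
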
The original request URI is then passed on to the backend (note the trailing slash), and the response is captured in the res variable inside the access_by_lua block. If the subrequest was completed with the HTTP OK status code, access is allowed, so after returning from the access_by_lua block the Host and X-Forwarded-For headers are set and the original request is processed - first the cache is checked and if there is no matching entry the request is passed on to the backend server and the response is cached under such a key that makes it possible for a single copy of a file to be stored in the cache. If the subrequest is completed with the HTTP FORBIDDEN status code or any other error, the access_by_lua block is exited in a way that terminates further processing and returns the status code. There you go, thanks to the speed and non-blocking nature of Lua, you now have a solution that causes minimal overhead by allowing you to take full advantage of both caching and IP-based access control. Max From nginx-forum at nginx.us Mon Feb 13 02:21:35 2012 From: nginx-forum at nginx.us (LetsPlay) Date: Sun, 12 Feb 2012 21:21:35 -0500 Subject: Help with debugging - stall on response size is a symptom but what is the cause? In-Reply-To: <20120210154942.GC67687@mdounin.ru> References: <20120210154942.GC67687@mdounin.ru> Message-ID: > nginx -V nginx: nginx version: nginx/1.0.5 nginx: TLS SNI support enabled nginx: configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-upstream-fair Logs: http://pastebin.com/W4Tf0vcF Here is a summary again: Details: System: Ubuntu 11.10, Play 1.2.4 unzipped and installed, Nginx 1.0.5 via apt-get Network: www.myApp.com points to a static IP, which is a modem/router with LAN address 192.168.1.1 wired to my box 192.168.1.13. 
The router uses the NAT rule "forward port 80 to 192.168.1.13" (my box) Note that local access (http://localhost:9027) works fine in all cases Case 1: WORKS vanilla "play new myApp" works fine at http://localhost:9027 Case 2: FAIL vanilla "play new myApp" times out at http://www.myApp.com, nothing received by browser Case 3: WORKS hacked index.html with short toy css, works via www.myApp.com Case 4: FAIL Case 3 with lengthened css, long delay via www.myApp.com, css not used Case 5: WORKS Case 4 but with nginx "proxy_buffering off;", works via www.myApp.com Case 6: FAIL Case 5 with an extra css comment line, long delay via www.myApp.com, css not used Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222240,222326#msg-222326 From quintinpar at gmail.com Mon Feb 13 07:41:13 2012 From: quintinpar at gmail.com (Quintin Par) Date: Mon, 13 Feb 2012 13:11:13 +0530 Subject: =?UTF-8?Q?Sharing_rate_limiting_data_between_multiple_nginx_LB=E2=80=99s?= Message-ID: I have multiple nginx machines running and proxy LB through a round robin DNS mechanism. I do rate limiting as follows limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; location / { limit_req zone=pw burst=5 nodelay; But this is per machine. Can this data be shared between the load balancers so that rate limiting is global and I can scale out. -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Mon Feb 13 07:43:48 2012 From: quintinpar at gmail.com (Quintin Par) Date: Mon, 13 Feb 2012 13:13:48 +0530 Subject: =?UTF-8?Q?Re=3A_Sharing_rate_limiting_data_between_multiple_nginx_LB?= =?UTF-8?Q?=E2=80=99s?= In-Reply-To: References: Message-ID: Correction: I have multiple nginx machines running proxy and LB's through a round robin DNS mechanism. On Mon, Feb 13, 2012 at 1:11 PM, Quintin Par wrote: > I have multiple nginx machines running and proxy LB through a round robin > DNS mechanism. > > I do rate limiting as follows > > limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; > > location / { > > limit_req zone=pw burst=5 nodelay; > > But this is per machine. Can this data be shared between the load > balancers so that rate limiting is global and I can scale out. > > -Quintin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Feb 13 12:46:55 2012 From: r at roze.lv (Reinis Rozitis) Date: Mon, 13 Feb 2012 14:46:55 +0200 Subject: try_files and filter modules Message-ID: Hello, is there a reason (or workaround) why try_files doesn't work with filter modules (in my case its image_filter)? For example: location ~ (.*)/small_(.*) { try_files $uri $1/medium_$2 $1/large_$2; image_filter crop 175 175; } The crop filter is never applied. The idea is to generate the thumbnail in a chain fashion - first detect if there is already an image in needed size/name then try to resize it from medium sized version and if it doesn't exist use the original source file. I can rewrite it to something like: location ~ /small_ { try_files $uri @smallresize; } location @smallresize { internal; rewrite (.*)/small_(.*) $1/large_$2 break; image_filter crop 175 175; } But obviously it misses the "medium" step. Also wouldn't like to use multiple 'if's. p.s. it also seems that try_files doesn't work with more than one @named location - every except the last one is checked ( stat() ) as physical files. rr From ne at vbart.ru Mon Feb 13 13:37:45 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Mon, 13 Feb 2012 17:37:45 +0400 Subject: try_files and filter modules In-Reply-To: References: Message-ID: <201202131737.45380.ne@vbart.ru> On Monday 13 February 2012 16:46:55 Reinis Rozitis wrote: > Hello, > is there a reason (or workaround) why try_files doesn't work with filter > modules (in my case its image_filter)? > > > For example: > > location ~ (.*)/small_(.*) { > try_files $uri $1/medium_$2 $1/large_$2; > image_filter crop 175 175; > } > > The crop filter is never applied. [...] Probably, because $uri and $1/medium_$2 doesn't exist or smaller than your crop. location ~ (.*)/small_(.*) { try_files $uri $1/medium_$2 $1/large_$2 =404; image_filter crop 175 175; } wbr, Valentin V. Bartenev From r at roze.lv Mon Feb 13 13:55:35 2012 From: r at roze.lv (Reinis Rozitis) Date: Mon, 13 Feb 2012 15:55:35 +0200 Subject: try_files and filter modules In-Reply-To: <201202131737.45380.ne@vbart.ru> References: <201202131737.45380.ne@vbart.ru> Message-ID: > try_files $uri $1/medium_$2 $1/large_$2 =404; Hmm, thx the '= 404' is all that I missed. p.s. the files indeed didn't exist (expected) but the request always returned the source/original file (without cropping it), but now with the additional 404 it works nicely. rr From ne at vbart.ru Mon Feb 13 14:07:31 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 13 Feb 2012 18:07:31 +0400 Subject: try_files and filter modules In-Reply-To: References: <201202131737.45380.ne@vbart.ru> Message-ID: <201202131807.31651.ne@vbart.ru> On Monday 13 February 2012 17:55:35 Reinis Rozitis wrote: [...] > p.s. the files indeed didn't exist (expected) but the request always > returned the source/original file (without cropping it), but now with the > additional 404 it works nicely. > It was handled by another location. The last part in the try_files directive always does internal redirect (to $1/large_$2 in your case, so then it didn't catch by "location ~ (.*)/small_(.*)"). wbr, Valentin V. Bartenev From baishen.lists at gmail.com Mon Feb 13 16:08:25 2012 From: baishen.lists at gmail.com (Bai Shen) Date: Mon, 13 Feb 2012 11:08:25 -0500 Subject: Binding nginx to a single interface In-Reply-To: <20120209233420.GT67687@mdounin.ru> References: <20120209175433.GO67687@mdounin.ru> <20120209233420.GT67687@mdounin.ru> Message-ID: But I'm not defining an ip server_name. Isn't nginx listening for server_names? Right now I have example.com rewriting to www.example.com They both listen on 10.1.2.3. Previously, I could connect to 10.1.2.3 and it would redirect me to the web server. Now when I connect to 10.1.2.3 it rewrites the url to www.example.com and because I'm internal, that never resolves. How do I set nginx to do the redirect without the rewrite? On Thu, Feb 9, 2012 at 6:34 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 09, 2012 at 01:06:14PM -0500, Bai Shen wrote: > > > They do. > > > > However, I do have some weird behaviour. I have the server_name set to > > www.example.com and that correctly connects me to my web server. But > if I > > type in 10.1.2.3, that connects me to my web server as well, even though > I > > don't have a default rule setup. > > > > When I go to 10.1.2.4 I get a "Welcome to nginx!" page. > > When selecting server{} based on server_name nginx will look only > through server{} blocks which have the listen socket defined. 
> > That is, if you have > > server { > listen 80; > server_name default; > } > > server { > listen 10.1.2.3:80; > server_name example.com; > } > > nginx will never consider "default" server if connection comes to > 10.1.2.3:80. All requests to 10.1.2.3:80 will end up in > "example.com" server as it's the only server defined for the > listen socket in question. > > More details may be found here: > > > http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers > > and in docs. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gregoriohc at gmail.com Mon Feb 13 17:42:18 2012 From: gregoriohc at gmail.com (=?ISO-8859-1?Q?Gregorio_Hern=E1ndez_Caso?=) Date: Mon, 13 Feb 2012 18:42:18 +0100 Subject: Serving dynamically generated files Message-ID: Hi everybody, I'm trying to serve dynamically generated TXT files using PHP through fastcgi_pass, but I have a problem that I can't solve. This is what I have done: - The TXT files are going to be used by a "GPRS printer" that read them from an url (ex: http://server/1234.txt) - I have configured Nginx to rewrite the request of this files to a PHP script - The PHP script returns the TXT contents and the same headers as for an static TXT file - If a load the URL on a browser, the content and the headers are fine. In fact, if I create the static TXT file with the same content of the PHP generated, my browser shows me exactly the same. My problem is that the "printer" is not reading the TXT file correctly. I know that the "printer" is working, because of two things: - If a create the static 1234.txt file, the "printer" prints it correctly - When the printer reads the dynamically generated TXT file, the PHP scripts also sends me an email, so I'm sure that the printer is connecting to the URL. So, my question is... ?is there any difference of how Nginx serves static files towards dynamically generated ones? I've researched through internet and I cannot find an answer :-( Thanks in advance! Gregorio -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Mon Feb 13 17:57:23 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 13 Feb 2012 17:57:23 +0000 Subject: .htaccess issues In-Reply-To: References: <877gzusife.wl%appa@perusio.net> <1644536179-1328897333-cardhu_decombobulator_blackberry.rim.net-685460195-@b4.c10.bise7.blackberry> <8739aisbnk.wl%appa@perusio.net> Message-ID: <87sjie1urg.wl%appa@perusio.net> On 12 Fev 2012 15h32 WET, guilherme.e at gmail.com wrote: You can use the auth_request module for that then. http://mdounin.ru/hg/ngx_http_auth_request_module I've replicated the mercurial repo on github: https://github.com/perusio/nginx-auth-request-module It involves setting up a location that proxy_pass(es) to the Apache upstream and returns 403 if not allowed to access. Be careful with the X-Forwarded-For header and how it's treated on the Apache side so that you get a real correspondence with the client, thus making the authorization procedure reliable. --- appa From nginxyz at mail.ru Mon Feb 13 18:04:42 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 13 Feb 2012 22:04:42 +0400 Subject: Serving dynamically generated files In-Reply-To: References: Message-ID: 13 ??????? 2012, 21:42 ?? 
Gregorio Hern?ndez Caso : > Hi everybody, > > I'm trying to serve dynamically generated TXT files using PHP through > fastcgi_pass, but I have a problem that I can't solve. > This is what I have done: > > - The TXT files are going to be used by a "GPRS printer" that read them > from an url (ex: http://server/1234.txt) > - I have configured Nginx to rewrite the request of this files to a PHP > script > - The PHP script returns the TXT contents and the same headers as for an > static TXT file > - If a load the URL on a browser, the content and the headers are fine. > In fact, if I create the static TXT file with the same content of the PHP > generated, my browser shows me exactly the same. > > My problem is that the "printer" is not reading the TXT file correctly. > I know that the "printer" is working, because of two things: > > - If a create the static 1234.txt file, the "printer" prints it correctly > - When the printer reads the dynamically generated TXT file, the PHP > scripts also sends me an email, so I'm sure that the printer is connecting > to the URL. > > So, my question is... ?is there any difference of how Nginx serves static > files towards dynamically generated ones? > > I've researched through internet and I cannot find an answer :-( First run "curl -v http://server/1234.txt" to make sure your PHP script is generating correct headers - in this case you should be sending "Content-Type: text/plain\r\n\r\n" before the actual content. A missing Content-Length header could also cause problems. Next, either enable debug level logging in nginx: error_log /var/log/nginx.error.log debug; or just use "nc -v -l 8080" on your server and have your "GPRS printer" connect to http://server:8080/1234.txt so you can check its request headers. Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 13 20:49:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Feb 2012 00:49:48 +0400 Subject: Binding nginx to a single interface In-Reply-To: References: <20120209175433.GO67687@mdounin.ru> <20120209233420.GT67687@mdounin.ru> Message-ID: <20120213204948.GT67687@mdounin.ru> Hello! On Mon, Feb 13, 2012 at 11:08:25AM -0500, Bai Shen wrote: > But I'm not defining an ip server_name. Isn't nginx listening for > server_names? > > Right now I have example.com rewriting to www.example.com They both listen > on 10.1.2.3. Previously, I could connect to 10.1.2.3 and it would redirect > me to the web server. Now when I connect to 10.1.2.3 it rewrites the url > to www.example.com and because I'm internal, that never resolves. How do I > set nginx to do the redirect without the rewrite? Sorry, I wasn't able to understand your question. Though overal you probably need to set server_name_in_redirect to off, see here: http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name_in_redirect And you probably want to re-read the article here: http://nginx.org/en/docs/http/request_processing.html Maxim Dounin > > On Thu, Feb 9, 2012 at 6:34 PM, Maxim Dounin wrote: > > > Hello! > > > > On Thu, Feb 09, 2012 at 01:06:14PM -0500, Bai Shen wrote: > > > > > They do. > > > > > > However, I do have some weird behaviour. I have the server_name set to > > > www.example.com and that correctly connects me to my web server. But > > if I > > > type in 10.1.2.3, that connects me to my web server as well, even though > > I > > > don't have a default rule setup. > > > > > > When I go to 10.1.2.4 I get a "Welcome to nginx!" page. 
> > > > When selecting server{} based on server_name nginx will look only > > through server{} blocks which have the listen socket defined. > > > > That is, if you have > > > > server { > > listen 80; > > server_name default; > > } > > > > server { > > listen 10.1.2.3:80; > > server_name example.com; > > } > > > > nginx will never consider "default" server if connection comes to > > 10.1.2.3:80. All requests to 10.1.2.3:80 will end up in > > "example.com" server as it's the only server defined for the > > listen socket in question. > > > > More details may be found here: > > > > > > http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers > > > > and in docs. > > > > Maxim Dounin > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginxyz at mail.ru Mon Feb 13 21:13:21 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Tue, 14 Feb 2012 01:13:21 +0400 Subject: .htaccess issues In-Reply-To: <87sjie1urg.wlappa@perusio.net> References: <87sjie1urg.wlappa@perusio.net> Message-ID: 13 ??????? 2012, 21:58 ?? Ant?nio P. P. Almeida : > On 12 Fev 2012 15h32 WET, guilherme.e at gmail.com wrote: > > You can use the auth_request module for that then. > > http://mdounin.ru/hg/ngx_http_auth_request_module > > I've replicated the mercurial repo on github: > > https://github.com/perusio/nginx-auth-request-module > > It involves setting up a location that proxy_pass(es) to the Apache > upstream and returns 403 if not allowed to access. Maxim's auth_request module is great, but AFAIK, it doesn't support caching, which makes it unsuited to the OP's situation because the OP wants to cache large files from the backend server(s). The access_by_lua solution I proposed, on the other hand, does make it possible to cache the content, and if one should want, even the IP-based authorization information in a separate cache zone. Max From appa at perusio.net Mon Feb 13 21:43:55 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 13 Feb 2012 21:43:55 +0000 Subject: .htaccess issues In-Reply-To: References: <87sjie1urg.wlappa@perusio.net> Message-ID: <87lio61k9w.wl%appa@perusio.net> On 13 Fev 2012 21h13 WET, nginxyz at mail.ru wrote: > > 13 ??????? 2012, 21:58 ?? Ant?nio P. P. Almeida : >> On 12 Fev 2012 15h32 WET, guilherme.e at gmail.com wrote: >> >> You can use the auth_request module for that then. >> >> http://mdounin.ru/hg/ngx_http_auth_request_module >> >> I've replicated the mercurial repo on github: >> >> https://github.com/perusio/nginx-auth-request-module >> >> It involves setting up a location that proxy_pass(es) to the Apache >> upstream and returns 403 if not allowed to access. > > Maxim's auth_request module is great, but AFAIK, it doesn't > support caching, which makes it unsuited to the OP's > situation because the OP wants to cache large files from > the backend server(s). > > The access_by_lua solution I proposed, on the other hand, > does make it possible to cache the content, and if one should > want, even the IP-based authorization information in a > separate cache zone. AFAIK the authorization occurs well before any content is served. What does that have to with caching? Using access_by_lua with a subrequest like you suggested is, AFAICT, equivalent to using auth_request. 
IIRC the OP wanted first to check if a given client could access a certain file. If it can, then it gets the content from the cache or whatever he decides. --- appa From mdounin at mdounin.ru Mon Feb 13 22:39:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Feb 2012 02:39:32 +0400 Subject: Help with debugging - stall on response size is a symptom but what is the cause? In-Reply-To: References: <20120210154942.GC67687@mdounin.ru> Message-ID: <20120213223932.GV67687@mdounin.ru> Hello! On Sun, Feb 12, 2012 at 09:21:35PM -0500, LetsPlay wrote: > > nginx -V > nginx: nginx version: nginx/1.0.5 > nginx: TLS SNI support enabled > nginx: configure arguments: --prefix=/etc/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi > --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid > --with-debug --with-http_addition_module --with-http_dav_module > --with-http_geoip_module --with-http_gzip_static_module > --with-http_image_filter_module --with-http_realip_module > --with-http_stub_status_module --with-http_ssl_module > --with-http_sub_module --with-http_xslt_module --with-ipv6 > --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl > --with-mail --with-mail_ssl_module > --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-echo > --add-module=/build/buildd/nginx-1.0.5/debian/modules/nginx-upstream-fair > > Logs: http://pastebin.com/W4Tf0vcF > > Here is a summary again: > > Details: > System: Ubuntu 11.10, Play 1.2.4 unzipped and installed, Nginx 1.0.5 > via apt-get > Network: www.myApp.com points to a static IP, which is a modem/router > with LAN address 192.168.1.1 wired to my box 192.168.1.13. The router > uses the NAT rule "forward port 80 to 192.168.1.13" (my box) > Note that local access (http://localhost:9027) works fine in all cases > Case 1: WORKS vanilla "play new myApp" works fine at > http://localhost:9027 > Case 2: FAIL vanilla "play new myApp" times out at http://www.myApp.com, > > nothing received by browser > Case 3: WORKS hacked index.html with short toy css, works via > www.myApp.com > Case 4: FAIL Case 3 with lengthened css, long delay via www.myApp.com, > css not used > Case 5: WORKS Case 4 but with nginx "proxy_buffering off;", works via > www.myApp.com > Case 6: FAIL Case 5 with an extra css comment line, long delay via > www.myApp.com, css not used Please try connecting nginx directly, not via NAT. There is nothing wrong in logs as far as I see, and described problem looks very similar to classic MTU issues. See here for details: http://en.wikipedia.org/wiki/Maximum_transmission_unit#Path_MTU_Discovery Maxim Dounin From quintinpar at gmail.com Tue Feb 14 02:08:40 2012 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 14 Feb 2012 07:38:40 +0530 Subject: =?UTF-8?Q?Re=3A_Sharing_rate_limiting_data_between_multiple_nginx_LB?= =?UTF-8?Q?=E2=80=99s?= In-Reply-To: References: Message-ID: Can someone help please... -Quintin On Mon, Feb 13, 2012 at 1:11 PM, Quintin Par wrote: > I have multiple nginx machines running and proxy LB through a round robin > DNS mechanism. 
> > I do rate limiting as follows > > limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; > > location / { > > limit_req zone=pw burst=5 nodelay; > > But this is per machine. Can this data be shared between the load > balancers so that rate limiting is global and I can scale out. > > -Quintin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Tue Feb 14 07:48:07 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Tue, 14 Feb 2012 11:48:07 +0400 Subject: .htaccess issues In-Reply-To: <87lio61k9w.wlappa@perusio.net> References: <87lio61k9w.wlappa@perusio.net> Message-ID: 14 ??????? 2012, 01:44 ?? Ant?nio P. P. Almeida : > On 13 Fev 2012 21h13 WET, nginxyz at mail.ru wrote: > > > > > 13 ??????? 2012, 21:58 ?? Ant?nio P. P. Almeida : > >> On 12 Fev 2012 15h32 WET, guilherme.e at gmail.com wrote: > >> > >> You can use the auth_request module for that then. > >> > >> http://mdounin.ru/hg/ngx_http_auth_request_module > >> > >> I've replicated the mercurial repo on github: > >> > >> https://github.com/perusio/nginx-auth-request-module > >> > >> It involves setting up a location that proxy_pass(es) to the Apache > >> upstream and returns 403 if not allowed to access. > > > > Maxim's auth_request module is great, but AFAIK, it doesn't > > support caching, which makes it unsuited to the OP's > > situation because the OP wants to cache large files from > > the backend server(s). > > > > The access_by_lua solution I proposed, on the other hand, > > does make it possible to cache the content, and if one should > > want, even the IP-based authorization information in a > > separate cache zone. > > AFAIK the authorization occurs well before any content is served. What > does that have to with caching? > > Using access_by_lua with a subrequest like you suggested is, AFAICT, > equivalent to using auth_request. > > IIRC the OP wanted first to check if a given client could access a > certain file. If it can, then it gets the content from the cache or > whatever he decides. Have you ever actually used the auth_request module? Or have you at least read the part of the auth_request module README file where Maxim wrote: "Note: it is not currently possible to use proxy_cache/proxy_store (and fastcgi_cache/fastcgi_store) for requests initiated by auth request module." Let's take the example from Maxim's README file: location /private/ { auth_request /auth; ... } location = /auth { proxy_pass ... proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } Let's say you configure caching in the /private/ location block, and the cache is empty. The first matching request would get passed on to the backend server, which would send back the latest requested file, if the request was allowed. The frontend server would then store the file in the cache and send it back to the client, as expected. The next matching request would again be passed on to the backend server, which would again send back the latest requested file, if the request was allowed, but this time the frontend server would send back to the client NOT the LATEST file, but the OLD file from the CACHE. The old file would remain in the cache, from where it would keep getting sent back to clients until it expired, while each new allowed request would cause the latest requested file to be retrieved from the backend server and then DISCARDED. Turning the proxy_no_cache directive on would prevent anything from being stored in the cache, as expected. 
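For concreteness, the hypothetical setup being walked through here - caching added to the /private/ block of the README example - would look roughly like the following sketch (the cache path, the zone name "private_cache" and the upstream "backend" are illustrative assumptions, not taken from the README):

    # http level
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=private_cache:10m;

    location /private/ {
        auth_request /auth;
        proxy_cache private_cache;
        proxy_cache_key $scheme$host$request_uri;
        proxy_pass http://backend;
    }
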
Turning the proxy_bypass directive on would cause the cache to be bypassed, and the latest requested file to be both sent back to the client and stored in the cache each time (as long as proxy_no_cache wasn't turned on), but either way you'd end up retrieving the file from the backend server on every request, which defeats the purpose of caching. However, forbidden response codes from the backend server are always correctly sent back to clients, and are never cached. Now, let's say you've given up on caching in the /private/ location block and decided to configure caching in the /auth location block. Again, the cache is empty. Here the first matching request passed on from the /private/ location block would be sent on to the backend server, which would send back the latest requested file, if the request was allowed. The frontend server would then store this file in the cache, but instead of sending it back to the client, it would just TERMINATE the connection (444-style)! The next matching request would again get passed on to the backend server, which would again send back the latest requested file, if the request was allowed, and in this case, the frontend server would send back the latest requested file to the client, but ONLY if there was an EXISTING cache entry for the request cache key! If there was NO cache entry for the request cache key, then the requested file would get retrieved from the backend server and stored in the cache, if the request was allowed, but NOTHING would be sent back to the client and the connection would be TERMINATED 444-style. Once a file got stored in the cache, it would REMAIN in the CACHE until it expired, while each new allowed request would cause the latest requested file to be retrieved from the backend, sent back to the client and DISCARDED without replacing the old file in the cache. If there was no cache entry for a request cache key and the proxy_no_cache directive was turned on, then each and every request would cause the requested file to be retrieved from the backend server and discarded, while the connection would ALWAYS be TERMINATED 444-style. Turning the proxy_bypass directive on would cause the cache to be bypassed, and the latest requested file to be retrieved from the backend server and stored in the cache, but nothing would be sent back to the client, and the connection would again be terminated 444-style. So, as you can surely see by now, using caching with the auth_request module not only defeats the purpose of caching, but also violates the expected functionality in serious and totally unexpected ways. The access_by_lua solution I proposed, on the other hand, can safely be used with caching. Maxim, feel free to add this explanation to the auth_request module README file. Max From hyperstruct at gmail.com Tue Feb 14 12:05:29 2012 From: hyperstruct at gmail.com (Massimiliano Mirra) Date: Tue, 14 Feb 2012 13:05:29 +0100 Subject: Can proxy_cache gzip cached content? Message-ID: Hello, I'm looking at using Nginx as a reverse proxy to cache a few millions HTML pages coming from a backend server. The cached content will very seldom (if at all) change so both proxy_cache and proxy_store could do, but all page URLs have a "/foo/$ID" pattern and IIUC with proxy_store that would cause millions of files in the same directory, which the filesystem might not be ecstatic about. So for now I'm going with proxy_cache and two levels of directories. All is going great in my preliminary tests. 
Now, rather than caching uncompressed files and gzipping them before serving them most of the time, it would be great if cached content could be gzipped once (on disk) and served as such most of the time. This would decrease both disk space requirements (by 7-8 times) and processor load. Is this doable? Patching/recompiling nginx as well as using Lua are fine with me. Serving gzipped content from the backend would in theory be possible though for other reasons better avoided. Thanks for any insight! Massimiliano -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 14 12:33:56 2012 From: nginx-forum at nginx.us (rishabh) Date: Tue, 14 Feb 2012 07:33:56 -0500 Subject: Send SubRequest after the Response is shown to the user Message-ID: <23718ecd3bad4841f822944b5cf601ea.NginxMailingListEnglish@forum.nginx.org> I tried using post_action but it was causing delay: I think this is how post_action works, please correct if am wrong. 1-> Nginx get the REQUEST 2-> Nginx receives the RESPONSE generated by PHP 3-> Nginx sends a SUB_REQUEST to 2nd server (using post_action then proxy_pass) 4-> Nginx recieves the RESPONSE from the 2nd server 5-> Nginx shows RESPONSE to user. Here at stage 3 & 4 there is an unecessary delay. The subrequest I am sending is just for analytics and dont want the response to the user be delayed. The subrequest does not modify the response. Here is what i want to achieve: 1-> Nginx get the REQUEST 2-> Nginx receives the RESPONSE generated by PHP 3-> Nginx shows RESPONSE to USER. 4-> Nginx generates a SUB_REQUEST to 2nd server 5-> Nginx recieves the RESPONSE from the 2nd server (optional) Is there any asynchronous module which can be used to achieve the above flow ? my conf file file looks like this http { server { location /sendlogging { internal; proxy_pass http://localhost:8080/index.php; } if($uri = /sendlogging) { break; } location / { .... post_action /sendlogging; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222383,222383#msg-222383 From nginx-forum at nginx.us Tue Feb 14 13:49:47 2012 From: nginx-forum at nginx.us (rmalayter) Date: Tue, 14 Feb 2012 08:49:47 -0500 Subject: Can proxy_cache gzip cached content? In-Reply-To: References: Message-ID: Just make sure the "Accept-Encoding: gzip" is being passed to your back-end, and let the back end do the compression. We actually normalize the Accept-Encoding header as well with an if statement. Also use the value of the Accept-Encoding header in your proxy_cache_key. This allows non-cached responses for those clients that don't support gzip (usually coming through an old, weird proxy). So you will get both compressed and uncompressed versions in your cache, but with our clients it's like 99% compressed versions at any one time. Example: server { #your server stuff here #normalize all accept-encoding headers to just gzip set $myae ""; if ($http_accept_encoding ~* gzip) { set $myae "gzip"; } location / { proxy_pass http://backend; #the following allows comressed responses from backend proxy_set_header Accept-Encoding $myae; proxy_cache zone1; proxy_cache_key "$request_uri $myae"; proxy_cache_valid 5m; proxy_cache_use_stale error updating; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222382,222391#msg-222391 From appa at perusio.net Tue Feb 14 14:33:05 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Tue, 14 Feb 2012 14:33:05 +0000 Subject: .htaccess issues In-Reply-To: References: <87lio61k9w.wlappa@perusio.net> Message-ID: <87ipj91o4e.wl%appa@perusio.net> On 14 Fev 2012 07h48 WET, nginxyz at mail.ru wrote: > Have you ever actually used the auth_request module? Or have you at > least read the part of the auth_request module README file where > Maxim wrote: location /private { } location /private/ { error_page 403 /403.html; auth_request /auth; try_files /cache?q=$uri =404; # there's a bug in 1.1.14 this won't work } location = /auth { proxy_pass ... proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } location /cache { internal; # usual cache stuff proxy_pass http://backend$arg_q; } It works for me here. I can post the debug log if necessary. --- appa From appa at perusio.net Tue Feb 14 14:52:46 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Tue, 14 Feb 2012 14:52:46 +0000 Subject: .htaccess issues In-Reply-To: References: <87lio61k9w.wlappa@perusio.net> Message-ID: <87hayt1n7l.wl%appa@perusio.net> On 14 Fev 2012 07h48 WET, nginxyz at mail.ru wrote: > Have you ever actually used the auth_request module? Or have you at > least read the part of the auth_request module README file where > Maxim wrote: Just to say that it works also with the cache on private: location /private/ { error_page 403 /403.html; auth_request /auth; # usual cache stuff proxy_pass http://backend; } location = /auth { proxy_pass ... proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; # It's *here* that you cannot cache... } The previous was just an example where the cache location was really private. It cannot be accessed directly. I suspect the reason it cannot be cached is simply because not only it would defeat the authorization purpose as well due to the fact that this module doesn't care about the request body. It only deals with the headers. --- appa From nginx-forum at nginx.us Tue Feb 14 15:45:28 2012 From: nginx-forum at nginx.us (Ralf) Date: Tue, 14 Feb 2012 10:45:28 -0500 Subject: How to process Facebook signed request to determine proxy target? Message-ID: <86587b038728bf0418d9dd1f7b9917f6.NginxMailingListEnglish@forum.nginx.org> Hello all, we're using a Nginx as proxy gateway to several customer specific apache instances on different servers. In order to determine where to route the traffic from a Facebook application, we need to extract the page id from the Facebook signed request (http://developers.facebook.com/docs/authentication/signed_request/) and look up its respective target apache instance in a table (probably from memcached). As I don't have much experience with Nginx yet, I'd like to ask for your feedback on some theoretical solutions I've come up with: 1) Extract the signed request from the POST data with the HttpFormInputModule or HttpLuaModule and use HttpMemcModule for interactions with memcached. >From my understanding, this whole operation could be defined in the nginx.conf or site.conf file then, which is favorable, but I haven't found out yet if I'd be actually be able to decode the signed request for processing. Is there maybe another third party module available for this? 2) Use the EmbeddedPerlModule, which is still in experimental stage as I understand, and write a perl script similar to the php script posted on the Facebook link above to retrieve the required page id value. 
Then access memcached from within the script or with the HttpMemcModule. 3) Write my own Nginx module. What would you suggest is the best way to go about this or do you maybe have better solutions? Thanks, Ralf Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222397,222397#msg-222397 From guilherme.e at gmail.com Tue Feb 14 16:47:19 2012 From: guilherme.e at gmail.com (Guilherme) Date: Tue, 14 Feb 2012 14:47:19 -0200 Subject: .htaccess issues In-Reply-To: <87hayt1n7l.wl%appa@perusio.net> References: <87lio61k9w.wlappa@perusio.net> <87hayt1n7l.wl%appa@perusio.net> Message-ID: I'll take a look in lua and auth_request module. Thanks for the suggestions. It was helpful! On Tue, Feb 14, 2012 at 12:52 PM, Ant?nio P. P. Almeida wrote: > On 14 Fev 2012 07h48 WET, nginxyz at mail.ru wrote: > > > Have you ever actually used the auth_request module? Or have you at > > least read the part of the auth_request module README file where > > Maxim wrote: > > > > Just to say that it works also with the cache on private: > > location /private/ { > error_page 403 /403.html; > auth_request /auth; > # usual cache stuff > proxy_pass http://backend; > } > > location = /auth { > proxy_pass ... > proxy_pass_request_body off; > proxy_set_header Content-Length ""; > proxy_set_header X-Original-URI $request_uri; > # It's *here* that you cannot cache... > } > > The previous was just an example where the cache location was really > private. It cannot be accessed directly. > > I suspect the reason it cannot be cached is simply because not only it > would defeat the authorization purpose as well due to the fact that > this module doesn't care about the request body. It only deals with > the headers. > > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikemc-nginx at terabytemedia.com Tue Feb 14 18:46:24 2012 From: mikemc-nginx at terabytemedia.com (Michael McCallister) Date: Tue, 14 Feb 2012 11:46:24 -0700 Subject: filter out headers for fastcgi cache In-Reply-To: References: <4F3458AF.9080507@terabytemedia.com> Message-ID: <4F3AAC00.4020609@terabytemedia.com> Max wrote, On 02/10/2012 03:54 PM: > http://wiki.nginx.org/NginxHttpProxyModule#proxy_hide_header > > The proxy_hide_header directive does exactly what you described, > the cookies get stored in the cache, but they are not passed back > to the client, so this is all you'd need: > > # Cache responses containing the Set-Cookie header as well > fastcgi_ignore_headers Set-Cookie; > > # Strip the Set-Cookie header from cached content > # when passing cached content back to clients > proxy_hide_header Set-Cookie; > > You could also include session cookies in the fastcgi_cache_key > to make sure new users get the default cached content, while > everyone else gets their session-specific cached content: > > fastcgi_cache_key "$cookie_PHPSESSID$scheme$request_method$host$server_port$request_uri"; > > Max Thanks Max! From hyperstruct at gmail.com Tue Feb 14 20:02:35 2012 From: hyperstruct at gmail.com (Massimiliano Mirra) Date: Tue, 14 Feb 2012 21:02:35 +0100 Subject: Can proxy_cache gzip cached content? In-Reply-To: References: Message-ID: Thanks for the pointers. As I wrote, I'd rather avoid gzipping in the backend, but if that's the only option so be it. 
I was also concerned about caching gzipped content using the value of Accept-Encoding in the cache key and ending with many duplicates because of slightly different yet equivalent headers, but your suggestion to normalize it solves it nicely. Cheers, Massimiliano On Tue, Feb 14, 2012 at 2:49 PM, rmalayter wrote: > Just make sure the "Accept-Encoding: gzip" is being passed to your > back-end, and let the back end do the compression. We actually normalize > the Accept-Encoding header as well with an if statement. Also use the > value of the Accept-Encoding header in your proxy_cache_key. This allows > non-cached responses for those clients that don't support gzip (usually > coming through an old, weird proxy). So you will get both compressed and > uncompressed versions in your cache, but with our clients it's like 99% > compressed versions at any one time. > > Example: > > server { > > #your server stuff here > > #normalize all accept-encoding headers to just gzip > set $myae ""; > if ($http_accept_encoding ~* gzip) { > set $myae "gzip"; > } > > location / { > proxy_pass http://backend; > #the following allows comressed responses from backend > proxy_set_header Accept-Encoding $myae; > > proxy_cache zone1; > proxy_cache_key "$request_uri $myae"; > proxy_cache_valid 5m; > proxy_cache_use_stale error updating; > > } > > > > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222382,222391#msg-222391 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginxyz at mail.ru Wed Feb 15 00:36:52 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 15 Feb 2012 04:36:52 +0400 Subject: .htaccess issues In-Reply-To: <87hayt1n7l.wlappa@perusio.net> References: <87hayt1n7l.wlappa@perusio.net> Message-ID: 14 ??????? 2012, 18:53 ?? Ant?nio P. P. Almeida : > Just to say that it works also with the cache on private: > > location /private/ { > error_page 403 /403.html; > auth_request /auth; > # usual cache stuff > proxy_pass http://backend; > } > > location = /auth { > proxy_pass ... > proxy_pass_request_body off; > proxy_set_header Content-Length ""; > proxy_set_header X-Original-URI $request_uri; > # It's *here* that you cannot cache... > } > > The previous was just an example where the cache location was really > private. It cannot be accessed directly. > > I suspect the reason it cannot be cached is simply because not only it > would defeat the authorization purpose as well due to the fact that > this module doesn't care about the request body. It only deals with > the headers. You may want to read my previous post again. I wrote that caching in the /private/ location block IS possible, but that each request initiates an authorization subrequest, which - if allowed - retrieves the requested file each and every time from the backend server and then discards this file without updating the cache, and THEN the original request retrieves the requested file from the backend server AGAIN and stores it in the cache, where it remains until it expires. If you used the auth_request module without caching, each allowed request would cause the same requested file to be retrieved twice from the backend server. 
With caching, each request would cause the requested file to be retrieved at least once if it was already in the cache, and twice if it was not in the cache, which defeats the purpose of caching, so you could just make sure the X-Forwarded-For header is set correctly and pass requests directly to the backend server. AFAIK, the only way you can reduce the overhead of constantly retrieving the entire file on each authorization subrequest would be to use "proxy_method HEAD;" inside the authorization subrequest location block, just like I did in my access_by_lua /test_access/ authorization subrequest location block. Luckily, the auth_request module seems to work properly with the proxy_method set to HEAD, at least in nginx version 1.1.14, so you should always use "proxy_method HEAD;" in auth_request module authorization subrequest location blocks to prevent entire files from being retrieved on each authorization subrequest. But if you use the auth_request module, you will still be unable to do safe and reliable caching in the authorization subrequest location block, while you can do any kind of caching safely and reliably in the access_by_lua authorization subrequest location block (/test_access/) - for example, you could set up a cache key such as "$remote_addr$host$server_port$uri" to cache authorization information to reduce the overhead even more without compromising IP-based access control. The auth_request module doesn't work with caching enabled in the authorization subrequest location block because its post subrequest handler only checks the subrequest response status code, while the access_by_lua post subrequest handler not only checks the subrequest response status code, but also restores the event handlers, and properly copies the subrequest response headers and body. BTW, nice work, Agentzh. :-) Max From nginxyz at mail.ru Wed Feb 15 00:56:38 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 15 Feb 2012 04:56:38 +0400 Subject: filter out headers for fastcgi cache In-Reply-To: <4F3AAC00.4020609@terabytemedia.com> References: <4F3458AF.9080507@terabytemedia.com> <4F3AAC00.4020609@terabytemedia.com> Message-ID: 14 ??????? 2012, 22:48 ?? Michael McCallister : > Max wrote, On 02/10/2012 03:54 PM: > > http://wiki.nginx.org/NginxHttpProxyModule#proxy_hide_header > > > > The proxy_hide_header directive does exactly what you described, > > the cookies get stored in the cache, but they are not passed back > > to the client, so this is all you'd need: > > > > # Cache responses containing the Set-Cookie header as well > > fastcgi_ignore_headers Set-Cookie; > > > > # Strip the Set-Cookie header from cached content > > # when passing cached content back to clients > > proxy_hide_header Set-Cookie; > > > > You could also include session cookies in the fastcgi_cache_key > > to make sure new users get the default cached content, while > > everyone else gets their session-specific cached content: > > > > fastcgi_cache_key > "$cookie_PHPSESSID$scheme$request_method$host$server_port$request_uri"; > > > > Max > > Thanks Max! You're welcome! Just run s/proxy_/fastcgi_/ on the above to get the fastcgi directives. 
Those directives are the same in all of the (fastcgi|proxy|scgi|uwsgi) modules: (fastcgi|proxy|scgi|uwsgi)_ignore_headers http://wiki.nginx.org/NginxHttpFcgiModule#fastcgi_ignore_headers (fastcgi|proxy|scgi|uwsgi)_hide_header http://wiki.nginx.org/NginxHttpFcgiModule#fastcgi_hide_header Max From piotr.sikora at frickle.com Wed Feb 15 02:03:01 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Wed, 15 Feb 2012 03:03:01 +0100 Subject: .htaccess issues In-Reply-To: References: <87hayt1n7l.wlappa@perusio.net> Message-ID: Max, > You may want to read my previous post again. I wrote that caching > in the /private/ location block IS possible, but that each request > initiates an authorization subrequest, which - if allowed - retrieves > the requested file each and every time from the backend server > and then discards this file without updating the cache, and THEN > the original request retrieves the requested file from the backend > server AGAIN and stores it in the cache, where it remains until it > expires. > (...) After reading your last 2 responses, I believe that you're either seriously misusing auth_request module or you simply don't understand how it's supposed to work. Long story short, Antonio is right and you're not. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From quintinpar at gmail.com Wed Feb 15 04:33:13 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 15 Feb 2012 10:03:13 +0530 Subject: Basic Auth only for external IPs and not localhost or LAN networks Message-ID: Hi all, I have a location directive with basic auth in it. location / { auth_basic "Admin Login"; auth_basic_user_file /etc/nginx/.htpasswd; How do I specify a rule such that the basic auth is applied only to external IPs and not to 127.0.0.x, 192.0.x & 10.0.x? I run Jenkins from a sub-domain and my git post-commit-hook needs to hit a URL under this location directive to trigger continuous integration. But this Jenkins cannot handle basic auth that blocks the URL submit. -Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Feb 15 05:01:04 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 15 Feb 2012 05:01:04 +0000 Subject: Basic Auth only for external IPs and not localhost or LAN networks In-Reply-To: References: Message-ID: <8762f81yi7.wl%appa@perusio.net> On 15 Fev 2012 04h33 WET, quintinpar at gmail.com wrote: > Hi all, > > I have a location directive with basic auth in it. > > location / { > > auth_basic "Admin Login"; > > auth_basic_user_file /etc/nginx/.htpasswd; > } > How do I specify a rule such that the basic auth is applied only to > external IPs and not to 127.0.0.x, 192.0.x & 10.0.x? > > I run Jenkins from a sub-domain and my git post-commit-hook needs to > hit a URL under this location directive to trigger continuous > integration. But this Jenkins cannot handle basic auth that blocks > the URL submit. > At the http level: geo $is_authorized { default 0; 127.0.0.1 1; 192.0.0.0/16 1; 10.0.0.0/16 1; } On the vhost: location / { error_page 418 @no-auth; if ($is_authorized) { return 418; } auth_basic "Admin Login"; auth_basic_user_file .htpasswd; # ... content handler directives here or default (static) } location @no-auth { # ... content handler directives here or default (static) } --- appa From appa at perusio.net Wed Feb 15 05:47:17 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Wed, 15 Feb 2012 05:47:17 +0000 Subject: Basic Auth only for external IPs and not localhost or LAN networks In-Reply-To: <8762f81yi7.wl%appa@perusio.net> References: <8762f81yi7.wl%appa@perusio.net> Message-ID: <8739ac1wd6.wl%appa@perusio.net> On 15 Fev 2012 05h01 WET, appa at perusio.net wrote: > On 15 Fev 2012 04h33 WET, quintinpar at gmail.com wrote: > >> Hi all, >> >> I have a location directive with basic auth in it. >> >> location / { >> >> auth_basic "Admin Login"; >> >> auth_basic_user_file /etc/nginx/.htpasswd; >> } > >> How do I specify a rule such that the basic auth is applied only to >> external IPs and not to 127.0.0.x, 192.0.x & 10.0.x? >> >> I run Jenkins from a sub-domain and my git post-commit-hook needs >> to hit a URL under this location directive to trigger continuous >> integration. But this Jenkins cannot handle basic auth that blocks >> the URL submit. >> > > At the http level: > > geo $is_authorized { > default 0; > 127.0.0.1 1; > 192.0.0.0/16 1; > 10.0.0.0/16 1; > } > Also using auth_request (avoids duplicating the location): location / { auth_basic "Admin Login"; auth_basic_user_file .htpasswd; satisfy any; auth_request /auth; # ... content handler directives here or default (static) } location /auth { if ($is_authorized) { return 200; } return 403; } --- appa From nginx at abraumhal.de Wed Feb 15 07:45:51 2012 From: nginx at abraumhal.de (Sven Ludwig) Date: Wed, 15 Feb 2012 08:45:51 +0100 Subject: I patched the spnego module Message-ID: <20120215074551.GQ23688@Debian-60-squeeze-64-minimal> Hi, i only want to announce, that i patched the spnego module, so you can use it as a 100% replacement to apache2+mod_auth_kerb. https://github.com/muhgatus/spnego-http-auth-nginx-module I installed in on 3 hosts running now for about 2 weeks without an error. Perhaps somebody wants to test it too? :) bye MUH! -- ;__;___, )..(_=_) (oo)| | #!/usr/bin/env python domain={'r4w.de': 'i hate perl'} domainparts=['python','blog','r4w','de'] print domain['.'.join(domainparts[[i for i in range(0,len(domainparts)) if domain.has_key('.'.join(domainparts[i:]))][0]:])] From mdounin at mdounin.ru Wed Feb 15 09:09:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Feb 2012 13:09:32 +0400 Subject: Basic Auth only for external IPs and not localhost or LAN networks In-Reply-To: References: Message-ID: <20120215090932.GF67687@mdounin.ru> Hello! On Wed, Feb 15, 2012 at 10:03:13AM +0530, Quintin Par wrote: > Hi all, > > I have a location directive with basic auth in it. > > location / { > > auth_basic "Admin Login"; > > auth_basic_user_file /etc/nginx/.htpasswd; > > How do I specify a rule such that the basic auth is applied only to > external IPs and not to 127.0.0.x, 192.0.x & 10.0.x? Use "satisfy any", see http://www.nginx.org/en/docs/http/ngx_http_core_module.html#satisfy location / { satisfy any; auth_basic "Admin Login"; auth_basic_user_file /etc/nginx/.htpasswd; allow 127.0.0.0/24; allow 192.0.0.0/16; allow 10.0.0.0/16; deny all; } Just a side note: the "192.0.x" should probably be "192.168.x" instead, but you should get the idea anyway. Maxim Dounin From quintinpar at gmail.com Wed Feb 15 10:16:27 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 15 Feb 2012 15:46:27 +0530 Subject: Basic Auth only for external IPs and not localhost or LAN networks In-Reply-To: <20120215090932.GF67687@mdounin.ru> References: <20120215090932.GF67687@mdounin.ru> Message-ID: Ha! What a simple solution. Thanks a lot! 
-Quintin On Wed, Feb 15, 2012 at 2:39 PM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 15, 2012 at 10:03:13AM +0530, Quintin Par wrote: > > > Hi all, > > > > I have a location directive with basic auth in it. > > > > location / { > > > > auth_basic "Admin Login"; > > > > auth_basic_user_file /etc/nginx/.htpasswd; > > > > How do I specify a rule such that the basic auth is applied only to > > external IPs and not to 127.0.0.x, 192.0.x & 10.0.x? > > Use "satisfy any", see > http://www.nginx.org/en/docs/http/ngx_http_core_module.html#satisfy > > location / { > satisfy any; > > auth_basic "Admin Login"; > auth_basic_user_file /etc/nginx/.htpasswd; > > allow 127.0.0.0/24; > allow 192.0.0.0/16; > allow 10.0.0.0/16; > deny all; > } > > Just a side note: the "192.0.x" should probably be "192.168.x" > instead, but you should get the idea anyway. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at mxcrypt.com Wed Feb 15 13:56:49 2012 From: max at mxcrypt.com (Maxim Khitrov) Date: Wed, 15 Feb 2012 08:56:49 -0500 Subject: Making http_auth_request_module a first-class citizen? Message-ID: Hello Maxim, Back in 2010 you wrote that it's not likely that your http_auth_request_module would make it into nginx core. I'm curious if anything has changed over the past two years? It's not that compiling this module into nginx is a problem (especially on FreeBSD), but I think a lot of people are inherently weary of depending on 3rd-party modules, since there is no guarantee of continued support. What do you think about adding your module to the main nginx repository? - Max From mdounin at mdounin.ru Wed Feb 15 14:40:20 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Feb 2012 18:40:20 +0400 Subject: nginx-1.1.15 Message-ID: <20120215144020.GM67687@mdounin.ru> Changes with nginx 1.1.15 15 Feb 2012 *) Feature: the "disable_symlinks" directive. *) Feature: the "proxy_cookie_domain" and "proxy_cookie_path" directives. *) Bugfix: nginx might log incorrect error "upstream prematurely closed connection" instead of correct "upstream sent too big header" one. Thanks to Feibo Li. *) Bugfix: nginx could not be built with the ngx_http_perl_module if the --with-openssl option was used. *) Bugfix: internal redirects to named locations were not limited. *) Bugfix: calling $r->flush() multiple times might cause errors in the ngx_http_gzip_filter_module. *) Bugfix: temporary files might be not removed if the "proxy_store" directive were used with SSI includes. *) Bugfix: in some cases non-cacheable variables (such as the $args variable) returned old empty cached value. *) Bugfix: a segmentation fault might occur in a worker process if too many SSI subrequests were issued simultaneously; the bug had appeared in 0.7.25. Maxim Dounin From mdounin at mdounin.ru Wed Feb 15 14:49:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Feb 2012 18:49:58 +0400 Subject: Making http_auth_request_module a first-class citizen? In-Reply-To: References: Message-ID: <20120215144958.GQ67687@mdounin.ru> Hello! On Wed, Feb 15, 2012 at 08:56:49AM -0500, Maxim Khitrov wrote: > Hello Maxim, > > Back in 2010 you wrote that it's not likely that your > http_auth_request_module would make it into nginx core. I'm curious if > anything has changed over the past two years? 
> > It's not that compiling this module into nginx is a problem > (especially on FreeBSD), but I think a lot of people are inherently > weary of depending on 3rd-party modules, since there is no guarantee > of continued support. > > What do you think about adding your module to the main nginx repository? There are no immediate plans, but this may happen somewhere in the future. Maxim Dounin From nginx-forum at nginx.us Wed Feb 15 14:55:17 2012 From: nginx-forum at nginx.us (rmalayter) Date: Wed, 15 Feb 2012 09:55:17 -0500 (EST) Subject: Can proxy_cache gzip cached content? In-Reply-To: References: Message-ID: bard Wrote: ------------------------------------------------------- > Thanks for the pointers. As I wrote, I'd rather > avoid gzipping in the > backend, but if that's the only option so be it. > There's no reason the "backend" for your caching layer cannot be another nginx server block running on a high port bound to localhost. This high-port server block could do gzip compression, and proxy-pass to the back end with "Accept-Encoding: identity", so the back-end never has to do compression. The backend server will have to use "gzip_http_version 1.0" and "gzip_proxied any" to do compression because it is being proxied from the front-end. There might be a moderate performance impact, but because you're caching at the "frontmost" layer, the number of back-end hits should be small. Also note there may be better options in the latest nginx versions, or by using the gunzip 3rd-party module: http://mdounin.ru/hg/ngx_http_gunzip_filter_module/file/27f057249155/README With the gunzip module, you can configure things so that you always cache compressed data, then only decompress it for the small number of clients that don't support gzip compression. -- RPM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222382,222436#msg-222436 From rainer at ultra-secure.de Wed Feb 15 16:10:49 2012 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 15 Feb 2012 17:10:49 +0100 Subject: nginx-1.1.15 In-Reply-To: <20120215144020.GM67687@mdounin.ru> References: <20120215144020.GM67687@mdounin.ru> Message-ID: <20120215171049.0dc89e29@suse2.iptech.internal> Am Wed, 15 Feb 2012 18:40:20 +0400 schrieb Maxim Dounin : > *) Feature: the "proxy_cookie_domain" and "proxy_cookie_path" > directives. Hi, is there documentation for all the new features of nginx 1.1? Best Regards, Rainer From ru at nginx.com Wed Feb 15 16:39:40 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 15 Feb 2012 20:39:40 +0400 Subject: nginx-1.1.15 In-Reply-To: <20120215171049.0dc89e29@suse2.iptech.internal> References: <20120215144020.GM67687@mdounin.ru> <20120215171049.0dc89e29@suse2.iptech.internal> Message-ID: <20120215163940.GB8483@lo0.su> On Wed, Feb 15, 2012 at 05:10:49PM +0100, Rainer Duffner wrote: > Am Wed, 15 Feb 2012 18:40:20 +0400 > schrieb Maxim Dounin : > > > *) Feature: the "proxy_cookie_domain" and "proxy_cookie_path" > > directives. > > > Hi, > > is there documentation for all the new features of nginx 1.1? Should appear on the web site today or tomorrow. From wtymdjs at gmail.com Wed Feb 15 17:32:30 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 12:32:30 -0500 Subject: Rewriting Base URL when passing Message-ID: I'm trying to setup Django through UWSGI using Nginx. I got the UWSGI pass to work using this function location / { include uwsgi_params; uwsgi_pass 127.0.0.1:9001; } Unfortunately when I visit /django/admin. 
I get an error Page not found (404) Request Method: GET Request URL: http://69.x.x.x/django/admin Using the URLconf defined in Django.urls, Django tried these URL patterns, in this order: ^admin/ How can I have nginx rewrite the url to not pass the /django part? -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Feb 15 17:44:58 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 15 Feb 2012 17:44:58 +0000 Subject: Rewriting Base URL when passing In-Reply-To: References: Message-ID: <87zkckyorp.wl%appa@perusio.net> On 15 Fev 2012 17h32 WET, wtymdjs at gmail.com wrote: > I'm trying to setup Django through UWSGI using Nginx. > > I got the UWSGI pass to work using this function > > location / { > include uwsgi_params; > uwsgi_pass 127.0.0.1:9001; > } > Unfortunately when I visit /django/admin. I get an error > > Page not found (404) Request Method: GET Request URL: > http://69.x.x.x/django/admin Using the URLconf defined in > Django.urls, Django tried these URL patterns, in this order: ^admin/ > > How can I have nginx rewrite the url to not pass the /django part? Mind you, my knowledge of django is within the neighborhood of 0. Probably the easiest way is just to "rewrite" the uwsgi parameters so that the django part is not passed. I have yet to try uWSGI :( --- appa From mike503 at gmail.com Wed Feb 15 17:59:18 2012 From: mike503 at gmail.com (Michael Shadle) Date: Wed, 15 Feb 2012 09:59:18 -0800 Subject: I patched the spnego module In-Reply-To: <20120215074551.GQ23688@Debian-60-squeeze-64-minimal> References: <20120215074551.GQ23688@Debian-60-squeeze-64-minimal> Message-ID: On Tue, Feb 14, 2012 at 11:45 PM, Sven Ludwig wrote: > Hi, > > i only want to announce, that i patched the spnego module, so you can use it as > a 100% replacement to apache2+mod_auth_kerb. > > https://github.com/muhgatus/spnego-http-auth-nginx-module > > I installed in on 3 hosts running now for about 2 weeks without an error. > Perhaps somebody wants to test it too? :) Was this based off off of my work I funded? https://github.com/mike503/spnego-http-auth-nginx-module I just got a blog reply a couple days ago saying someone had got it working, which was the first time I had got a positive reply (since my corporate network seems so complex I can't test it on my own still...) http://michaelshadle.com/2010/01/17/spnego-for-nginx-a-start-at-least It would be great to have a working module for this, there was interest in it when I brought it up a couple years ago, but then nobody helped extend or test it :) I would love to talk offline, maybe there is something you're doing vs. mine (I didn't code it, only sponsor it) but obviously I want the most mature and extensible module out there. >From my understanding if Kerberos fails it's supposed to fall back to digest, but nginx did not support digest yet (only basic), someone once summarized it for me and it is in my notes somewhere. From ne at vbart.ru Wed Feb 15 18:08:22 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 15 Feb 2012 22:08:22 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: Message-ID: <201202152208.22484.ne@vbart.ru> On Wednesday 15 February 2012 21:32:30 adam estes wrote: > I'm trying to setup Django through UWSGI using Nginx. > > I got the UWSGI pass to work using this function > > location / { > include uwsgi_params; > uwsgi_pass 127.0.0.1:9001; > } > Unfortunately when I visit /django/admin. 
I get an error > > Page not found (404) Request Method: GET Request URL: > http://69.x.x.x/django/admin Using the URLconf defined in Django.urls, > Django tried these URL patterns, in this order: ^admin/ > > How can I have nginx rewrite the url to not pass the /django part? You can use "rewrite" directive: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite Or regexp "location" with captures and correspond uwsgi_param setting. http://nginx.org/en/docs/http/ngx_http_core_module.html#location i.e.: location /django/ { location ~ ^/django(?P/admin/.+)$ { uwsgi_param PATH_INFO $adm_path; ... } } wbr, Valentin V. Bartenev From nginx at abraumhal.de Wed Feb 15 18:11:06 2012 From: nginx at abraumhal.de (Sven Ludwig) Date: Wed, 15 Feb 2012 19:11:06 +0100 Subject: I patched the spnego module In-Reply-To: References: <20120215074551.GQ23688@Debian-60-squeeze-64-minimal> Message-ID: <20120215181105.GS23688@Debian-60-squeeze-64-minimal> On 02-15 09:59, Michael Shadle wrote: > On Tue, Feb 14, 2012 at 11:45 PM, Sven Ludwig wrote: > > Hi, > > > > i only want to announce, that i patched the spnego module, so you can use it as > > a 100% replacement to apache2+mod_auth_kerb. > > > > https://github.com/muhgatus/spnego-http-auth-nginx-module > > > > I installed in on 3 hosts running now for about 2 weeks without an error. > > Perhaps somebody wants to test it too? :) > > Was this based off off of my work I funded? > https://github.com/mike503/spnego-http-auth-nginx-module yes, it is. > From my understanding if Kerberos fails it's supposed to fall back to > digest, but nginx did not support digest yet (only basic), someone > once summarized it for me and it is in my notes somewhere. i only added basic auth to it in order if the browser does not support negotiation then it will fall back to basic-auth and checks the given password against kerberos. i do not know how the get the digest auth working. basic auth was simple. i modified the code within a couple of hours. > It would be great to have a working module for this, there was > interest in it when I brought it up a couple years ago, but then > nobody helped extend or test it :) i started to replace apache installations by nginx. so, here is my test result: it works ;) > I would love to talk offline, maybe there is something you're doing > vs. mine (I didn't code it, only sponsor it) but obviously I want the > most mature and extensible module out there. so, you might merge my changes back to your repository on github. From wtymdjs at gmail.com Wed Feb 15 18:53:56 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 13:53:56 -0500 Subject: Rewriting Base URL when passing In-Reply-To: <201202152208.22484.ne@vbart.ru> References: <201202152208.22484.ne@vbart.ru> Message-ID: The problem is that its not going to be just /admin There are around 10 urls defined. I tried location ~ ^/django(?P.*?)$ { uwsgi_param PATH_INFO $django_path; uwsgi_pass 127.0.0.1:9001; include uwsgi_params; } Which according to RegSkinner, would match anything after /django But this did not work. Its still passing the django parth. What do I do? On Wed, Feb 15, 2012 at 1:08 PM, Valentin V. Bartenev wrote: > On Wednesday 15 February 2012 21:32:30 adam estes wrote: > > I'm trying to setup Django through UWSGI using Nginx. > > > > I got the UWSGI pass to work using this function > > > > location / { > > include uwsgi_params; > > uwsgi_pass 127.0.0.1:9001; > > } > > Unfortunately when I visit /django/admin. 
I get an error > > > > Page not found (404) Request Method: GET Request URL: > > http://69.x.x.x/django/admin Using the URLconf defined in Django.urls, > > Django tried these URL patterns, in this order: ^admin/ > > > > How can I have nginx rewrite the url to not pass the /django part? > > You can use "rewrite" directive: > > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite > > Or regexp "location" with captures and correspond uwsgi_param setting. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#location > > i.e.: > > location /django/ { > > location ~ ^/django(?P/admin/.+)$ { > uwsgi_param PATH_INFO $adm_path; > ... > } > > } > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Feb 15 19:49:02 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 15 Feb 2012 23:49:02 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202152208.22484.ne@vbart.ru> Message-ID: <201202152349.02683.ne@vbart.ru> On Wednesday 15 February 2012 22:53:56 adam estes wrote: > The problem is that its not going to be just /admin > > There are around 10 urls defined. > > I tried > > location ~ ^/django(?P.*?)$ { > uwsgi_param PATH_INFO $django_path; > uwsgi_pass 127.0.0.1:9001; > include uwsgi_params; > } > > > Which according to RegSkinner, would match anything after /django > > But this did not work. Its still passing the django parth. What do I do? > Have you looked at your "uwsgi_params" file which you include in the location? So I expect that it sets one more PATH_INFO param to different value. wbr, Valentin V. Bartenev From wtymdjs at gmail.com Wed Feb 15 20:06:06 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 15:06:06 -0500 Subject: Rewriting Base URL when passing In-Reply-To: <201202152349.02683.ne@vbart.ru> References: <201202152208.22484.ne@vbart.ru> <201202152349.02683.ne@vbart.ru> Message-ID: I removed that from both the uwsgi_params and uwsgi_params.default. Still isn't working. On Wed, Feb 15, 2012 at 2:49 PM, Valentin V. Bartenev wrote: > On Wednesday 15 February 2012 22:53:56 adam estes wrote: > > The problem is that its not going to be just /admin > > > > There are around 10 urls defined. > > > > I tried > > > > location ~ ^/django(?P.*?)$ { > > uwsgi_param PATH_INFO $django_path; > > uwsgi_pass 127.0.0.1:9001; > > include uwsgi_params; > > } > > > > > > Which according to RegSkinner, would match anything after /django > > > > But this did not work. Its still passing the django parth. What do I do? > > > > Have you looked at your "uwsgi_params" file which you include in the > location? > So I expect that it sets one more PATH_INFO param to different value. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wtymdjs at gmail.com Wed Feb 15 20:15:51 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 15:15:51 -0500 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202152208.22484.ne@vbart.ru> <201202152349.02683.ne@vbart.ru> Message-ID: It seems to actually be rewriting the url now. 
The issue is with how its doing it when I visit 69.x.x.x./django/admin/ it rewrites it to 69.x.x.x/admin/ which is then processed by nginx again I'm guessing because it loads the IPB admin folder, and not the django admin url like it should if it was being handled by django. On Wed, Feb 15, 2012 at 3:06 PM, adam estes wrote: > I removed that from both the uwsgi_params and uwsgi_params.default. Still > isn't working. > > > On Wed, Feb 15, 2012 at 2:49 PM, Valentin V. Bartenev wrote: > >> On Wednesday 15 February 2012 22:53:56 adam estes wrote: >> > The problem is that its not going to be just /admin >> > >> > There are around 10 urls defined. >> > >> > I tried >> > >> > location ~ ^/django(?P.*?)$ { >> > uwsgi_param PATH_INFO $django_path; >> > uwsgi_pass 127.0.0.1:9001; >> > include uwsgi_params; >> > } >> > >> > >> > Which according to RegSkinner, would match anything after /django >> > >> > But this did not work. Its still passing the django parth. What do I do? >> > >> >> Have you looked at your "uwsgi_params" file which you include in the >> location? >> So I expect that it sets one more PATH_INFO param to different value. >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ne at vbart.ru Wed Feb 15 20:26:55 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 00:26:55 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202152349.02683.ne@vbart.ru> Message-ID: <201202160026.55826.ne@vbart.ru> On Thursday 16 February 2012 00:06:06 adam estes wrote: > I removed that from both the uwsgi_params and uwsgi_params.default. Still > isn't working. > Did you reload your nginx? Are you sure that it's not your browser cache? Could you provide debug log? http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From ne at vbart.ru Wed Feb 15 20:35:08 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 00:35:08 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: Message-ID: <201202160035.09016.ne@vbart.ru> On Thursday 16 February 2012 00:15:51 adam estes wrote: > It seems to actually be rewriting the url now. The issue is with how its > doing it > > when I visit 69.x.x.x./django/admin/ > > it rewrites it to 69.x.x.x/admin/ > > which is then processed by nginx again I'm guessing because it loads the > IPB admin folder, and not the django admin url like it should if it was > being handled by django. > If you didn't set any "rewrite" then nginx doesn't rewrite url and it doesn't process it again. The "location" and "uwsgi_param" directives can't do that. Could you show your full config? wbr, Valentin V. Bartenev From wtymdjs at gmail.com Wed Feb 15 20:40:56 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 15:40:56 -0500 Subject: Rewriting Base URL when passing In-Reply-To: <201202160035.09016.ne@vbart.ru> References: <201202160035.09016.ne@vbart.ru> Message-ID: worker_processes 1; user nginx nginx; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; # I have to set min length to 0 and http version to 1.0 or it won't compress # the XML-RPC (SCGI) responses. Those responses can be quite large if you're # using many torrent files. 
gzip on; gzip_min_length 0; gzip_http_version 1.0; gzip_types text/plain text/xml application/xml application/json text/css application/x-javascript text/javascript$ server { listen 80; #error_log /var/log/nginx/error.log error; server_name localhost; location ~ /\.ht { deny all; } location ~ /\.svn { deny all; } location / { root /home/sites/forum/; index index.php index.html index.htm; } location ~ \.php$ { root "/home/sites/forum/"; fastcgi_pass unix:/etc/phpcgi/php-cgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ ^/django(?P.*?)$ { uwsgi_param PATH_INFO $django_path; uwsgi_pass 127.0.0.1:9001; include uwsgi_params; } # location ~ ^/RPC00001$ { # include scgi_params; # scgi_pass unix:/home/rtorrent/rtorrent/session/rpc.socket; # auth_basic "idk"; # auth_basic_user_file "/usr/local/nginx/rutorrent_passwd_rtorrent"; # } } server { listen 443; server_name localhost; auth_basic "My ruTorrent web site"; auth_basic_user_file "/usr/local/nginx/rutorrent_passwd"; ssl on; ssl_certificate /usr/local/nginx/rutorrent.pem; ssl_certificate_key /usr/local/nginx/rutorrent.pem; location ~ ^/rutorrent/(?:share|conf) { deny all; } location ~ /\.ht { deny all; } location / { root /var/rutorrent; index index.php index.html index.htm; } location ~ \.php$ { root "/var/rutorrent"; fastcgi_pass unix:/etc/phpcgi/php-cgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # location ~ ^/RPC00001$ { # include scgi_params; # scgi_pass unix:/home/rtorrent/rtorrent/session/rpc.socket; # auth_basic "My ruTorrent web site"; # auth_basic_user_file "/usr/local/nginx/rutorrent_passwd_rtorrent"; # } } } And I don't think it was compiled with debugging support :| On Wed, Feb 15, 2012 at 3:35 PM, Valentin V. Bartenev wrote: > On Thursday 16 February 2012 00:15:51 adam estes wrote: > > It seems to actually be rewriting the url now. The issue is with how its > > doing it > > > > when I visit 69.x.x.x./django/admin/ > > > > it rewrites it to 69.x.x.x/admin/ > > > > which is then processed by nginx again I'm guessing because it loads the > > IPB admin folder, and not the django admin url like it should if it was > > being handled by django. > > > > If you didn't set any "rewrite" then nginx doesn't rewrite url and it > doesn't > process it again. The "location" and "uwsgi_param" directives can't do > that. > > Could you show your full config? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wtymdjs at gmail.com Wed Feb 15 20:42:39 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 15:42:39 -0500 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202160035.09016.ne@vbart.ru> Message-ID: If I visit 69.x.x.x/django The error page has changed to Page not found?(404) Request Method:GET Request URL:http://69.x.x.x/ When it used to say the request url was 69.x.x.x/django On Wed, Feb 15, 2012 at 3:40 PM, adam estes wrote: > > worker_processes 1; > user nginx nginx; > > events { > ? ? ? ? worker_connections 1024; > } > > http { > ? ? ? ? include mime.types; > ? ? ? ? default_type application/octet-stream; > ? ? ? ? sendfile on; > ? ? ? ? keepalive_timeout 65; > > ? ? ? ? 
# I have to set min length to 0 and http version to 1.0 or it won't compress > ? ? ? ? # the XML-RPC (SCGI) responses. Those responses can be quite large if you're > ? ? ? ? # using many torrent files. > ? ? ? ? gzip on; > ? ? ? ? gzip_min_length 0; > ? ? ? ? gzip_http_version 1.0; > ? ? ? ? gzip_types text/plain text/xml application/xml application/json text/css application/x-javascript text/javascript$ > > > ? ? ? ? server { > ? ? ? ? ? ? ? ? listen 80; > ? ? ? ? ? ? ? ? #error_log ? /var/log/nginx/error.log error; > ? ? ? ? ? ? ? ? server_name localhost; > > ? ? ? ? ? ? ? ? location ~ /\.ht { > ? ? ? ? ? ? ? ? ? ? ? ? deny all; > ? ? ? ? ? ? ? ? } > > ? ? ? ? ? ? ? ? location ~ /\.svn { > ? ? ? ? ? ? ? ? ? ? ? ? deny all; > ? ? ? ? ? ? ? ? } > > ? ? ? ? ? ? ? ? location / { > ? ? ? ? ? ? ? ? ? ? ? ? root /home/sites/forum/; > ? ? ? ? ? ? ? ? ? ? ? ? index index.php index.html index.htm; > ? ? ? ? ? ? ? ? } > ? location ~ \.php$ { > ? ? ? ? ? ? ? ? ? ? ? ? root "/home/sites/forum/"; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_pass unix:/etc/phpcgi/php-cgi.socket; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_index index.php; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > ? ? ? ? ? ? ? ? ? ? ? ? include fastcgi_params; > ? ? ? ? ? ? ? ? } > ? ? ? ? ? ? ? ? location ~ ^/django(?P.*?)$ { > ? ? ? ? ? ? ? ? ? ? ? ? uwsgi_param PATH_INFO $django_path; > ? ? ? ? ? ? ? ? ? ? ? ? uwsgi_pass 127.0.0.1:9001; > ? ? ? ? ? ? ? ? ? ? ? ? include uwsgi_params; > ? ? ? ? ? ? ? ? } > > > # ? ? ? ? ? ? ? location ~ ^/RPC00001$ { > # ? ? ? ? ? ? ? ? ? ? ? include scgi_params; > # ? ? ? ? ? ? ? ? ? ? ? scgi_pass unix:/home/rtorrent/rtorrent/session/rpc.socket; > # ? ? ? ? ? ? ? ? ? ? ? auth_basic "idk"; > # ? ? ? ? ? ? ? ? ? ? ? auth_basic_user_file "/usr/local/nginx/rutorrent_passwd_rtorrent"; > # ? ? ? ? ? ? ? } > ? ? ? ? } > ? ? ? ? server { > ? ? ? ? ? ? ? ? listen 443; > ? ? ? ? ? ? ? ? server_name localhost; > ? ? ? ? ? ? ? ? auth_basic "My ruTorrent web site"; > ? ? ? ? ? ? ? ? auth_basic_user_file "/usr/local/nginx/rutorrent_passwd"; > > ? ? ? ? ? ? ? ? ssl on; > ? ? ? ? ? ? ? ? ssl_certificate /usr/local/nginx/rutorrent.pem; > ? ? ? ? ? ? ? ? ssl_certificate_key /usr/local/nginx/rutorrent.pem; > > ? ? ? ? ? ? ? ? location ~ ^/rutorrent/(?:share|conf) { > ? ? ? ? ? ? ? ? ? ? ? ? deny all; > ? ? ? ? ? ? ? ? } > > ? ? ? ? ? ? ? ? location ~ /\.ht { > ? ? ? ? ? ? ? ? ? ? ? ? deny all; > } > > ? ? ? ? ? ? ? ? location / { > ? ? ? ? ? ? ? ? ? ? ? ? root /var/rutorrent; > ? ? ? ? ? ? ? ? ? ? ? ? index index.php index.html index.htm; > ? ? ? ? ? ? ? ? } > > ? ? ? ? ? ? ? ? location ~ \.php$ { > ? ? ? ? ? ? ? ? ? ? ? ? root "/var/rutorrent"; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_pass unix:/etc/phpcgi/php-cgi.socket; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_index index.php; > ? ? ? ? ? ? ? ? ? ? ? ? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > ? ? ? ? ? ? ? ? ? ? ? ? include fastcgi_params; > ? ? ? ? ? ? ? ? } > > # ? ? ? ? ? ? ? location ~ ^/RPC00001$ { > # ? ? ? ? ? ? ? ? ? ? ? include scgi_params; > # ? ? ? ? ? ? ? ? ? ? ? scgi_pass unix:/home/rtorrent/rtorrent/session/rpc.socket; > # ? ? ? ? ? ? ? ? ? ? ? auth_basic "My ruTorrent web site"; > # ? ? ? ? ? ? ? ? ? ? ? auth_basic_user_file "/usr/local/nginx/rutorrent_passwd_rtorrent"; > # ? ? ? ? ? ? ? } > ? ? ? ? } > } > > And I don't think it was compiled with debugging support :| > > On Wed, Feb 15, 2012 at 3:35 PM, Valentin V. 
Bartenev wrote: >> >> On Thursday 16 February 2012 00:15:51 adam estes wrote: >> > It seems to actually be rewriting the url now. The issue is with how its >> > doing it >> > >> > when I visit 69.x.x.x./django/admin/ >> > >> > it rewrites it to 69.x.x.x/admin/ >> > >> > which is then processed by nginx again I'm guessing because it loads the >> > IPB admin folder, and not the django admin url like it should if it was >> > being handled by django. >> > >> >> If you didn't set any "rewrite" then nginx doesn't rewrite url and it doesn't >> process it again. The "location" and "uwsgi_param" directives can't do that. >> >> Could you show your full config? >> >> ?wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > From ne at vbart.ru Wed Feb 15 21:18:44 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 01:18:44 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: Message-ID: <201202160118.44957.ne@vbart.ru> On Thursday 16 February 2012 00:42:39 adam estes wrote: > If I visit 69.x.x.x/django > > The error page has changed to > > Page not found (404) > > Request Method:GET > Request URL:http://69.x.x.x/ > It's what your django says. And "Request URL" seems to be correct. wbr, Valentin V. Bartenev From wtymdjs at gmail.com Wed Feb 15 21:23:48 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 16:23:48 -0500 Subject: Rewriting Base URL when passing In-Reply-To: <201202160118.44957.ne@vbart.ru> References: <201202160118.44957.ne@vbart.ru> Message-ID: Whenever I type /django/admin The url changes to /admin and the ipb admin login is displayed (Because admin is in the root folder, not in the django folder. if I type /django/admin/ The django admin page is displayed, but clicking login changes it to /admin and the ipb login page is displayed. On Wed, Feb 15, 2012 at 4:18 PM, Valentin V. Bartenev wrote: > On Thursday 16 February 2012 00:42:39 adam estes wrote: >> If I visit 69.x.x.x/django >> >> The error page has changed to >> >> Page not found (404) >> >> Request Method:GET >> Request URL:http://69.x.x.x/ >> > > It's what your django says. And "Request URL" seems to be correct. > > ?wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ne at vbart.ru Wed Feb 15 21:38:39 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 01:38:39 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202160118.44957.ne@vbart.ru> Message-ID: <201202160138.39597.ne@vbart.ru> On Thursday 16 February 2012 01:23:48 adam estes wrote: > Whenever I type /django/admin > > The url changes to /admin and the ipb admin login is displayed > (Because admin is in the root folder, not in the django folder. > > if I type /django/admin/ > > The django admin page is displayed, but clicking login changes it to > /admin and the ipb login page is displayed. > Hm... I see. But what do you expect? Now, your django doesn't know about "/django/" prefix in your url and naturally generates all links without it. When you click to "/admin" - your browser goes to "/admin" and nginx receives a request to "/admin" (not /django/admin). wbr, Valentin V. 
Bartenev From wtymdjs at gmail.com Wed Feb 15 21:40:56 2012 From: wtymdjs at gmail.com (adam estes) Date: Wed, 15 Feb 2012 16:40:56 -0500 Subject: Rewriting Base URL when passing In-Reply-To: <201202160138.39597.ne@vbart.ru> References: <201202160118.44957.ne@vbart.ru> <201202160138.39597.ne@vbart.ru> Message-ID: Hmm. then that wouldn't resolve my problem. How can I host my django installation on another port? I tried creating a new server object and just added in listen 8000 and the location thing, and it didn't respond to anything I sent on that port, despite linux saying nginx was bound to that port. On Wed, Feb 15, 2012 at 4:38 PM, Valentin V. Bartenev wrote: > On Thursday 16 February 2012 01:23:48 adam estes wrote: >> Whenever I type /django/admin >> >> The url changes to /admin and the ipb admin login is displayed >> (Because admin is in the root folder, not in the django folder. >> >> if I type /django/admin/ >> >> The django admin page is displayed, but clicking login changes it to >> /admin and the ipb login page is displayed. >> > > Hm... I see. But what do you expect? Now, your django doesn't know about > "/django/" prefix in your url and naturally generates all links without it. > > When you click to "/admin" - your browser goes to "/admin" and nginx receives a > request to "/admin" (not /django/admin). > > > ?wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ne at vbart.ru Wed Feb 15 22:13:26 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 02:13:26 +0400 Subject: Rewriting Base URL when passing In-Reply-To: References: <201202160138.39597.ne@vbart.ru> Message-ID: <201202160213.26898.ne@vbart.ru> On Thursday 16 February 2012 01:40:56 adam estes wrote: > Hmm. then that wouldn't resolve my problem. You can try to set django.root to "/django/" (it's "/" by default) in your django settings or uwsgi deploy .py script, so probably you don't need any magic with urls. > How can I host my django installation on another port? > > I tried creating a new server object and just added in listen 8000 and > the location thing, and it didn't respond to anything I sent on that > port, despite linux saying nginx was bound to that port. > It's strange. Firewall? Bound to wrong interface? wbr, Valentin V. Bartenev From ne at vbart.ru Wed Feb 15 22:57:02 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 16 Feb 2012 02:57:02 +0400 Subject: Rewriting Base URL when passing In-Reply-To: <201202160213.26898.ne@vbart.ru> References: <201202160213.26898.ne@vbart.ru> Message-ID: <201202160257.02332.ne@vbart.ru> On Thursday 16 February 2012 02:13:26 Valentin V. Bartenev wrote: > On Thursday 16 February 2012 01:40:56 adam estes wrote: > > Hmm. then that wouldn't resolve my problem. > > You can try to set django.root to "/django/" (it's "/" by default) in your > django settings or uwsgi deploy .py script, so probably you don't need any > magic with urls. > Or, probably: uwsgi_param PATH_INFO $document_uri; uwsgi_param SCRIPT_NAME /django; ... wbr, Valentin V. Bartenev From nginxyz at mail.ru Thu Feb 16 04:16:03 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 16 Feb 2012 08:16:03 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: <20120215144958.GQ67687@mdounin.ru> References: <20120215144958.GQ67687@mdounin.ru> Message-ID: 15 ??????? 2012, 18:50 ?? Maxim Dounin : > Hello! 
> > On Wed, Feb 15, 2012 at 08:56:49AM -0500, Maxim Khitrov wrote: > > > Hello Maxim, > > > > Back in 2010 you wrote that it's not likely that your > > http_auth_request_module would make it into nginx core. I'm curious if > > anything has changed over the past two years? > > > > It's not that compiling this module into nginx is a problem > > (especially on FreeBSD), but I think a lot of people are inherently > > weary of depending on 3rd-party modules, since there is no guarantee > > of continued support. > > > > What do you think about adding your module to the main nginx repository? > > There are no immediate plans, but this may happen somewhere in the > future. Hello fellow Maxims and others, I took a closer look at the auth_request module source code today and realized that I was partially wrong about auth_request authorization subrequests causing the entire requested file to be retrieved from the backend server. I apologize for the confusion my posts may have caused. Due to sr->header_only being set to 1, the connection to the backend server is terminated from within ngx_http_upstream_send_response() as soon as the HTTP request status code is received. So the entire file is (usually) not retrieved, but due to the fact that the connection is prematurely closed, it may take a while for the backend server to actually detect that the frontend server has closed the connection and that it should stop sending. I have done some testing to see what kind of overhead this kind of abrupt connection termination causes, and according to my test results, auth_request module authorization requests always generate at least 65 kb worth of traffic (including the initial TCP three-way handshake and the final four-way FIN/ACK connection termination packets). Under heavy load, the amount of traffic generated mainly by the backend server while sending the requested file in vain exceeded 500 kb, so it's obvious that this might lead to significant overhead when large files are sent in response to auth_request authorization subrequests. Moreover, each file on the backend server that is accessed this way has to be opened and prepared for sending, which causes disk overhead and buffer allocation because the backend server treats those requests just like all the other GET method requests. Under heavy load, this overhead can be significant. Another minor problem is that logs on the backend server tend to get filled up with constant error messages about connections being closed prematurely, writev() failures etc. All of these issues can be avoided simply by using HEAD method requests for authorization subrequests. According to my test results, HEAD method authorization subrequests generate no more than 1310 bytes worth of traffic (including the initial TCP three-way handshake and the final four-way FIN/ACK connection termination packets). GET method authorization subrequests, on the other hand, generate at best 50 times and at worst 400 times more traffic than HEAD method requests, so files smaller than 64 kb actually do get retrieved twice from the backend server on each request, as well as files smaller than 200 kb under heavy load. 
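To make the setup concrete, a subrequest location of the kind described here might look roughly as follows - the upstream name, URIs and the protected location are placeholders, and this assumes the authorization subrequest should check the same URI on the backend; the only part specific to this discussion is the proxy_method line:

    location /private/ {
        auth_request /auth;                  # access-phase authorization subrequest
        proxy_pass http://backend;           # placeholder upstream
    }

    location = /auth {
        internal;
        proxy_method HEAD;                   # headers-only authorization subrequest
        proxy_pass http://backend$request_uri;
        proxy_pass_request_body off;         # the check needs no request body
        proxy_set_header Content-Length "";
    }
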
HEAD method request responses, on the other hand, remain the same size even under heavy load, and the upstream / backend server always immediately terminates the connection on its side after sending its (usually) single-packet response, so there is also no additional disk overhead, only the size of the requested file is checked, and no additional buffer allocation is done, while sr->header_only on the frontend server can be safely set to 0, which also makes it possible for the auth_request module to be extended to make the use of the proxy_cache and proxy_store directives possible even for authorization requests because completed upstream requests would no longer cause response-buffering temporary files to be closed. Since HEAD method requests also go through the same access phase checking as the GET method requests, they are also a valid means of checking whether an actual GET method request would be allowed, unless different responses are configured for each method. The best thing about it all is that you can make the auth_request module use HEAD method authorization requests by adding the following directive to the auth_request authorization subrequest location block: proxy_method HEAD; I have also modified the auth_request module to use HEAD method authorization subrequests by default. This setting can be overridden in the configuration file by using the proxy_method directive, of course. You can find my auth_request module patch here: https://nginxyzpro.berlios.de/patch-head.ngx_http_auth_request_module.c.20120215.diff SIZE (patch-head.ngx_http_auth_request_module.c.20120215.diff) = 5196 bytes SHA256 (patch-head.ngx_http_auth_request_module.c.20120215.diff) = \ 6d163ec9e11a06bcadd8395042e0a6ef1dc2dfbe0bbfab4cb9d0c4e73e373f75 Any comments will be appreciated. Max From nginx-forum at nginx.us Thu Feb 16 05:33:50 2012 From: nginx-forum at nginx.us (LetsPlay) Date: Thu, 16 Feb 2012 00:33:50 -0500 (EST) Subject: Help with debugging - stall on response size is a symptom but what is the cause? In-Reply-To: <8758fff093b4402245650ec5f4d3c453.NginxMailingListEnglish@forum.nginx.org> References: <8758fff093b4402245650ec5f4d3c453.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7f0a064c2f36be2bdc9a0876b36cee99.NginxMailingListEnglish@forum.nginx.org> Maxim, That looks like a really interesting suggestion, thank you, I would have had no idea. When I have figured out how to do that with my setup I'll report back... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222240,222470#msg-222470 From maxim at nginx.com Thu Feb 16 07:38:44 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 16 Feb 2012 11:38:44 +0400 Subject: nginx-1.1.15 In-Reply-To: <20120215163940.GB8483@lo0.su> References: <20120215144020.GM67687@mdounin.ru> <20120215171049.0dc89e29@suse2.iptech.internal> <20120215163940.GB8483@lo0.su> Message-ID: <4F3CB284.9070704@nginx.com> On 2/15/12 8:39 PM, Ruslan Ermilov wrote: > On Wed, Feb 15, 2012 at 05:10:49PM +0100, Rainer Duffner wrote: >> Am Wed, 15 Feb 2012 18:40:20 +0400 >> schrieb Maxim Dounin: >> >>> *) Feature: the "proxy_cookie_domain" and "proxy_cookie_path" >>> directives. >> >> >> Hi, >> >> is there documentation for all the new features of nginx 1.1? > > Should appear on the web site today or tomorrow. 
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/ From nginxyz at mail.ru Thu Feb 16 09:08:58 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Thu, 16 Feb 2012 13:08:58 +0400 Subject: Bug report: missing SCARCE string in nginx-1.1.15/src/http/ngx_http_file_cache.c [patch] Message-ID: Hello, the value of the NGX_HTTP_CACHE_SCARCE cache status is defined in nginx-1.1.15/src/http/ngx_http_cache.h, but unlike the other cache status strings, it's missing from nginx-1.1.15/src/http/ngx_http_file_cache.c. The function ngx_http_upstream_cache_status() in nginx-1.1.15/src/http/ngx_http_upstream.c references the status strings directly as ngx_http_cache_status[n].len, so with the SCARCE cache status string missing, this is a segmentation violation waiting to happen. Here's the patch to fix the problem: --- src/http/ngx_http_file_cache.c.orig 2012-02-16 00:18:21.000000000 -0800 +++ src/http/ngx_http_file_cache.c 2012-02-16 00:25:00.000000000 -0800 @@ -53,7 +53,8 @@ ngx_string("EXPIRED"), ngx_string("STALE"), ngx_string("UPDATING"), - ngx_string("HIT") + ngx_string("HIT"), + ngx_string("SCARCE") }; --- Max From mdounin at mdounin.ru Thu Feb 16 10:07:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Feb 2012 14:07:58 +0400 Subject: Bug report: missing SCARCE string in nginx-1.1.15/src/http/ngx_http_file_cache.c [patch] In-Reply-To: References: Message-ID: <20120216100758.GX67687@mdounin.ru> Hello! On Thu, Feb 16, 2012 at 01:08:58PM +0400, Max wrote: > > Hello, > > the value of the NGX_HTTP_CACHE_SCARCE cache status is defined in > nginx-1.1.15/src/http/ngx_http_cache.h, but unlike the other cache > status strings, it's missing from nginx-1.1.15/src/http/ngx_http_file_cache.c. > > The function ngx_http_upstream_cache_status() in > nginx-1.1.15/src/http/ngx_http_upstream.c references the status > strings directly as ngx_http_cache_status[n].len, so with the > SCARCE cache status string missing, this is a segmentation violation > waiting to happen. > > Here's the patch to fix the problem: > > > --- src/http/ngx_http_file_cache.c.orig 2012-02-16 00:18:21.000000000 -0800 > +++ src/http/ngx_http_file_cache.c 2012-02-16 00:25:00.000000000 -0800 > @@ -53,7 +53,8 @@ > ngx_string("EXPIRED"), > ngx_string("STALE"), > ngx_string("UPDATING"), > - ngx_string("HIT") > + ngx_string("HIT"), > + ngx_string("SCARCE") > }; > --- The NGX_HTTP_CACHE_SCARCE value can't appear in u->cache_status, and hence there is no real problem. It's a special value used by cache to inform upstream that there is no cached response (i.e. MISS cache status) and cacheing should be enabled due to min_uses preventing it. Maxim Dounin From piotr.sikora at frickle.com Thu Feb 16 12:38:04 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Thu, 16 Feb 2012 13:38:04 +0100 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: References: <20120215144958.GQ67687@mdounin.ru> Message-ID: Max, please keep the discussion in single thread. > Any comments will be appreciated. You're delusional, don't try to fix things that you don't understand. Your whole reasoning is based on a fact that you think that authorization subrequest fetches the protected file/page which client wants to access, but that's not the case. 
Authorization subrequest should access special authorization endpoint (or database or anything else you can think of) and either grant access and let the main request access the protected file/page or not. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From mdounin at mdounin.ru Thu Feb 16 16:07:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Feb 2012 20:07:18 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: References: <20120215144958.GQ67687@mdounin.ru> Message-ID: <20120216160718.GA67687@mdounin.ru> Hello! On Thu, Feb 16, 2012 at 08:16:03AM +0400, Max wrote: > > 15 ??????? 2012, 18:50 ?? Maxim Dounin : > > Hello! > > > > On Wed, Feb 15, 2012 at 08:56:49AM -0500, Maxim Khitrov wrote: > > > > > Hello Maxim, > > > > > > Back in 2010 you wrote that it's not likely that your > > > http_auth_request_module would make it into nginx core. I'm curious if > > > anything has changed over the past two years? > > > > > > It's not that compiling this module into nginx is a problem > > > (especially on FreeBSD), but I think a lot of people are inherently > > > weary of depending on 3rd-party modules, since there is no guarantee > > > of continued support. > > > > > > What do you think about adding your module to the main nginx repository? > > > > There are no immediate plans, but this may happen somewhere in the > > future. > > Hello fellow Maxims and others, > > I took a closer look at the auth_request module source code today and > realized that I was partially wrong about auth_request authorization > subrequests causing the entire requested file to be retrieved from the > backend server. I apologize for the confusion my posts may have > caused. Due to sr->header_only being set to 1, the connection to the > backend server is terminated from within ngx_http_upstream_send_response() > as soon as the HTTP request status code is received. Yes. This is basically a workaround for cases when people unintentionally return data to auth subrequest, it makes sure that no unexpected data are sent to client in any case. [...] > All of these issues can be avoided simply by using HEAD method > requests for authorization subrequests. According to my Using HEAD is not an option in auth_request itself, as it doesn't know how auth subrequest will be handled. E.g. it may be passed to fastcgi, or even hit static file. If you handle auth subrequests with proxy_pass, you may use proxy_set_method to issue HEAD requests to backend. Or you may use correct auth endpoint which doesn't return unneeded data. [...] > I have also modified the auth_request module to use HEAD method > authorization subrequests by default. This setting can be > overridden in the configuration file by using the proxy_method > directive, of course. > > You can find my auth_request module patch here: > > https://nginxyzpro.berlios.de/patch-head.ngx_http_auth_request_module.c.20120215.diff The patch is wrong by design, see above. Moreover, it makes it impossible to correctly pass original request method to auth endpoint. 
Maxim Dounin From nginx-forum at nginx.us Thu Feb 16 16:52:43 2012 From: nginx-forum at nginx.us (fish_ka_praha) Date: Thu, 16 Feb 2012 11:52:43 -0500 (EST) Subject: Memory leaks nginx with lua memc upstream_keepalive modules/ Message-ID: [root at VKRT083 /home/fish]# uname -a FreeBSD VKRT083.local 8.2-RELEASE-p3 FreeBSD 8.2-RELEASE-p3 #0: Tue Sep 27 18:45:57 UTC 2011 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 [root at VKRT083 /home/fish]# /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.0.12 built by gcc 4.2.1 20070719 [FreeBSD] configure arguments: --add-module=../chaoslawful-lua-nginx-module-7cb4e09/ --add-module=../ngx_http_upstream_keepalive-d9ac9ad67f45 --add-module=../simpl-ngx_devel_kit-24202b4/ --add-module=../agentzh-memc-nginx-module-4007350/ [root at VKRT083 /home/fish]# ps axuww|grep ngi -- skip some strings -- root 14076 0.0 0.0 13488 2620 ?? Is 12:45PM 0:00.02 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/new.conf (nginx) fish 14077 0.0 0.0 29872 7624 ?? S 12:45PM 1:31.10 nginx: worker process (nginx) fish 14078 0.0 9.1 2702512 2287312 ?? S 12:45PM 1:56.73 nginx: worker process (nginx) fish 14080 0.0 0.0 29872 7772 ?? S 12:45PM 1:30.19 nginx: worker process (nginx) fish 14081 0.0 8.6 2622640 2173000 ?? S 12:45PM 1:52.28 nginx: worker process (nginx) fish 14082 0.0 0.0 31920 9888 ?? S 12:45PM 1:40.60 nginx: worker process (nginx) fish 14083 0.0 0.0 31920 9948 ?? S 12:45PM 1:38.71 nginx: worker process (nginx) fish 14085 0.0 0.0 29872 7640 ?? S 12:45PM 1:48.66 nginx: worker process (nginx) fish 14086 0.0 5.9 1520816 1480176 ?? S 12:45PM 1:55.11 nginx: worker process (nginx) fish 14087 0.0 0.0 31920 9864 ?? S 12:45PM 1:36.69 nginx: worker process (nginx) fish 14089 0.0 0.0 29872 7752 ?? S 12:45PM 1:44.56 nginx: worker process (nginx) fish 14091 0.0 0.0 29872 6148 ?? S 12:45PM 1:47.95 nginx: worker process (nginx) fish 14094 0.0 0.0 29872 7200 ?? S 12:45PM 1:44.58 nginx: worker process (nginx) fish 14099 0.0 14.4 4895920 3620924 ?? S 12:45PM 1:50.18 nginx: worker process (nginx) fish 14103 0.0 0.0 31920 8576 ?? S 12:45PM 1:40.24 nginx: worker process (nginx) fish 14104 0.0 0.0 29872 7864 ?? S 12:45PM 1:45.71 nginx: worker process (nginx) fish 14106 0.0 0.0 29872 7824 ?? S 12:45PM 1:36.71 nginx: worker process (nginx) fish 14107 0.0 0.0 29872 6752 ?? 
S 12:45PM 1:46.78 nginx: worker process (nginx) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222489,222489#msg-222489 From nginx-forum at nginx.us Thu Feb 16 16:54:04 2012 From: nginx-forum at nginx.us (fish_ka_praha) Date: Thu, 16 Feb 2012 11:54:04 -0500 (EST) Subject: Memory leaks nginx with lua memc upstream_keepalive modules/ In-Reply-To: References: Message-ID: <7815e1a9ea0b51b185d8771755cdcafa.NginxMailingListEnglish@forum.nginx.org> head of nginx.conf [root at VKRT083 /home/fish]# cat /usr/local/nginx/conf/new.conf user fish; worker_processes 32; pid logs/nginxnew.pid; #worker_priority -5; worker_rlimit_nofile 32000; events { worker_connections 32000; use kqueue; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_names_hash_bucket_size 512; sendfile on; tcp_nopush on; tcp_nodelay on; gzip on; gzip_comp_level 1; gzip_min_length 1000; gzip_proxied any; gzip_disable "MSIE [1-6]\."; gzip_types application/x-javascript text/plain application/xml; server_tokens off; keepalive_timeout 15; ignore_invalid_headers on; reset_timedout_connection on; upstream backend {server 192.168.10.5:11211; keepalive 512 single;} upstream tycoon0 {server 192.168.10.6:11111; keepalive 512 single;} upstream tycoon1 {server 192.168.10.7:11111; keepalive 256 single;} Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222489,222490#msg-222490 From batistuta.ar at hush.com Thu Feb 16 17:55:36 2012 From: batistuta.ar at hush.com (batistuta) Date: Thu, 16 Feb 2012 17:55:36 +0000 Subject: geoip+php Message-ID: <20120216175536.3F4EC10E2D8@smtp.hushmail.com> Hi. I'd want to secure some locations with basic GeoIP like this: /etc/nginx/sites-enabled/default_server upstream error { server localhost:9222; } server { root /var/www/htdocs; server_name 10.10.10.10; listen 80; include /etc/nginx/fastcgi_php; location / { index index.php; } error_page 403 /403.html; location = /403.html { root /var/www/nginx-default; } location ^~ /test { index index.php; if ($geoip_country_code = AR) { fastcgi_pass unix:/var/run/www/php.sock; } if ($geoip_country_code != AR) { fastcgi_pass error; return 403; } fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } } It's working but images are not displaying. If I switch my location to ^~ /test.php$ my GeoIP stops working. :D Any suggestions? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Thu Feb 16 19:18:09 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 16 Feb 2012 23:18:09 +0400 Subject: nginx documentation status Message-ID: <20120216191809.GA96270@lo0.su> This is to let you all know that we've finished translating original Russian documentation into English, and now start the process of actively updating it to bring it up to date. So, please check this before you look anywhere else: http://nginx.org/en/docs/ From nginx-forum at nginx.us Thu Feb 16 21:57:37 2012 From: nginx-forum at nginx.us (white_gecko) Date: Thu, 16 Feb 2012 16:57:37 -0500 (EST) Subject: UserDir In-Reply-To: References: Message-ID: <82a22858fd94e0b455c2e88c2c416c72.NginxMailingListEnglish@forum.nginx.org> Hello, I have a very similar problem. I want to use userdirs as in apache and as described here: http://wiki.nginx.org/UserDir and use php with fast-cgi in these userdirs. 
I've tried it with this: location ~ ^/~(.+?)(/.*)?$ { alias /home/$1/public_html$2; index index.html index.htm; autoindex on; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } } But it get empty pages, when I try to call a php-file. How can I get php files with match this location to be forwarded to fast-cgi? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221861,222506#msg-222506 From ebade at mathbiol.org Thu Feb 16 22:04:42 2012 From: ebade at mathbiol.org (Bade Iriabho) Date: Thu, 16 Feb 2012 16:04:42 -0600 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files Message-ID: Hello, Following instructions on http://nginx.org/en/download.html, I tried to install NginX on CentOS 6 and it failed. See the three approaches I used below. First Try +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ $ rpm -Uvh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm Retrieving http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, key ID 7bd9bf62 error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed Second Try +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ $ nano /etc/yum.repos.d/CentOS-Nginx.repo # Type the following [nginx] name=nginx repo baseurl=http://nginx.org/packages/centos/6/$basearch/ gpgcheck=0 enabled=1 $ yum install nginx Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: dist1.800hosting.com * extras: centos.mirror.lstn.net * updates: mirror.raystedman.net base | 1.1 kB 00:00 c5-testing | 951 B 00:00 extras | 2.1 kB 00:00 nginx | 1.3 kB 00:00 nginx/primary | 2.6 kB 00:00 http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum Trying other mirror. nginx/primary | 2.6 kB 00:00 http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum Trying other mirror. Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more mirrors to try. Third Try +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ $ wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm --2012-02-16 15:55:30-- http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm Resolving nginx.org... 206.251.255.63 Connecting to nginx.org|206.251.255.63|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 325076 (317K) [application/x-redhat-package-manager] Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' 100%[===================================================================================================================================================================================================>] 325,076 944K/s in 0.3s 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' saved [325076/325076] $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 signature: BAD, key ID 7bd9bf62 error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ So can someone look into this, I am not sure who to ask. Regards, Bade I. -------------- next part -------------- An HTML attachment was scrubbed... URL: From giamteckchoon at gmail.com Thu Feb 16 22:17:55 2012 From: giamteckchoon at gmail.com (Teck Choon Giam) Date: Fri, 17 Feb 2012 06:17:55 +0800 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 6:04 AM, Bade Iriabho wrote: > Hello, > > Following instructions on http://nginx.org/en/download.html, I tried to > install NginX on CentOS 6 and it failed. See the three approaches I used > below. > > First Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ rpm -Uvh > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Retrieving > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, key ID > 7bd9bf62 > error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed Did you try: # rpm -vvv --rebuilddb # yum clean all # yum localinstall --nogpgcheck http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm # yum -y install nginx Thanks. Kindest regards, Giam Teck Choon > > Second Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ nano /etc/yum.repos.d/CentOS-Nginx.repo > # Type the following > [nginx] > name=nginx repo > baseurl=http://nginx.org/packages/centos/6/$basearch/ > gpgcheck=0 > enabled=1 > > $ yum install nginx > Loaded plugins: fastestmirror > Loading mirror speeds from cached hostfile > ?* base: dist1.800hosting.com > ?* extras: centos.mirror.lstn.net > ?* updates: mirror.raystedman.net > base > | 1.1 kB???? 00:00 > c5-testing > |? 951 B???? 00:00 > extras > | 2.1 kB???? 00:00 > nginx > | 1.3 kB???? 00:00 > nginx/primary > | 2.6 kB???? 00:00 > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno > -3] Error performing checksum > Trying other mirror. > nginx/primary > | 2.6 kB???? 00:00 > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno > -3] Error performing checksum > Trying other mirror. > Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more > mirrors to try. 
> > Third Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ wget > http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > --2012-02-16 15:55:30-- > http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > Resolving nginx.org... 206.251.255.63 > Connecting to nginx.org|206.251.255.63|:80... connected. > HTTP request sent, awaiting response... 200 OK > Length: 325076 (317K) [application/x-redhat-package-manager] > Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' > > 100%[===================================================================================================================================================================================================>] > 325,076????? 944K/s?? in 0.3s > > 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' saved > [325076/325076] > > $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 signature: BAD, > key ID 7bd9bf62 > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > So can someone look into this, I am not sure who to ask. > > Regards, > Bade I. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Feb 16 23:10:04 2012 From: nginx-forum at nginx.us (justin) Date: Thu, 16 Feb 2012 18:10:04 -0500 (EST) Subject: Wordpress Permalinks Message-ID: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> Howdy. So I am trying to get permalinks to work with Wordpress. I have read a few articles/blog posts but still no luck. The permalink structure I am trying to use is: http://mydomain.com/wp/index.php/2012/02/sample-post/ Here is the configuration block that I am currently using that is important: # permalinks currently not working with this location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_intercept_errors off; fastcgi_index index.php; try_files $uri $uri/ /index.php?q=$uri&$args; # tried this as well, but still doesn't work # #if (!-e $request_filename) { # rewrite ^.*$ /index.php last; #} fastcgi_pass php1.local.mydomain.com:9000; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222511,222511#msg-222511 From kgs4242 at gmail.com Thu Feb 16 23:18:58 2012 From: kgs4242 at gmail.com (Kamil Gorlo) Date: Fri, 17 Feb 2012 00:18:58 +0100 Subject: Host header and SSL Message-ID: Hi, in my setup Nginx is a load balancer to many different services, some of them are using SSL (so Nginx is also SSL terminator in this case). I have many different IPs and for every IP it happen to be more than one domain (of course only in non-SSL situation). So I am using virtual hosts heavily with http and since my backends rely on Host header from user (it has to be correct) I have catch-all section for not matching server_names. Something like this ... (many different server sections with different server_names) ... server { listen IP1:80 default_server; listen IP2:80 default_server; serrver_name _; return 444; } But this technique simply does not work for SSL. 
As far I understand correctly there are two techniques to cope with my problem (to prevent https request with non-matching Host header to be served): 1. using if server { listen IP3:443 ssl default_server; server_name some_host.com; ssl_certificate... if ($host != "some_host.com") { return 444; } location / { ... proxy_set_header Host $host; // safe } } 2. using catch-all but slightly more complicated and weird: server { listen IP3:443 ssl; server_name some_host.com; (no ssl_certificate section - it is in catch-all block) location / { ... proxy_set_header Host $host; // safe because of catch-all below } } server { listen IP3:443 ssl default_server; server_name _; ssl_certificate... return 444; } What do you think? Are both solutions equivalent? Which one is preffered (more efficient, elegant)? Will it work? Thanks for your help! -- Kamil From nginx at abraumhal.de Thu Feb 16 23:37:20 2012 From: nginx at abraumhal.de (Sven Ludwig) Date: Fri, 17 Feb 2012 00:37:20 +0100 Subject: UserDir In-Reply-To: <82a22858fd94e0b455c2e88c2c416c72.NginxMailingListEnglish@forum.nginx.org> References: <82a22858fd94e0b455c2e88c2c416c72.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120216233718.GU23688@Debian-60-squeeze-64-minimal> On 02-16 16:57, white_gecko wrote: > I have a very similar problem. I want to use userdirs as in apache and > as described here: http://wiki.nginx.org/UserDir and use php with > fast-cgi in these userdirs. I've tried it with this: > > location ~ ^/~(.+?)(/.*)?$ { > alias /home/$1/public_html$2; > index index.html index.htm; > autoindex on; > location ~ \.php$ { > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > } > } > > But it get empty pages, when I try to call a php-file. > How can I get php files with match this location to be forwarded to > fast-cgi? If you take a look into the fastcgi_params file you'll notice that the following or something similar is written: fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; $request_filename in your case is something like this: /~white_geko/some/where/is/my/script.php $fastcgi_script_name: /~white_geko/some/where/is/my/script.php so, this is not the correct path to find this file on the filesystem, so we map it: ^/~([^/]+)(/.*)\.php $1 is the username $2 is the filename + relative path so, we set the document_root to your public_html directory, which is somehow correct if your php script is placed in your public_html directory. root /home/$1/public_html; next we set the correct script_name which is relative to the document_root. We add an .php, because it is not included in our match($2) fastcgi_param SCRIPT_NAME $2.php; After this all php-fpm needs to know is where is the file located. fastcgi_param SCRIPT_FILENAME $document_root$2.php; when i sniffed the traffic between nginx <-> php-fpm i noticed that fastcgi_param only adds more parameters to the communication. it does not replace the old values. so i dropped the include of the params out and added all variables right in this location. 
The complete config section, try this: location ~ ^/~([^/]+)(/.*)\.php$ { root /home/$1/public_html; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_NAME $2.php; fastcgi_param SCRIPT_FILENAME $document_root$2.php; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; } location ~ ^/~([^/]+)(/.*)?$ { if ( !-d /home/$1/public_html ) { return 404; } index index.html index.htm index.php; autoindex on; } Hope that works for you :) bye MUH! -- ;__;___, )..(_=_) (oo)| | From edho at myconan.net Thu Feb 16 23:44:18 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 17 Feb 2012 06:44:18 +0700 Subject: Wordpress Permalinks In-Reply-To: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> References: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Fri, Feb 17, 2012 at 6:10 AM, justin wrote: > Howdy. > > So I am trying to get permalinks to work with Wordpress. I have read a > few articles/blog posts but still no luck. The permalink structure I am > trying to use is: > > http://mydomain.com/wp/index.php/2012/02/sample-post/ > I believe it's easier to use this structure: http://mydomain.com/wp/2012/02/sample-post/ (I don't know why wordpress doesn't offer this form for nginx by default) Anyway, you need to move try_files outside location ~ \.php$ block. You also need split path info (and remove $ to match /wp/index.php/2012/02/sample-post/) or add one more location block to specifically handle the url. > Here is the configuration block that I am currently using that is > important: > > # permalinks currently not working with this > location ~ \.php$ { > ? ? include /etc/nginx/fastcgi_params; > ? ? fastcgi_intercept_errors off; > ? ? fastcgi_index index.php; > > ? ? try_files $uri $uri/ /index.php?q=$uri&$args; > > ? ? # tried this as well, but still doesn't work > ? ? # > ? ? #if (!-e $request_filename) { > ? ? # ?rewrite ^.*$ /index.php last; > ? ? #} > > ? ? fastcgi_pass php1.local.mydomain.com:9000; > ? } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222511,222511#msg-222511 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at abraumhal.de Thu Feb 16 23:57:25 2012 From: nginx at abraumhal.de (Sven Ludwig) Date: Fri, 17 Feb 2012 00:57:25 +0100 Subject: Wordpress Permalinks In-Reply-To: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> References: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120216235722.GV23688@Debian-60-squeeze-64-minimal> On 02-16 18:10, justin wrote: > Howdy. > > So I am trying to get permalinks to work with Wordpress. I have read a > few articles/blog posts but still no luck. 
The permalink structure I am > trying to use is: > > http://mydomain.com/wp/index.php/2012/02/sample-post/ > > Here is the configuration block that I am currently using that is > important: > > # permalinks currently not working with this > location ~ \.php$ { > include /etc/nginx/fastcgi_params; > fastcgi_intercept_errors off; > fastcgi_index index.php; > > try_files $uri $uri/ /index.php?q=$uri&$args; > > # tried this as well, but still doesn't work > # > #if (!-e $request_filename) { > # rewrite ^.*$ /index.php last; > #} > > fastcgi_pass php1.local.mydomain.com:9000; > } this is a solution i found for running wordpress behind nginx and it works :) location / { set $rewrite_to_php "true"; if ( -f $request_filename) { set $rewrite_to_php "false"; } if ( -d $request_filename) { set $rewrite_to_php "false"; } if ( $rewrite_to_php = "true" ) { rewrite ^(.*)$ /index.php?q=$1 last; break; } } location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ { expires max; } location ~* \.php$ { include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; } if you depend on try_files you might change the "location /" section this way: location / { try_files $uri $uri/ /index.php?q=$1; } i didn't test it, but as written in the wiki, this should work also :) http://wiki.nginx.org/HttpCoreModule#try_files bye MUH! -- ;__;___, )..(_=_) (oo)| | From edho at myconan.net Fri Feb 17 00:06:49 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 17 Feb 2012 07:06:49 +0700 Subject: Host header and SSL In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 6:18 AM, Kamil Gorlo wrote: > > server { > ?listen IP1:80 default_server; > ?listen IP2:80 default_server; > ?serrver_name _; > ?return 444; > } > > But this technique simply does not work for SSL. As far I understand > correctly there are two techniques to cope with my problem (to prevent > https request with non-matching Host header to be served): > It should work (at least passes `nginx -t` in my test). > > 2. using catch-all but slightly more complicated and weird: > > server { > ?listen IP3:443 ssl; > ?server_name some_host.com; > > ?(no ssl_certificate section - it is in catch-all block) > > ?location / { > ? ?... > ? ?proxy_set_header Host $host; // safe because of catch-all below > ?} > } > > server { > ?listen IP3:443 ssl default_server; > ?server_name _; > > ?ssl_certificate... > > ?return 444; > } > Nothing weird or complicated in this one. It's the preferred method but you need to specify ssl_certificate parameters on each server blocks. I'm not sure how it behaves on non-SNI environment though. Alternatively you can force passing some_host.com as the Host header to your proxy: proxy_set_header Host some_host.com -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From ft at falkotimme.com Fri Feb 17 02:45:53 2012 From: ft at falkotimme.com (Falko Timme) Date: Fri, 17 Feb 2012 03:45:53 +0100 Subject: Wordpress Permalinks References: <17d3eb7f8ed2ecae66f8fc992f2475b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51AFC3744B454907867619EBEE7B7658@notebook> This article might be of interest: http://www.howtoforge.com/running-wordpress-on-nginx-lemp-on-debian-squeeze-ubuntu-11.04 It's using the following permalink structure: /%year%/%monthnum%/%day%/%postname%/ But I think the other options should work as well. ----- Original Message ----- From: "justin" To: Sent: Friday, February 17, 2012 12:10 AM Subject: Wordpress Permalinks > Howdy. > > So I am trying to get permalinks to work with Wordpress. 
I have read a > few articles/blog posts but still no luck. The permalink structure I am > trying to use is: > > http://mydomain.com/wp/index.php/2012/02/sample-post/ > > Here is the configuration block that I am currently using that is > important: > > # permalinks currently not working with this > location ~ \.php$ { > include /etc/nginx/fastcgi_params; > fastcgi_intercept_errors off; > fastcgi_index index.php; > > try_files $uri $uri/ /index.php?q=$uri&$args; > > # tried this as well, but still doesn't work > # > #if (!-e $request_filename) { > # rewrite ^.*$ /index.php last; > #} > > fastcgi_pass php1.local.mydomain.com:9000; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222511,222511#msg-222511 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From quintinpar at gmail.com Fri Feb 17 03:25:47 2012 From: quintinpar at gmail.com (Quintin Par) Date: Fri, 17 Feb 2012 08:55:47 +0530 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files In-Reply-To: References: Message-ID: Quick Q: Epel and CentOS repo still refers to 0.84. Is this under the community influence to upgrade to 1+? -Quintin On Fri, Feb 17, 2012 at 3:47 AM, Teck Choon Giam wrote: > On Fri, Feb 17, 2012 at 6:04 AM, Bade Iriabho wrote: > > Hello, > > > > Following instructions on http://nginx.org/en/download.html, I tried to > > install NginX on CentOS 6 and it failed. See the three approaches I used > > below. > > > > First Try > > > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > $ rpm -Uvh > > > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > > Retrieving > > > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > > error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, key > ID > > 7bd9bf62 > > error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed > > Did you try: > > # rpm -vvv --rebuilddb > # yum clean all > # yum localinstall --nogpgcheck > > http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > # yum -y install nginx > > Thanks. > > Kindest regards, > Giam Teck Choon > > > > > > Second Try > > > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > $ nano /etc/yum.repos.d/CentOS-Nginx.repo > > # Type the following > > [nginx] > > name=nginx repo > > baseurl=http://nginx.org/packages/centos/6/$basearch/ > > gpgcheck=0 > > enabled=1 > > > > $ yum install nginx > > Loaded plugins: fastestmirror > > Loading mirror speeds from cached hostfile > > * base: dist1.800hosting.com > > * extras: centos.mirror.lstn.net > > * updates: mirror.raystedman.net > > base > > | 1.1 kB 00:00 > > c5-testing > > | 951 B 00:00 > > extras > > | 2.1 kB 00:00 > > nginx > > | 1.3 kB 00:00 > > nginx/primary > > | 2.6 kB 00:00 > > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: > [Errno > > -3] Error performing checksum > > Trying other mirror. > > nginx/primary > > | 2.6 kB 00:00 > > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: > [Errno > > -3] Error performing checksum > > Trying other mirror. > > Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more > > mirrors to try. 
> > > > Third Try > > > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > $ wget > > > http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > > --2012-02-16 15:55:30-- > > > http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > > Resolving nginx.org... 206.251.255.63 > > Connecting to nginx.org|206.251.255.63|:80... connected. > > HTTP request sent, awaiting response... 200 OK > > Length: 325076 (317K) [application/x-redhat-package-manager] > > Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' > > > > > 100%[===================================================================================================================================================================================================>] > > 325,076 944K/s in 0.3s > > > > 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' > saved > > [325076/325076] > > > > $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm > > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 signature: > BAD, > > key ID 7bd9bf62 > > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed > > > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > > > So can someone look into this, I am not sure who to ask. > > > > Regards, > > Bade I. > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Feb 17 03:48:03 2012 From: nginx-forum at nginx.us (dapicester) Date: Thu, 16 Feb 2012 22:48:03 -0500 (EST) Subject: Dynamic reverse proxy configuration Message-ID: Hi, I am currently trying to develop a module that could allow Nginx act as a reverse proxy to a cluster of nodes using Zookeeper. I would like to ask if it is possible to develop such a module. I found a question on StackOverflow describing exactly my situation: http://stackoverflow.com/questions/8982717/is-there-any-way-to-configure-nginx-or-other-quick-reverse-proxy-dynamically The question is: is there any way to make nginx (or other proxy) read its config from Apache ZooKeeper? Or more broader: is there any way to effectively switch proxy configuration on fly? Any suggestion is appreciated :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222519,222519#msg-222519 From nginx-forum at nginx.us Fri Feb 17 03:49:44 2012 From: nginx-forum at nginx.us (karanj) Date: Thu, 16 Feb 2012 22:49:44 -0500 (EST) Subject: nginx http auth module query Message-ID: Hi, I have the following use case - I have nginx running at port 80 and a php hiphop server running at 4247. I want to achieve the following configuration - Whenever a request is received at nginx port 80, it should be sent to some auth_url (say auth.php) and if it is authorized then it should be forwarded/proxied to hiphop server running at 4247. If not some error page should be thrown. I was looking through ngx_http_auth_request_module and other inbuilt modules. But I have the following doubts - 1) What could the possible config look like where both ngx_http_auth_request_module and proxy_pass are included? 
2) For my auth.php, what should it return true/false or something else? Thanks, Karan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222520#msg-222520 From appa at perusio.net Fri Feb 17 04:00:52 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 17 Feb 2012 04:00:52 +0000 Subject: nginx http auth module query In-Reply-To: References: Message-ID: <87fweam7m3.wl%appa@perusio.net> On 17 Fev 2012 03h49 WET, nginx-forum at nginx.us wrote: > Hi, > > I have the following use case - > > I have nginx running at port 80 and a php hiphop server running at > 4247. I want to achieve the following configuration - > > Whenever a request is received at nginx port 80, it should be sent > to some auth_url (say auth.php) and if it is authorized then it > should be forwarded/proxied to hiphop server running at 4247. If not > some error page should be thrown. > > I was looking through ngx_http_auth_request_module and other inbuilt > modules. But I have the following doubts - > > 1) What could the possible config look like where both > ngx_http_auth_request_module and proxy_pass are included? > 2) For my auth.php, what should it return true/false or something > else? > > Thanks, > Karan Schematically: location /index.php { error_page 401 403 /not_authorized.html; auth_request /auth.php; proxy_pass http://hiphop:4247; } location = /auth.php { # FCGI stuff or whatever PHP CGI you're using. # auth.php should return 401 or 403 when auth process fails, return # 200 otherwise } --- appa From agentzh at gmail.com Fri Feb 17 04:06:56 2012 From: agentzh at gmail.com (agentzh) Date: Fri, 17 Feb 2012 12:06:56 +0800 Subject: Memory leaks nginx with lua memc upstream_keepalive modules/ In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 12:52 AM, fish_ka_praha wrote: > [root at VKRT083 /home/fish]# uname -a > FreeBSD VKRT083.local 8.2-RELEASE-p3 FreeBSD 8.2-RELEASE-p3 #0: Tue Sep > 27 18:45:57 UTC 2011 > root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC ?amd64 > [root at VKRT083 /home/fish]# /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.0.12 > built by gcc 4.2.1 20070719 ?[FreeBSD] > configure arguments: > --add-module=../chaoslawful-lua-nginx-module-7cb4e09/ > --add-module=../ngx_http_upstream_keepalive-d9ac9ad67f45 > --add-module=../simpl-ngx_devel_kit-24202b4/ > --add-module=../agentzh-memc-nginx-module-4007350/ > Thank you for the report. But could you show me your Lua code? Or could you prepare a minimized example that can reproduce this leak? Without knowing how you actually use ngx_lua, I'm afraid we cannot find the real cause here :) Thanks! -agentzh From nginx-forum at nginx.us Fri Feb 17 04:08:17 2012 From: nginx-forum at nginx.us (karanj) Date: Thu, 16 Feb 2012 23:08:17 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <87fweam7m3.wl%appa@perusio.net> References: <87fweam7m3.wl%appa@perusio.net> Message-ID: <06e86fbebd6aa15fcef5a8bead9f11b4.NginxMailingListEnglish@forum.nginx.org> Thanks for the response. For this - location /index.php { error_page 401 403 /not_authorized.html; auth_request /auth.php; proxy_pass http://hiphop:4247; } Does it mean that auth.php should be available via the url - http://hiphop:4247/auth.php ? location = /auth.php { # FCGI stuff or whatever PHP CGI you're using. # auth.php should return 401 or 403 when auth process fails, return # 200 otherwise } Does it enter this section after it gets 2xx response from auth.php? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222523#msg-222523 From appa at perusio.net Fri Feb 17 04:28:41 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 17 Feb 2012 04:28:41 +0000 Subject: nginx http auth module query In-Reply-To: <06e86fbebd6aa15fcef5a8bead9f11b4.NginxMailingListEnglish@forum.nginx.org> References: <87fweam7m3.wl%appa@perusio.net> <06e86fbebd6aa15fcef5a8bead9f11b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87ehtum6bq.wl%appa@perusio.net> On 17 Fev 2012 04h08 WET, nginx-forum at nginx.us wrote: > Thanks for the response. > > For this - > > location /index.php { > error_page 401 403 /not_authorized.html; > auth_request /auth.php; > proxy_pass http://hiphop:4247; > } > > Does it mean that auth.php should be available via the url - > http://hiphop:4247/auth.php ? You must create a location that overrides the "default" PHP handling location. > location = /auth.php { > # FCGI stuff or whatever PHP CGI you're using. > > # auth.php should return 401 or 403 when auth process fails, return > # 200 otherwise > } location = /auth.php { proxy_pass_request_body off; proxy_set_header Content-Length ''; proxy_set_header X-Original-URI $request_uri; proxy_pass http://hiphop:4247; } Note that the auth_request module only uses the headers. So your auth.php authorization script must take that into account. > Does it enter this section after it gets 2xx response from auth.php? When the /auth.php location returns 200 then the request is *authorized* and the request is proxy passed to the hiphop upstream in the index.php location from the above example. --- appa From kgorlo at gmail.com Fri Feb 17 07:15:11 2012 From: kgorlo at gmail.com (Kamil Gorlo) Date: Fri, 17 Feb 2012 08:15:11 +0100 Subject: Host header and SSL In-Reply-To: References: Message-ID: On Fri, Feb 17, 2012 at 1:06 AM, Edho Arief wrote: > On Fri, Feb 17, 2012 at 6:18 AM, Kamil Gorlo wrote: >> >> server { >> ?listen IP1:80 default_server; >> ?listen IP2:80 default_server; >> ?serrver_name _; >> ?return 444; >> } >> >> But this technique simply does not work for SSL. As far I understand >> correctly there are two techniques to cope with my problem (to prevent >> https request with non-matching Host header to be served): >> > > It should work (at least passes `nginx -t` in my test). You mean soultion no. 1 (the one with if in server block, which you - maybe accidentally - cut off)? >> >> 2. using catch-all but slightly more complicated and weird: >> >> server { >> ?listen IP3:443 ssl; >> ?server_name some_host.com; >> >> ?(no ssl_certificate section - it is in catch-all block) >> >> ?location / { >> ? ?... >> ? ?proxy_set_header Host $host; // safe because of catch-all below >> ?} >> } >> >> server { >> ?listen IP3:443 ssl default_server; >> ?server_name _; >> >> ?ssl_certificate... >> >> ?return 444; >> } >> > > Nothing weird or complicated in this one. It's the preferred method > but you need to specify ssl_certificate parameters on each server > blocks. I'm not sure how it behaves on non-SNI environment though. By writing 'weird' I meant that ssl configuration is not in one place (in the server_name with corresponding server_name) but instead in some weird 'server_name _' block which maybe confusing for some non-experienced Nginx config writers :P Performance wisely - is 1 and 3 imperceptible? 
> Alternatively you can force passing some_host.com as the Host header > to your proxy: > > proxy_set_header Host some_host.com > No, this is not exactly what I want because: a) it does not work when I have server_name like *.some_host.com (of course in combination with some wildcard certificate) b) it tells backend that user came with some_host.com which is not true Thanks for your help. Cheers, -- Kamil From nginx-forum at nginx.us Fri Feb 17 07:22:11 2012 From: nginx-forum at nginx.us (justin) Date: Fri, 17 Feb 2012 02:22:11 -0500 (EST) Subject: Wordpress Permalinks In-Reply-To: <51AFC3744B454907867619EBEE7B7658@notebook> References: <51AFC3744B454907867619EBEE7B7658@notebook> Message-ID: Falko, Thanks for the link, that worked, got permalinks going. Whoot. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222511,222529#msg-222529 From nginx-forum at nginx.us Fri Feb 17 08:00:59 2012 From: nginx-forum at nginx.us (strongpapa) Date: Fri, 17 Feb 2012 03:00:59 -0500 (EST) Subject: Dynamic reverse proxy configuration In-Reply-To: References: Message-ID: I recommend you use the ngx_lua module here's some example (http://openresty.org/#DynamicRoutingBasedOnRedis) you can change redis to ngx_lua's internal dict facility which is shared between workers for speed. and write some location handler to accept outside configuration data and save it in ngx_lua's dict. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222519,222531#msg-222531 From nginxyz at mail.ru Fri Feb 17 08:28:14 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 17 Feb 2012 12:28:14 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: <20120216160718.GA67687@mdounin.ru> References: <20120216160718.GA67687@mdounin.ru> Message-ID: Hello! 16 ??????? 2012, 20:07 ?? Maxim Dounin : > Hello! > > On Thu, Feb 16, 2012 at 08:16:03AM +0400, Max wrote: > > > > > 15 ??????? 2012, 18:50 ?? Maxim Dounin : > > > Hello! > > > > > > On Wed, Feb 15, 2012 at 08:56:49AM -0500, Maxim Khitrov wrote: > > > > > > > Hello Maxim, > > > > > > > > Back in 2010 you wrote that it's not likely that your > > > > http_auth_request_module would make it into nginx core. I'm curious if > > > > anything has changed over the past two years? > > > > > > > > It's not that compiling this module into nginx is a problem > > > > (especially on FreeBSD), but I think a lot of people are inherently > > > > weary of depending on 3rd-party modules, since there is no guarantee > > > > of continued support. > > > > > > > > What do you think about adding your module to the main nginx repository? > > > > > > There are no immediate plans, but this may happen somewhere in the > > > future. > > > > Hello fellow Maxims and others, > > > > I took a closer look at the auth_request module source code today and > > realized that I was partially wrong about auth_request authorization > > subrequests causing the entire requested file to be retrieved from the > > backend server. I apologize for the confusion my posts may have > > caused. Due to sr->header_only being set to 1, the connection to the > > backend server is terminated from within ngx_http_upstream_send_response() > > as soon as the HTTP request status code is received. > > Yes. This is basically a workaround for cases when people > unintentionally return data to auth subrequest, it makes sure that > no unexpected data are sent to client in any case. > > [...] > > > All of these issues can be avoided simply by using HEAD method > > requests for authorization subrequests. 
According to my > > Using HEAD is not an option in auth_request itself, as it doesn't > know how auth subrequest will be handled. E.g. it may be passed to > fastcgi, or even hit static file. > > If you handle auth subrequests with proxy_pass, you may use > proxy_set_method to issue HEAD requests to backend. Or you may > use correct auth endpoint which doesn't return unneeded data. > > [...] > > > I have also modified the auth_request module to use HEAD method > > authorization subrequests by default. This setting can be > > overridden in the configuration file by using the proxy_method > > directive, of course. > > > > You can find my auth_request module patch here: > > > > > https://nginxyzpro.berlios.de/patch-head.ngx_http_auth_request_module.c.20120215.diff > > The patch is wrong by design, see above. Moreover, it makes it > impossible to correctly pass original request method to auth > endpoint. Maxim, you haven't even taken a look at my patch, have you? Because if you had, you wouldn't have made such unsubstantiated claims. First of all, I am referring to subrequests in the context of subquests created by the ngx_http_subrequest() function. As you might recall, the auth_request function ngx_http_auth_request_handler() calls the ngx_http_subrequest() function to create a subrequest, which inherits most of the values from the original request, BUT the method values are not inherited by the subrequest - they are explicitly set to the GET method: nginx-1.1.15/src/http/ngx_http_core_module.c: 2361 ngx_http_subrequest(ngx_http_request_t *r, 2362 ngx_str_t *uri, ngx_str_t *args, ngx_http_request_t **psr, 2363 ngx_http_post_subrequest_t *ps, ngx_uint_t flags) 2364 { ... 2417 sr->method = NGX_HTTP_GET; 2434 sr->method_name = ngx_http_core_get_method; ... 2487 } So your auth_request module NEVER DID pass original request methods on to subrequests in the first place! By default all auth_request subrequests using the proxy_pass directive have been, still are, and will be (until my patch is applied) GET method requests. My patch changes the default method from GET to HEAD by changing sr->method, sr->method_name and sr->request_line accordingly. It does NOT in any way interfere with anything else, you can use whatever authentication mechanism and endpoint you want - static files, fastcgi_pass, postgres_pass, etc. Moreover, you are wrong about my patch making it impossible to correctly pass the original request method to authentication endpoints. The original request is fully preserved (along with the entire original request) and can be accessed through the $request_method variable inside the subrequest location block. 
Feel free to verify this for yourself, if you don't believe me: location /private/ { auth_request /auth; set $request_method_private $request_method; } location = /auth { set $request_method_auth $request_method; } Here's what you'll see: Location Original module (v0.2) Patched module (v20120215) --------- ---------------------- ----------------------------- GET /private/ HTTP/1.0 GET /private/ HTTP/1.0 /private/ $request_method: GET $request_method: GET /auth $request_method: GET $request_method: GET (intact) subrequest method: GET subrequest method: HEAD HEAD /private/ HTTP/1.0 HEAD /private/ HTTP/1.0 /private/ $request_method: HEAD $request_method: HEAD /auth $request_method: HEAD $request_method: HEAD (intact) subrequest method: GET subrequest method: HEAD My patch simply makes the proxy_pass directive use the HEAD request method by default in auth_request subrequest location blocks. The same can be achieved by using the proxy_method directive ("proxy_method HEAD;"), but I believe the HEAD method should be used by default. Why? Because, IMNSHO, it does everything the GET method does (including Basic access authentication WWW-Authenticate / Authorization header exchange), but in a much more elegant and efficient way, which also makes your sr->header_only=1 workaround unnecessary. I left your workaround the way it is to prevent people from shooting themselves in the foot by setting the proxy method to GET, but those who know about the proxy_method directive (BTW, you got the name wrong, there is no proxy_set_method directive), should know what they are doing. BTW, the proxy_method directive is missing from the official documentation, so you may want to fix that: http://nginx.org/en/docs/http/ngx_http_proxy_module.html Most people might be using other authentication methods that this patch in no way affects, but those who do use the pass_proxy directive for subrequests would benefit from my patch without losing any of the functionality. Moreover, if they really need to use the GET method, or any other method, they still can. I did some more testing and it turns out that even under moderate load the backend server keeps sending another 150-200 kb before detecting the frontend server had closed the connection on its end. Compare that to 1200-1400 BYTES using the HEAD method even under heavy load. With large files and per-file access rules on the backend server that means ten GET method subrequests per second are wasting at least 10 Mbps worth of bandwidth when 100 kbps would have done the job in a way that would also help reduce the load on the backend server. So my question to you is: why would you not want to optimize your module? I thought nginx was supposed to be about efficiency. All my claims made here are based on facts and can be verified by anyone who has the time, the willingness, and the capacity to understand and apply what I've written. I know you understand what I've written, but if you're too busy or can't be bothered to deal with this, just say so, but please don't jump to conclusions or make unsubstantiated claims based on what you think I MIGHT have meant instead of checking the facts because such claims will only discredit you and alienate potential developers. 
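A minimal sketch of the proxy_method approach referred to above, assuming the auth subrequest is handled with proxy_pass (the names backend and auth_backend and the /check URI are placeholders, not taken from this thread, and the snippet is untested):

    location /private/ {
        auth_request /auth;
        proxy_pass   http://backend;
    }

    location = /auth {
        internal;
        proxy_pass              http://auth_backend/check;
        proxy_method            HEAD;   # auth subrequest goes out as HEAD instead of the default GET
        proxy_pass_request_body off;
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;
    }

With proxy_method set only in the auth location, the HEAD method applies to the authorization subrequest alone; the main request to /private/ is still proxied with its original method.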
Max From nginxyz at mail.ru Fri Feb 17 08:43:39 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Fri, 17 Feb 2012 12:43:39 +0400 Subject: Bug report: missing SCARCE string innginx-1.1.15/src/http/ngx_http_file_cache.c [patch] In-Reply-To: <20120216100758.GX67687@mdounin.ru> References: <20120216100758.GX67687@mdounin.ru> Message-ID: 16 ??????? 2012, 14:08 ?? Maxim Dounin : > On Thu, Feb 16, 2012 at 01:08:58PM +0400, Max wrote: > > the value of the NGX_HTTP_CACHE_SCARCE cache status is defined in > > nginx-1.1.15/src/http/ngx_http_cache.h, but unlike the other cache > > status strings, it's missing from > nginx-1.1.15/src/http/ngx_http_file_cache.c. > > > > The function ngx_http_upstream_cache_status() in > > nginx-1.1.15/src/http/ngx_http_upstream.c references the status > > strings directly as ngx_http_cache_status[n].len, so with the > > SCARCE cache status string missing, this is a segmentation violation > > waiting to happen. > > > > Here's the patch to fix the problem: > > > > > > --- src/http/ngx_http_file_cache.c.orig 2012-02-16 00:18:21.000000000 -0800 > > +++ src/http/ngx_http_file_cache.c 2012-02-16 00:25:00.000000000 -0800 > > @@ -53,7 +53,8 @@ > > ngx_string("EXPIRED"), > > ngx_string("STALE"), > > ngx_string("UPDATING"), > > - ngx_string("HIT") > > + ngx_string("HIT"), > > + ngx_string("SCARCE") > > }; > > --- > > The NGX_HTTP_CACHE_SCARCE value can't appear in u->cache_status, > and hence there is no real problem. It's a special value used by > cache to inform upstream that there is no cached response (i.e. > MISS cache status) and cacheing should be enabled due to min_uses > preventing it. You mean DISABLED due to file cache node exists being 0 or min_uses being set too high? nginx-1.1.15/src/http/ngx_http_upstream.c: 652 ngx_http_upstream_cache(ngx_http_request_t *r, ngx_http_upstream_t *u) 653 { ... 719 rc = ngx_http_file_cache_open(r); ... 742 switch (rc) { ... 774 case NGX_HTTP_CACHE_SCARCE: 775 776 u->cacheable = 0; 777 778 break; ... 795 } 800 } Max From nginx-forum at nginx.us Fri Feb 17 08:53:51 2012 From: nginx-forum at nginx.us (karanj) Date: Fri, 17 Feb 2012 03:53:51 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <87ehtum6bq.wl%appa@perusio.net> References: <87ehtum6bq.wl%appa@perusio.net> Message-ID: <34113a6878d62f83acc02cc73b77e591.NginxMailingListEnglish@forum.nginx.org> Thanks it works ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222535#msg-222535 From nginx-forum at nginx.us Fri Feb 17 08:57:50 2012 From: nginx-forum at nginx.us (karanj) Date: Fri, 17 Feb 2012 03:57:50 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <34113a6878d62f83acc02cc73b77e591.NginxMailingListEnglish@forum.nginx.org> References: <87ehtum6bq.wl%appa@perusio.net> <34113a6878d62f83acc02cc73b77e591.NginxMailingListEnglish@forum.nginx.org> Message-ID: One additional question here - In this as I understand it redirects to error_page on receiving a 4xx status code. Is it possible that nginx reads the value of error page from a custom header which comes along with the response (with 4xx status code) and then assign the value of error_page to that value. 
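One way this could be wired together with the auth_request module is sketched below; it is untested, and the header name X-Error-Page, the backend address and the URIs are illustrative assumptions only. auth_request_set copies a value evaluated in the auth subrequest (here an upstream response header) into a variable of the main request, and a named location picks it up once the 401/403 is intercepted by error_page:

    location /protected/ {
        auth_request      /auth;
        # capture the custom header sent by the auth backend along with its 4xx response
        auth_request_set  $auth_error_page $upstream_http_x_error_page;
        error_page        401 403 = @auth_error;
        proxy_pass        http://backend;
    }

    location = /auth {
        proxy_pass              http://backend/auth.php;
        proxy_pass_request_body off;
        proxy_set_header        Content-Length "";
    }

    location @auth_error {
        # send the client to whatever page the auth backend named in X-Error-Page
        return 302 $auth_error_page;
    }

Whether a redirect or an internal rewrite/proxy to that page is wanted depends on the setup; the point is only that the header value survives into the main request via auth_request_set.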
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222536#msg-222536 From nginx-forum at nginx.us Fri Feb 17 09:40:06 2012 From: nginx-forum at nginx.us (white_gecko) Date: Fri, 17 Feb 2012 04:40:06 -0500 (EST) Subject: UserDir In-Reply-To: <20120216233718.GU23688@Debian-60-squeeze-64-minimal> References: <20120216233718.GU23688@Debian-60-squeeze-64-minimal> Message-ID: <732d6107adf1420647db5c5d9a0a0092.NginxMailingListEnglish@forum.nginx.org> Thank you, even though I understand it only partially it works. But directory listing doesn't work anymore (e.g. http://localhost/~white_gecko/) I get 404. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221861,222538#msg-222538 From simone.fumagalli at contactlab.com Fri Feb 17 11:09:25 2012 From: simone.fumagalli at contactlab.com (Simone Fumagalli) Date: Fri, 17 Feb 2012 12:09:25 +0100 Subject: Nginx cache tuning / debug Message-ID: <4F3E3565.8080907@contactlab.com> Hello. I cache contents with NGINX proxy_cache in all my sites but I'm not sure I've the best setup. Are there rules/hints to properly set the size and/or the TTL for the cache ? Would be nice to have a tool that monitor/track files written in the cache and those deleted (an the reason as well) Is there such a tool or can I enable some logging feature to get these data ? Thanks -- Simone From mdounin at mdounin.ru Fri Feb 17 11:19:04 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Feb 2012 15:19:04 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: References: <20120216160718.GA67687@mdounin.ru> Message-ID: <20120217111904.GD67687@mdounin.ru> Hello! On Fri, Feb 17, 2012 at 12:28:14PM +0400, Max wrote: > > Hello! > > 16 ??????? 2012, 20:07 ?? Maxim Dounin : > > Hello! > > > > On Thu, Feb 16, 2012 at 08:16:03AM +0400, Max wrote: > > > > > > > > 15 ??????? 2012, 18:50 ?? Maxim Dounin : > > > > Hello! > > > > > > > > On Wed, Feb 15, 2012 at 08:56:49AM -0500, Maxim Khitrov wrote: > > > > > > > > > Hello Maxim, > > > > > > > > > > Back in 2010 you wrote that it's not likely that your > > > > > http_auth_request_module would make it into nginx core. I'm curious if > > > > > anything has changed over the past two years? > > > > > > > > > > It's not that compiling this module into nginx is a problem > > > > > (especially on FreeBSD), but I think a lot of people are inherently > > > > > weary of depending on 3rd-party modules, since there is no guarantee > > > > > of continued support. > > > > > > > > > > What do you think about adding your module to the main nginx repository? > > > > > > > > There are no immediate plans, but this may happen somewhere in the > > > > future. > > > > > > Hello fellow Maxims and others, > > > > > > I took a closer look at the auth_request module source code today and > > > realized that I was partially wrong about auth_request authorization > > > subrequests causing the entire requested file to be retrieved from the > > > backend server. I apologize for the confusion my posts may have > > > caused. Due to sr->header_only being set to 1, the connection to the > > > backend server is terminated from within ngx_http_upstream_send_response() > > > as soon as the HTTP request status code is received. > > > > Yes. This is basically a workaround for cases when people > > unintentionally return data to auth subrequest, it makes sure that > > no unexpected data are sent to client in any case. > > > > [...] 
> > > > > All of these issues can be avoided simply by using HEAD method > > > requests for authorization subrequests. According to my > > > > Using HEAD is not an option in auth_request itself, as it doesn't > > know how auth subrequest will be handled. E.g. it may be passed to > > fastcgi, or even hit static file. > > > > If you handle auth subrequests with proxy_pass, you may use > > proxy_set_method to issue HEAD requests to backend. Or you may > > use correct auth endpoint which doesn't return unneeded data. > > > > [...] > > > > > I have also modified the auth_request module to use HEAD method > > > authorization subrequests by default. This setting can be > > > overridden in the configuration file by using the proxy_method > > > directive, of course. > > > > > > You can find my auth_request module patch here: > > > > > > > > https://nginxyzpro.berlios.de/patch-head.ngx_http_auth_request_module.c.20120215.diff > > > > The patch is wrong by design, see above. Moreover, it makes it > > impossible to correctly pass original request method to auth > > endpoint. > > Maxim, you haven't even taken a look at my patch, have you? Because > if you had, you wouldn't have made such unsubstantiated claims. I have, despite the fact that it was provided as a link only. [...] > Moreover, you are wrong about my patch making it impossible > to correctly pass the original request method to authentication > endpoints. The original request is fully preserved (along with > the entire original request) and can be accessed through the > $request_method variable inside the subrequest location block. > Feel free to verify this for yourself, if you don't believe me: I stand corrected. Your patch broke only $request variable, not the $request_method (which always comes from the main request). [...] > The same can be achieved by using the proxy_method directive > ("proxy_method HEAD;"), but I believe the HEAD method should be used > by default. Why? Because, IMNSHO, it does everything the GET method does > (including Basic access authentication WWW-Authenticate / Authorization > header exchange), but in a much more elegant and efficient way, which > also makes your sr->header_only=1 workaround unnecessary. Again: the sr->header_only workaround is anyway required, as static file, or memcached, or fastcgi may be used as handler for auth subrequest (and, actually, even some broken http backends may return data to HEAD, not even talking about intended changes of proxy_method). Without sr->header_only explicitly set you will get response content before headers of the real response: HEAD / HTTP/1.0 BOOM HTTP/1.1 200 OK Server: nginx/1.1.15 Date: Fri, 17 Feb 2012 10:11:35 GMT Content-Type: text/html Content-Length: 1047 Connection: close Last-Modified: Mon, 13 Feb 2012 01:20:52 GMT Accept-Ranges: bytes The "BOOM" string is from static file used as auth_request handler. (Obviously the sr->header_only workaround was removed for testing.) If the question was about proxy only, I wouldn't added the workaround in the first place, but recommended proxy_method in docs instead. And, BTW, as far as I recall your code, it won't set sr->header_only in case of HEAD requests. This is wrong, you still need it even for HEADs. > I left your workaround the way it is to prevent people from shooting > themselves in the foot by setting the proxy method to GET, but those > who know about the proxy_method directive (BTW, you got the name wrong, > there is no proxy_set_method directive), should know what they are doing. 
As I already said before, the whole sr->header_only thing is a workaround to prevent people from unintentionally breaking protocol. Your patch tries to make the workaround a bit more smart, and tries to make arbitrary configuration more efficient, but this is wrong aproach: instead, it should be made less intrusive. The major problem with the workaround as of now is that it prevents caching. And *this* should be addressed. > BTW, the proxy_method directive is missing from the official > documentation, so you may want to fix that: > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html This will be addressed in near future. [...] > I did some more testing and it turns out that even under moderate load > the backend server keeps sending another 150-200 kb before detecting > the frontend server had closed the connection on its end. Compare > that to 1200-1400 BYTES using the HEAD method even under heavy > load. With large files and per-file access rules on the backend server > that means ten GET method subrequests per second are wasting at least > 10 Mbps worth of bandwidth when 100 kbps would have done the job in a > way that would also help reduce the load on the backend server. > > So my question to you is: why would you not want to optimize your module? > I thought nginx was supposed to be about efficiency. Both using proxy_method and using auth endpoint which doesn't return data do the same if you are talking about efficiency. On the other hand, setting method/request line increase comlexity and overhead in normal case, as well as subject to bugs (at least two were identified above). Hope my position is clear enough. [...] Maxim Dounin From mdounin at mdounin.ru Fri Feb 17 11:23:10 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Feb 2012 15:23:10 +0400 Subject: Bug report: missing SCARCE string innginx-1.1.15/src/http/ngx_http_file_cache.c [patch] In-Reply-To: References: <20120216100758.GX67687@mdounin.ru> Message-ID: <20120217112310.GE67687@mdounin.ru> Hello! On Fri, Feb 17, 2012 at 12:43:39PM +0400, Max wrote: > > 16 ??????? 2012, 14:08 ?? Maxim Dounin : > > On Thu, Feb 16, 2012 at 01:08:58PM +0400, Max wrote: > > > the value of the NGX_HTTP_CACHE_SCARCE cache status is defined in > > > nginx-1.1.15/src/http/ngx_http_cache.h, but unlike the other cache > > > status strings, it's missing from > > nginx-1.1.15/src/http/ngx_http_file_cache.c. > > > > > > The function ngx_http_upstream_cache_status() in > > > nginx-1.1.15/src/http/ngx_http_upstream.c references the status > > > strings directly as ngx_http_cache_status[n].len, so with the > > > SCARCE cache status string missing, this is a segmentation violation > > > waiting to happen. > > > > > > Here's the patch to fix the problem: > > > > > > > > > --- src/http/ngx_http_file_cache.c.orig 2012-02-16 00:18:21.000000000 -0800 > > > +++ src/http/ngx_http_file_cache.c 2012-02-16 00:25:00.000000000 -0800 > > > @@ -53,7 +53,8 @@ > > > ngx_string("EXPIRED"), > > > ngx_string("STALE"), > > > ngx_string("UPDATING"), > > > - ngx_string("HIT") > > > + ngx_string("HIT"), > > > + ngx_string("SCARCE") > > > }; > > > --- > > > > The NGX_HTTP_CACHE_SCARCE value can't appear in u->cache_status, > > and hence there is no real problem. It's a special value used by > > cache to inform upstream that there is no cached response (i.e. > > MISS cache status) and cacheing should be enabled due to min_uses > > preventing it. > > You mean DISABLED due to file cache node exists being 0 or min_uses > being set too high? 
Yep, s/should/should not/. Maxim Dounin From mdounin at mdounin.ru Fri Feb 17 11:26:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Feb 2012 15:26:24 +0400 Subject: nginx http auth module query In-Reply-To: References: <87ehtum6bq.wl%appa@perusio.net> <34113a6878d62f83acc02cc73b77e591.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120217112624.GF67687@mdounin.ru> Hello! On Fri, Feb 17, 2012 at 03:57:50AM -0500, karanj wrote: > One additional question here - > > In this as I understand it redirects to error_page on receiving a 4xx > status code. Is it possible that nginx reads the value of error page > from a custom header which comes along with the response (with 4xx > status code) and then assign the value of error_page to that value. You may use use auth_request_set to make headers available as variables in main request. See docs here: http://mdounin.ru/hg/ngx_http_auth_request_module/file/tip/README#l23 Sample usage may be seen in tests here: http://mdounin.ru/hg/ngx_http_auth_request_module/file/tip/t/auth-request-set.t Maxim Dounin From gt0057 at gmail.com Fri Feb 17 12:39:40 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Fri, 17 Feb 2012 13:39:40 +0100 Subject: Auth user with postgresql Message-ID: Hi list I am a nginx newbie. Nginx does not ask user and password with the following configuration: ....... upstream database { postgres_server 127.0.0.1 dbname=dbnginx user=nginx password=secret; } server { listen 80; server_name localhost; index index.htm index.html; location =/t1 { internal; postgres_escape $user $remote_user; postgres_escape $pass $remote_passwd; postgres_pass database; postgres_query "SELECT user FROM usertable WHERE user=$user AND pwd=$pass"; postgres_rewrite no_rows 403; postgres_output none; } location /test //don't request window for user and password { auth_basic "folder test1"; auth_request /t1; } location /test2 //o.k. request window for user and password { auth_basic "folder Test2"; auth_basic_user_file /web/test2/.passwd; } ----------- I did several searches on google but found nothing. Where is the mistake? Thanks for the help Giuseppe P.S: the database connection is ok and password is stored in MD5 follow my compilation config ./configure \ --prefix=/usr/local/nginx \ --sbin-path=/usr/local/nginx/sbin/nginx \ --conf-path=/etc/nginx/nginx.conf \ --error-log-path=/var/log/nginx/error.log \ --pid-path=/var/run/nginx/nginx.pid \ --lock-path=/var/lock/nginx.lock \ --http-uwsgi-temp-path=/var/tmp/nginx \ --http-scgi-temp-path=/var/tmp/nginx \ --user=nobody \ --group=nobody \ --with-ipv6 \ --with-http_dav_module \ --with-http_ssl_module \ --with-http_flv_module \ --with-http_gzip_static_module \ --http-log-path=/var/log/nginx/access.log \ --http-client-body-temp-path=/var/tmp/nginx/client/ \ --http-proxy-temp-path=/var/tmp/nginx/proxy/ \ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \ --add-module=/home/mercurio/nginx/ngx_http_auth_request_module-a29d74804ff1 \ --add-module=/home/mercurio/nginx/FRiCKLE-ngx_coolkit-cb99a0f \ --add-module=/home/mercurio/nginx/agentzh-nginx-eval-module-09d7728 \ --add-module=/home/mercurio/nginx/ngx_postgres-0.9 From piotr.sikora at frickle.com Fri Feb 17 12:49:28 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Fri, 17 Feb 2012 13:49:28 +0100 Subject: Auth user with postgresql In-Reply-To: References: Message-ID: Hi, > Nginx does not ask user and password with the following configuration: That's because you're returning 403 (Forbidden) instead of 401 (Unauthorized). 
I should update README file, because people get confused by this ;) > postgres_rewrite no_rows 403; -postgres_rewrite no_rows 403; +postgres_rewrite no_rows 401; +more_set_headers -s 401 'WWW-Authenticate: Basic realm="Restricted"'; You'll also need ngx_headers_more module for this to work: https://github.com/agentzh/headers-more-nginx-module Best regards, Piotr Sikora < piotr.sikora at frickle.com > From francis at daoine.org Fri Feb 17 12:54:14 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Feb 2012 12:54:14 +0000 Subject: Auth user with postgresql In-Reply-To: References: Message-ID: <20120217125414.GG22076@craic.sysops.org> On Fri, Feb 17, 2012 at 01:39:40PM +0100, Giuseppe Tofoni wrote: Hi there, > I did several searches on google but found nothing. > Where is the mistake? > Thanks for the help This looks very like. http://forum.nginx.org/read.php?2,220692 That may be useful. All the best, f -- Francis Daly francis at daoine.org From hyperstruct at gmail.com Fri Feb 17 13:03:17 2012 From: hyperstruct at gmail.com (Massimiliano Mirra) Date: Fri, 17 Feb 2012 14:03:17 +0100 Subject: Can proxy_cache gzip cached content? In-Reply-To: References: Message-ID: On Wed, Feb 15, 2012 at 3:55 PM, rmalayter wrote: > > There's no reason the "backend" for your caching layer cannot be another > nginx server block running on a high port bound to localhost. This > high-port server block could do gzip compression, and proxy-pass to the > back end with "Accept-Encoding: identity", so the back-end never has to > do compression. The backend server will have to use "gzip_http_version > 1.0" and "gzip_proxied any" to do compression because it is being > proxied from the front-end. > Ah, good point. I tried to take this an extra step further by using a virtual host of the same server as "compression backend" and it appears to work nicely. Below is what I did so far, in case anyone is looking for the same and Google leads them here. (Feels a bit like getting out of the door and back in through the window :) but perhaps just like we have internal redirects it would be possible to use ngx_lua to simulate an internal proxy and avoid the extra HTTP request.) proxy_cache_path /var/lib/nginx/cache/myapp levels=1:2 keys_zone=myapp_cache:10m max_size=1g inactive=2d; log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' 'Args: $args '; access_log /var/log/nginx/cache.log cache; upstream backend { server localhost:8002; } server { # this step only does compression listen 85; server_name myapp.local; include proxy_params; location / { gzip_http_version 1.0; proxy_set_header Accept-Encoding identity; proxy_pass http://backend; } } server { listen 80; server_name myapp.local; include proxy_params; location / { proxy_pass http://127.0.0.1:85; } location /similar-to { proxy_set_header Accept-Encoding gzip; proxy_cache_key "$scheme$host$request_uri"; proxy_cache_valid 2d; proxy_cache myapp_cache; proxy_pass http://127.0.0.1:85; } } > Also note there may be better options in the latest nginx versions, or > by using the gunzip 3rd-party module: > http://mdounin.ru/hg/ngx_http_gunzip_filter_module/file/27f057249155/README > > With the gunzip module, you can configure things so that you always > cache compressed data, then only decompress it for the small number of > clients that don't support gzip compression. 
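
For reference, the always-cache-gzipped setup described above would look roughly like this, reusing the cache zone, cache key and local compression backend from the configuration earlier in this message; the only additional assumption is that nginx has been built with the gunzip filter module:

    location /similar-to {
        # always request (and therefore cache) the gzipped variant
        proxy_set_header   Accept-Encoding gzip;
        proxy_cache        myapp_cache;
        proxy_cache_key    "$scheme$host$request_uri";
        proxy_cache_valid  2d;
        proxy_pass         http://127.0.0.1:85;

        # decompress on the way out only for clients that do not
        # advertise gzip support in Accept-Encoding
        gunzip on;
    }

With this, the cache holds a single compressed copy per URL and decompression happens only for the minority of clients without gzip support.
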
> This looks perfect for having a gzip-only cache, which may not lead to save that much disk space but it certainly helps with mind space. Cheers, Massimiliano -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Feb 17 13:30:33 2012 From: nginx-forum at nginx.us (karanj) Date: Fri, 17 Feb 2012 08:30:33 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <20120217112624.GF67687@mdounin.ru> References: <20120217112624.GF67687@mdounin.ru> Message-ID: <002045ce7ddd2b4a8752b5000951f682.NginxMailingListEnglish@forum.nginx.org> It doesn't work for me. I have 3 files running under HipHop - 1) /tf/test.php - This file sets the session variable - $_SESSION['X-ErrorPage']='/tf2/test.php'; and then sends header("HTTP/1.1 401 Unauthorized"); 2) /tf2/test2.php - This prints "It works" 3) /tf2/test.php - This prints "Error Page" My config looks like this - The output on running http://112.11.23.221:8080/tf2/test2.php should be - "Error Page". But this is not happening. The nginx error logs shows the following - 2012/02/17 18:52:45 [error] 10103#0: *4 the rewritten URI has a zero length, client: 122.179.93.88, server: 112.11.23.221, request: "GET /tf2/test2.php HTTP/1.1", host: "112.11.23.221:8080" server { listen 8080; server_name 112.11.23.221; location / { auth_request /tf/test.php; proxy_pass http://127.0.0.1:4247; error_page 401 = /40x.html; } location /tf/test.php{ proxy_pass http://127.0.0.1:4247; } location = /40x.html { auth_request_set $err $upstream_http_x_errorpage; rewrite /40x.html $err; proxy_pass http://127.0.0.1:4247; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222552#msg-222552 From nginx-forum at nginx.us Fri Feb 17 13:38:48 2012 From: nginx-forum at nginx.us (karanj) Date: Fri, 17 Feb 2012 08:38:48 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <002045ce7ddd2b4a8752b5000951f682.NginxMailingListEnglish@forum.nginx.org> References: <20120217112624.GF67687@mdounin.ru> <002045ce7ddd2b4a8752b5000951f682.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5857ee2b6e11ecf8fe3d946338b62a05.NginxMailingListEnglish@forum.nginx.org> One correction - /tf/test.php - This file sets the header - header('X-ErrorPage: /tf2/test.php'); and then sends header("HTTP/1.1 401 Unauthorized"); Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222553#msg-222553 From mdounin at mdounin.ru Fri Feb 17 14:18:43 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Feb 2012 18:18:43 +0400 Subject: nginx http auth module query In-Reply-To: <002045ce7ddd2b4a8752b5000951f682.NginxMailingListEnglish@forum.nginx.org> References: <20120217112624.GF67687@mdounin.ru> <002045ce7ddd2b4a8752b5000951f682.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120217141843.GH67687@mdounin.ru> Hello! On Fri, Feb 17, 2012 at 08:30:33AM -0500, karanj wrote: > It doesn't work for me. > > I have 3 files running under HipHop - > 1) /tf/test.php - This file sets the session variable - > $_SESSION['X-ErrorPage']='/tf2/test.php'; > and then sends header("HTTP/1.1 401 Unauthorized"); > 2) /tf2/test2.php - This prints "It works" > 3) /tf2/test.php - This prints "Error Page" > > My config looks like this - > The output on running http://112.11.23.221:8080/tf2/test2.php should be > - "Error Page". But this is not happening. 
> > The nginx error logs shows the following - > > 2012/02/17 18:52:45 [error] 10103#0: *4 the rewritten URI has a zero > length, client: 122.179.93.88, server: 112.11.23.221, request: "GET > /tf2/test2.php HTTP/1.1", host: "112.11.23.221:8080" > > server { > listen 8080; > server_name 112.11.23.221; > > location / { > auth_request /tf/test.php; > proxy_pass http://127.0.0.1:4247; > error_page 401 = /40x.html; > } > location /tf/test.php{ > proxy_pass http://127.0.0.1:4247; > } > > location = /40x.html { > auth_request_set $err $upstream_http_x_errorpage; > rewrite /40x.html $err; > proxy_pass http://127.0.0.1:4247; > } > } You have to use auth_request_set in the same location with auth_request directive. location / { auth_request /tf/test.php; auth_request_set $err $upstream_http_x_errorpage; ... } ... Maxim Dounin From nginx-forum at nginx.us Fri Feb 17 14:41:40 2012 From: nginx-forum at nginx.us (srk.) Date: Fri, 17 Feb 2012 09:41:40 -0500 (EST) Subject: Unable to configure nginx as reverse proxy Message-ID: Hi I am new to nginx and am trying to configure nginx as reverse proxy. Its broadcasting on the ip but its not forwarding the requests to the backend servers and nothing found in logs as well. Please let me know where i m doing the mistake. My topology and config---- 2 webservers where apache is running and am configuring nginx as reversee proxy in one of them(X.X.X.9). X.X.X.8 is the ip which gets requests from the firewall. upstream backend { server X.X.X.9:80; server X.X.X.10:80; } map $http_host $name { hostnames; default 0; include domainlist; } server { listen X.X.X.8:80; server_name _; location / { proxy_pass http://backend/$http_host/; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222558,222558#msg-222558 From nginx-forum at nginx.us Fri Feb 17 14:53:55 2012 From: nginx-forum at nginx.us (Kraiser) Date: Fri, 17 Feb 2012 09:53:55 -0500 (EST) Subject: nginx 0day exploit for nginx + fastcgi PHP In-Reply-To: References: Message-ID: <2728a853a235564c81231cf4a1de8bd1.NginxMailingListEnglish@forum.nginx.org> Seriously if it doesn't works for lighttppd that use php fcgi and works for nginx it is nginx issue isn't it ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,88845,222561#msg-222561 From nginx-forum at nginx.us Fri Feb 17 15:31:38 2012 From: nginx-forum at nginx.us (karanj) Date: Fri, 17 Feb 2012 10:31:38 -0500 (EST) Subject: nginx http auth module query In-Reply-To: <20120217141843.GH67687@mdounin.ru> References: <20120217141843.GH67687@mdounin.ru> Message-ID: That worked. Awesome and thanks a lot ! -- Karan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222520,222564#msg-222564 From appa at perusio.net Fri Feb 17 15:53:16 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Fri, 17 Feb 2012 15:53:16 +0000 Subject: Unable to configure nginx as reverse proxy In-Reply-To: References: Message-ID: <87d39dmp77.wl%appa@perusio.net> On 17 Fev 2012 14h41 WET, nginx-forum at nginx.us wrote: > Hi > > I am new to nginx and am trying to configure nginx as reverse > proxy. Its broadcasting on the ip but its not forwarding the > requests to the backend servers and nothing found in logs as well. > > Please let me know where i m doing the mistake. > > My topology and config---- > 2 webservers where apache is running and am configuring nginx as > reversee proxy in one of them(X.X.X.9). > X.X.X.8 is the ip which gets requests from the firewall. 
> > > upstream backend { > server X.X.X.9:80; > server X.X.X.10:80; > } > map $http_host $name { > hostnames; > default 0; > include domainlist; > } > server { > listen X.X.X.8:80; > server_name _; > > location / { > proxy_pass http://backend/$http_host/; > } > } It's unclear, for me at least, what you're trying to achieve. What's the point of setting the $name variable in the above map directive? Do you want to route the request to separate backends based on the Host header? Is that what you want? Please elaborate. --- appa From r at roze.lv Fri Feb 17 16:40:05 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 17 Feb 2012 18:40:05 +0200 Subject: nginx 0day exploit for nginx + fastcgi PHP In-Reply-To: <2728a853a235564c81231cf4a1de8bd1.NginxMailingListEnglish@forum.nginx.org> References: <2728a853a235564c81231cf4a1de8bd1.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Seriously if it doesn't works for lighttppd that use php fcgi and works > for nginx it is nginx issue isn't it ? With certain configuration similar issues are also in apache but it doesn't necessary mean the webserver is at fault. Since php 5.3.9 the fpm sapi has 'security.limit_extensions' (defaults to '.php') which limits the extensions of the main script FPM will allow to parse. It should prevent poor configuration mistakes. rr From nginx-forum at nginx.us Fri Feb 17 18:19:15 2012 From: nginx-forum at nginx.us (srk.) Date: Fri, 17 Feb 2012 13:19:15 -0500 (EST) Subject: Unable to configure nginx as reverse proxy In-Reply-To: <87d39dmp77.wl%appa@perusio.net> References: <87d39dmp77.wl%appa@perusio.net> Message-ID: <35cd403012874d2da036cc20b701b4f8.NginxMailingListEnglish@forum.nginx.org> No i just want nginx to work as a software load balancer. since, there are multiple domains, i just want to map them without the need to specify each and every domain name in the conf. in the file included, i have given entries like this .XYZ.org XYZ.org; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222558,222570#msg-222570 From gt0057 at gmail.com Fri Feb 17 19:13:33 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Fri, 17 Feb 2012 20:13:33 +0100 Subject: Auth user with postgresql In-Reply-To: <20120217125414.GG22076@craic.sysops.org> References: <20120217125414.GG22076@craic.sysops.org> Message-ID: Hello Piotr Sikora and Francis Daly, thanks for the quick reply, The login and password is okay, but in my database passwords are stored in MD5, while the password is passed in the clear as I can solve the problem? Thanks again for your kindness and patience 2012/2/17 Francis Daly : > On Fri, Feb 17, 2012 at 01:39:40PM +0100, Giuseppe Tofoni wrote: > > Hi there, > >> I did several searches on google but found nothing. >> Where is the mistake? >> Thanks for the help > > This looks very like. > > http://forum.nginx.org/read.php?2,220692 > > That may be useful. > > All the best, > > ? ? ? ?f > -- > Francis Daly ? ? ? 
?francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From piotr.sikora at frickle.com Fri Feb 17 19:27:40 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Fri, 17 Feb 2012 20:27:40 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> Message-ID: <0B07E57D5566425782EC176CCC2EF7E4@Desktop> Hi, > The login and password is okay, but in my database passwords are > stored in MD5, while the password is passed in the clear as I can > solve the problem? You can use "set_md5" directive from ngx_set_misc module [1]: set_md5 $remote_passwd; /* must be before postgres escape */ postgres_escape $pass $remote_passwd; [1] https://github.com/agentzh/set-misc-nginx-module Best regards, Piotr Sikora < piotr.sikora at frickle.com > From gt0057 at gmail.com Fri Feb 17 20:25:06 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Fri, 17 Feb 2012 21:25:06 +0100 Subject: Auth user with postgresql In-Reply-To: <0B07E57D5566425782EC176CCC2EF7E4@Desktop> References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> Message-ID: Hi, Piotr Sikora I followed your instructions but when I start nginx the following error: nginx: [emerg] the duplicate "remote_passwd" variable in /etc/nginx/nginx.conf:60 ........ location =/t1 { internal; more_set_headers -s 401 'WWW-Authenticate: Basic realm="Cartella test1"'; postgres_escape $user $remote_user; set_md5 $remote_passwd; postgres_escape $pass $remote_passwd; postgres_pass database; postgres_query "SELECT user FROM usertable WHERE user=$user AND pwd=$pass"; postgres_rewrite no_rows 401; postgres_output none; } ............ How can I solve the problem? Many thanks again. Giuseppe p.s.:it is normal to immediately start nginx requires authentication to the database? 2012/2/17 Piotr Sikora : > Hi, > > >> The login and password is okay, but in my database passwords are >> stored in MD5, while the password is passed in the clear as I can >> solve the problem? > > > You can use "set_md5" directive from ngx_set_misc module [1]: > > ? set_md5 $remote_passwd; /* must be before postgres escape */ > ? postgres_escape $pass $remote_passwd; > > [1] https://github.com/agentzh/set-misc-nginx-module > > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From piotr.sikora at frickle.com Fri Feb 17 21:04:54 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Fri, 17 Feb 2012 22:04:54 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> Message-ID: <7D43058623FE4D5F84887D5F68709CC0@Desktop> Hi, > nginx: [emerg] the duplicate "remote_passwd" variable in > /etc/nginx/nginx.conf:60 > (...) Right, "$remote_passwd" is read-only, try this instead: set_md5 $md5_passwd $remote_passwd; postgres_escape $pass $md5_passwd; Best regards, Piotr Sikora < piotr.sikora at frickle.com > From ebade at mathbiol.org Fri Feb 17 23:02:52 2012 From: ebade at mathbiol.org (Bade Iriabho) Date: Fri, 17 Feb 2012 17:02:52 -0600 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files In-Reply-To: References: Message-ID: Quintin, that is what I ended up using. 
It would be nice to get these other issues resolved :) Bade On Thu, Feb 16, 2012 at 9:25 PM, Quintin Par wrote: > Quick Q: > > Epel and CentOS repo still refers to 0.84. Is this under the community > influence to upgrade to 1+? > > -Quintin > > On Fri, Feb 17, 2012 at 3:47 AM, Teck Choon Giam wrote: > >> On Fri, Feb 17, 2012 at 6:04 AM, Bade Iriabho wrote: >> > Hello, >> > >> > Following instructions on http://nginx.org/en/download.html, I tried to >> > install NginX on CentOS 6 and it failed. See the three approaches I used >> > below. >> > >> > First Try >> > >> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >> > $ rpm -Uvh >> > >> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >> > Retrieving >> > >> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >> > error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, key >> ID >> > 7bd9bf62 >> > error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed >> >> Did you try: >> >> # rpm -vvv --rebuilddb >> # yum clean all >> # yum localinstall --nogpgcheck >> >> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >> # yum -y install nginx >> >> Thanks. >> >> Kindest regards, >> Giam Teck Choon >> >> >> > >> > Second Try >> > >> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >> > $ nano /etc/yum.repos.d/CentOS-Nginx.repo >> > # Type the following >> > [nginx] >> > name=nginx repo >> > baseurl=http://nginx.org/packages/centos/6/$basearch/ >> > gpgcheck=0 >> > enabled=1 >> > >> > $ yum install nginx >> > Loaded plugins: fastestmirror >> > Loading mirror speeds from cached hostfile >> > * base: dist1.800hosting.com >> > * extras: centos.mirror.lstn.net >> > * updates: mirror.raystedman.net >> > base >> > | 1.1 kB 00:00 >> > c5-testing >> > | 951 B 00:00 >> > extras >> > | 2.1 kB 00:00 >> > nginx >> > | 1.3 kB 00:00 >> > nginx/primary >> > | 2.6 kB 00:00 >> > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: >> [Errno >> > -3] Error performing checksum >> > Trying other mirror. >> > nginx/primary >> > | 2.6 kB 00:00 >> > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: >> [Errno >> > -3] Error performing checksum >> > Trying other mirror. >> > Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more >> > mirrors to try. >> > >> > Third Try >> > >> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >> > $ wget >> > >> http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm >> > --2012-02-16 15:55:30-- >> > >> http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm >> > Resolving nginx.org... 206.251.255.63 >> > Connecting to nginx.org|206.251.255.63|:80... connected. >> > HTTP request sent, awaiting response... 
200 OK >> > Length: 325076 (317K) [application/x-redhat-package-manager] >> > Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' >> > >> > >> 100%[===================================================================================================================================================================================================>] >> > 325,076 944K/s in 0.3s >> > >> > 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' >> saved >> > [325076/325076] >> > >> > $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm >> > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 signature: >> BAD, >> > key ID 7bd9bf62 >> > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed >> > >> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >> > >> > So can someone look into this, I am not sure who to ask. >> > >> > Regards, >> > Bade I. >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gt0057 at gmail.com Fri Feb 17 23:29:32 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Sat, 18 Feb 2012 00:29:32 +0100 Subject: Auth user with postgresql In-Reply-To: <7D43058623FE4D5F84887D5F68709CC0@Desktop> References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: Hi Hi, I have no errors, nginx starts correctly, but the password are calculated differently: password : pippo created with htpasswd h7n37SzKs.aO6 (test with auth_basic_user_file is ok) created with set_md5 0c88028bf3aa6a6a143ed846f2be1ea4 error (STATEMENT: SELECT user FROM usertable WHERE user='donalduck' AND pwd='0c88028bf3aa6a6a143ed846f2be1ea4') Thanks again Giuseppe 2012/2/17 Piotr Sikora : > Hi, > >> nginx: [emerg] the duplicate "remote_passwd" variable in >> /etc/nginx/nginx.conf:60 >> (...) > > > Right, "$remote_passwd" is read-only, try this instead: > > ? set_md5 ? ? ? ? ? ? ? ?$md5_passwd $remote_passwd; > ? postgres_escape ?$pass $md5_passwd; > > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From andrew at andrewloe.com Sat Feb 18 00:03:46 2012 From: andrew at andrewloe.com (W. Andrew Loe III) Date: Fri, 17 Feb 2012 16:03:46 -0800 Subject: nginx worker stuck, potential mod_zip bug Message-ID: I'm running an older version of nginx (0.7.67) with mod_zip 1.1.6. I believe we have a found a rare bug, I'm trying to figure out of it is with mod_zip or with nginx, and if upgrading nginx will potentially resolve it. The symptom is a worker process getting "stuck" at 100% CPU, leaving all connections in CLOSE_WAIT, and servicing no requests. It appears that the trigger for this is downloading an archive from mod_zip, but we have never been able to reproduce it, only observe it in production. 
I was finally able to catch one of the workers and get a backtrace from gdb: $ cat backtrace.log | addr2line -e /usr/lib/debug/usr/sbin/nginx -f ngx_http_postpone_filter /build/buildd/nginx-0.7.67/src/http/ngx_http_postpone_filter_module.c:125 ngx_http_ssi_body_filter /build/buildd/nginx-0.7.67/src/http/modules/ngx_http_ssi_filter_module.c:430 ngx_http_charset_body_filter /build/buildd/nginx-0.7.67/src/http/modules/ngx_http_charset_filter_module.c:643 ngx_http_zip_body_filter /build/buildd/nginx-0.7.67/modules/mod_zip-1.1.6/ngx_http_zip_module.c:336 ngx_output_chain /build/buildd/nginx-0.7.67/src/core/ngx_output_chain.c:58 ngx_http_copy_filter /build/buildd/nginx-0.7.67/src/http/ngx_http_copy_filter_module.c:114 ngx_http_range_body_filter /build/buildd/nginx-0.7.67/src/http/modules/ngx_http_range_filter_module.c:549 ngx_http_output_filter /build/buildd/nginx-0.7.67/src/http/ngx_http_core_module.c:1716 ngx_event_pipe_write_to_downstream /build/buildd/nginx-0.7.67/src/event/ngx_event_pipe.c:627 ngx_http_upstream_process_upstream /build/buildd/nginx-0.7.67/src/http/ngx_http_upstream.c:2509 ngx_http_upstream_handler /build/buildd/nginx-0.7.67/src/http/ngx_http_upstream.c:844 ngx_event_process_posted /build/buildd/nginx-0.7.67/src/event/ngx_event_posted.c:30 ngx_worker_process_cycle /build/buildd/nginx-0.7.67/src/os/unix/ngx_process_cycle.c:793 ngx_spawn_process /build/buildd/nginx-0.7.67/src/os/unix/ngx_process.c:201 ngx_reap_children /build/buildd/nginx-0.7.67/src/os/unix/ngx_process_cycle.c:612 main /build/buildd/nginx-0.7.67/src/core/nginx.c:396 ?? ??:0 _start ??:0 ?? ??:0 ?? ??:0 ?? ??:0 ?? ??:0 ?? ??:0 ?? ??:0 I also had the log in debug mode (it is incredibly large ~ 150GB uncompressed) and it is completely filled with the following little loop: 2012/02/17 11:58:45 [debug] 10150#0: *1888 mod_zip: entering subrequest body filter 2012/02/17 11:58:45 [debug] 10150#0: *1888 http postpone filter "/s3/bucket/key" 0000000000000000 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: 0 "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write busy: 8192 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write: out:0000000000000000, f:1 2012/02/17 11:58:45 [debug] 10150#0: *1888 http output filter "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 mod_zip: entering subrequest body filter 2012/02/17 11:58:45 [debug] 10150#0: *1888 http postpone filter "/s3/bucket/key" 0000000000000000 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: 0 "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write busy: 8192 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write: out:0000000000000000, f:1 2012/02/17 11:58:45 [debug] 10150#0: *1888 http output filter "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 mod_zip: entering subrequest body filter 2012/02/17 11:58:45 [debug] 10150#0: *1888 http postpone filter "/s3/bucket/key" 0000000000000000 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: 0 "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write busy: 8192 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write: out:0000000000000000, f:1 2012/02/17 11:58:45 [debug] 10150#0: *1888 http output filter "/s3/bucket/key" 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: "/s3/bucket/key" Am I right in assuming this is a bug in mod_zip, it looks like a buffer is never being drained to the client? 
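
A faster way to confirm this kind of spin on a live worker, without collecting another 150 GB debug log, is to attach gdb to the busy process and dump its stack a few times; if the same pipe/output-filter frames come back on every run, the worker is looping rather than waiting on a slow client. A rough sketch (12345 stands in for the PID of the stuck worker; gdb and the nginx debug symbols are assumed to be installed):

    # identify the worker burning CPU
    top -b -n 1 | grep "nginx: worker"

    # print its current stack, then detach so it keeps running
    gdb -batch -ex "bt" -p 12345

    # repeat a few times; identical ngx_event_pipe / output filter
    # frames each time point at a busy loop rather than blocking I/O
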
From piotr.sikora at frickle.com Sat Feb 18 00:04:51 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Sat, 18 Feb 2012 01:04:51 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: Hi, > I have no errors, nginx starts correctly, but the password are > calculated differently: > > password : pippo > > created with htpasswd h7n37SzKs.aO6 (test with > auth_basic_user_file is ok) > created with set_md5 0c88028bf3aa6a6a143ed846f2be1ea4 error Uhm, and what value do you have in the database? MD5("pippo") = 0c88028bf3aa6a6a143ed846f2be1ea4 so it would seem that you don't store MD5 hashes after all? Best regards, Piotr Sikora < piotr.sikora at frickle.com > From quintinpar at gmail.com Sat Feb 18 02:24:45 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sat, 18 Feb 2012 07:54:45 +0530 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files In-Reply-To: References: Message-ID: Yes. I understand. On Sat, Feb 18, 2012 at 4:32 AM, Bade Iriabho wrote: > Quintin, that is what I ended up using. It would be nice to get these > other issues resolved :) > > Bade > > > On Thu, Feb 16, 2012 at 9:25 PM, Quintin Par wrote: > >> Quick Q: >> >> Epel and CentOS repo still refers to 0.84. Is this under the community >> influence to upgrade to 1+? >> >> -Quintin >> >> On Fri, Feb 17, 2012 at 3:47 AM, Teck Choon Giam > > wrote: >> >>> On Fri, Feb 17, 2012 at 6:04 AM, Bade Iriabho >>> wrote: >>> > Hello, >>> > >>> > Following instructions on http://nginx.org/en/download.html, I tried >>> to >>> > install NginX on CentOS 6 and it failed. See the three approaches I >>> used >>> > below. >>> > >>> > First Try >>> > >>> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >>> > $ rpm -Uvh >>> > >>> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >>> > Retrieving >>> > >>> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >>> > error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, >>> key ID >>> > 7bd9bf62 >>> > error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed >>> >>> Did you try: >>> >>> # rpm -vvv --rebuilddb >>> # yum clean all >>> # yum localinstall --nogpgcheck >>> >>> http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm >>> # yum -y install nginx >>> >>> Thanks. 
>>> >>> Kindest regards, >>> Giam Teck Choon >>> >>> >>> > >>> > Second Try >>> > >>> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >>> > $ nano /etc/yum.repos.d/CentOS-Nginx.repo >>> > # Type the following >>> > [nginx] >>> > name=nginx repo >>> > baseurl=http://nginx.org/packages/centos/6/$basearch/ >>> > gpgcheck=0 >>> > enabled=1 >>> > >>> > $ yum install nginx >>> > Loaded plugins: fastestmirror >>> > Loading mirror speeds from cached hostfile >>> > * base: dist1.800hosting.com >>> > * extras: centos.mirror.lstn.net >>> > * updates: mirror.raystedman.net >>> > base >>> > | 1.1 kB 00:00 >>> > c5-testing >>> > | 951 B 00:00 >>> > extras >>> > | 2.1 kB 00:00 >>> > nginx >>> > | 1.3 kB 00:00 >>> > nginx/primary >>> > | 2.6 kB 00:00 >>> > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: >>> [Errno >>> > -3] Error performing checksum >>> > Trying other mirror. >>> > nginx/primary >>> > | 2.6 kB 00:00 >>> > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: >>> [Errno >>> > -3] Error performing checksum >>> > Trying other mirror. >>> > Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more >>> > mirrors to try. >>> > >>> > Third Try >>> > >>> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >>> > $ wget >>> > >>> http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm >>> > --2012-02-16 15:55:30-- >>> > >>> http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm >>> > Resolving nginx.org... 206.251.255.63 >>> > Connecting to nginx.org|206.251.255.63|:80... connected. >>> > HTTP request sent, awaiting response... 200 OK >>> > Length: 325076 (317K) [application/x-redhat-package-manager] >>> > Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' >>> > >>> > >>> 100%[===================================================================================================================================================================================================>] >>> > 325,076 944K/s in 0.3s >>> > >>> > 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' >>> saved >>> > [325076/325076] >>> > >>> > $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm >>> > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 >>> signature: BAD, >>> > key ID 7bd9bf62 >>> > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed >>> > >>> +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ >>> > >>> > So can someone look into this, I am not sure who to ask. >>> > >>> > Regards, >>> > Bade I. >>> > >>> > _______________________________________________ >>> > nginx mailing list >>> > nginx at nginx.org >>> > http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kworthington at gmail.com Sat Feb 18 03:07:47 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 17 Feb 2012 22:07:47 -0500 Subject: nginx-1.1.15 In-Reply-To: <20120215144020.GM67687@mdounin.ru> References: <20120215144020.GM67687@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.1.15 For Windows http://goo.gl/4zVP7 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Thank you, Kevin -- Kevin Worthington kworthington *@~ #gmail} [dot) {com] http://www.kevinworthington.com/ On Wed, Feb 15, 2012 at 9:40 AM, Maxim Dounin wrote: > Changes with nginx 1.1.15 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?15 Feb 2012 > > ? ?*) Feature: the "disable_symlinks" directive. > > ? ?*) Feature: the "proxy_cookie_domain" and "proxy_cookie_path" > ? ? ? directives. > > ? ?*) Bugfix: nginx might log incorrect error "upstream prematurely closed > ? ? ? connection" instead of correct "upstream sent too big header" one. > ? ? ? Thanks to Feibo Li. > > ? ?*) Bugfix: nginx could not be built with the ngx_http_perl_module if the > ? ? ? --with-openssl option was used. > > ? ?*) Bugfix: internal redirects to named locations were not limited. > > ? ?*) Bugfix: calling $r->flush() multiple times might cause errors in the > ? ? ? ngx_http_gzip_filter_module. > > ? ?*) Bugfix: temporary files might be not removed if the "proxy_store" > ? ? ? directive were used with SSI includes. > > ? ?*) Bugfix: in some cases non-cacheable variables (such as the $args > ? ? ? variable) returned old empty cached value. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if too > ? ? ? many SSI subrequests were issued simultaneously; the bug had appeared > ? ? ? in 0.7.25. > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From quintinpar at gmail.com Sat Feb 18 07:36:46 2012 From: quintinpar at gmail.com (Quintin Par) Date: Sat, 18 Feb 2012 13:06:46 +0530 Subject: Old thread: Cache for non-cookie users andfresh for cookie users In-Reply-To: References: <87vcneqlly.wlappa@perusio.net> Message-ID: Not directly related, but here?s an entry from 37Signal?s David who talks about invalidating individual cache entries and keeping it clean as opposing to a whole purge. http://37signals.com/svn/posts/3112-how-basecamp-next-got-to-be-so-damn-fast-without-using-much-client-side-ui This is the way they keep cache fresh in with Basecamp, though it is in memcached. - Quintin On Sat, Feb 11, 2012 at 11:53 PM, Max wrote: > > > > 11 ??????? 2012, 04:07 ?? Ant?nio P. P. Almeida : > > On 10 Fev 2012 17h47 WET, nginxyz at mail.ru wrote: > > > > > > > > The default behaviour is not to cache POST method request responses, > > > but I turned caching of POST method request responses ON, so I had > > > to make sure the cache is bypassed for POST method requests (but > > > not for GET or HEAD method requests!). All POST method requests > > > are passed on to the backend without checking for a match in the > > > cache, but - CONTRARY to the default behavior - all POST method > > > request responses are cached. 
> > > > > > Without the @post_and_refresh_cache location block and without > > > the proxy_cache_bypass directive, nginx would check the cache > > > and return the content from the cache (put there by a previous > > > GET request response, for example) and would not pass the POST > > > method request on to the backend, which is definitely not what > > > you want in this case. > > > > If what the OP wanted was to distinguish between cached POST and GET > > request responses then just add $request_method to the cache key. > > That's not what the OP wanted, and that's not what the approach > I described does. The OP wants to be able to invalidate cache entries > on demand without using 3rd party modules. Since, AFAIK, there's no > way to do that without using 3rd party modules, the alternative is to > make sure the cache is as fresh as possible. This can be done by making > sure POST method requests refresh the appropriate cache entries > automatically and/or by having special location blocks for refreshing > specific cache entries on demand. > > Max > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gt0057 at gmail.com Sat Feb 18 10:49:00 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Sat, 18 Feb 2012 11:49:00 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: Hi, In my database passwords are all stored in MD5 has been created using PHP. The value in the database for user pippo is h7n37SzKs.aO6 and there are no problems with APACHE I used the same password with python and there are no problems. I would like to use nginx but if I do not solve the problem of the passwords can not leave APACHE and PHP. Many Many thanks for your patience Giuseppe 2012/2/18 Piotr Sikora : > Hi, > >> I have no errors, nginx starts correctly, but the password ?are >> calculated differently: >> >> password : pippo >> >> created with ?htpasswd ? ? ? h7n37SzKs.aO6 ? ?(test with >> auth_basic_user_file is ok) >> created with set_md5 ? ? ? ? 0c88028bf3aa6a6a143ed846f2be1ea4 ? error > > > Uhm, and what value do you have in the database? > > ? MD5("pippo") = 0c88028bf3aa6a6a143ed846f2be1ea4 > > so it would seem that you don't store MD5 hashes after all? > > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mizanniit at gmail.com Sat Feb 18 12:04:15 2012 From: mizanniit at gmail.com (Mizan) Date: Sat, 18 Feb 2012 18:04:15 +0600 Subject: Complexity of nginx Message-ID: Hello, Would tell me how to reduce the number of spawned php-fpm processes in order to reduce the load in your server? 
Results are given below: ================== root at server [~]# top -c top - 10:56:37 up 6 days, 2:51, 1 user, load average: 11.07, 10.88, 11.74 Tasks: 420 total, 1 running, 419 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.3%sy, 0.9%ni, 85.8%id, 12.2%wa, 0.1%hi, 0.6%si, 0.0%st Mem: 12289752k total, 4846176k used, 7443576k free, 267356k buffers Swap: 14352376k total, 0k used, 14352376k free, 3884864k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8656 root 33 18 37388 6052 1364 D 1.0 0.0 0:00.03 cpanellogd - updating bandwidth 10457 nobody 25 10 136m 9.8m 7128 S 0.7 0.1 0:02.25 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10855 nobody 15 0 46000 4324 916 D 0.7 0.0 0:07.40 nginx: worker process 10856 nobody 16 0 46140 4432 916 D 0.7 0.0 0:07.63 nginx: worker process 10857 nobody 15 0 45472 3800 916 S 0.7 0.0 0:07.47 nginx: worker process 10861 nobody 15 0 45968 4192 916 D 0.7 0.0 0:06.86 nginx: worker process 12656 nobody 25 10 136m 11m 8328 S 0.7 0.1 0:06.60 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12697 nobody 25 10 136m 11m 8508 S 0.7 0.1 0:06.57 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 19281 root 15 0 13024 1404 820 R 0.7 0.0 0:10.74 top -c 1116 root 10 -5 0 0 0 S 0.3 0.0 3:54.81 [kjournald] 2841 nobody 25 10 136m 10m 7772 S 0.3 0.1 0:02.82 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 3256 nobody 25 10 136m 10m 8000 S 0.3 0.1 0:04.98 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10452 nobody 26 10 136m 10m 7092 S 0.3 0.1 0:02.23 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10453 nobody 25 10 136m 10m 7344 S 0.3 0.1 0:02.32 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10460 nobody 25 10 136m 9.9m 7164 S 0.3 0.1 0:02.28 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10461 nobody 25 10 136m 9m 7260 S 0.3 0.1 0:02.48 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10466 nobody 25 10 136m 10m 7400 S 0.3 0.1 0:02.30 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10831 nobody 26 10 136m 10m 8032 S 0.3 0.1 0:05.07 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 10848 nobody 16 0 45376 3664 916 D 0.3 0.0 0:07.18 nginx: worker process 10850 nobody 15 0 45568 3828 916 S 0.3 0.0 0:07.08 nginx: worker process 10851 nobody 15 0 45796 4068 916 S 0.3 0.0 0:07.18 nginx: worker process 10853 nobody 15 0 46192 4524 916 S 0.3 0.0 0:07.30 nginx: worker process 10854 nobody 15 0 45688 3888 916 S 0.3 0.0 0:07.13 nginx: worker process 10859 nobody 15 0 46076 4468 916 D 0.3 0.0 0:07.65 nginx: worker process 12633 nobody 25 10 136m 11m 8372 S 0.3 0.1 0:06.57 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12635 nobody 25 10 136m 11m 8656 S 0.3 0.1 0:06.65 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12637 nobody 25 10 136m 10m 7904 S 0.3 0.1 0:06.47 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12638 nobody 25 10 136m 10m 8088 S 0.3 0.1 0:06.45 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12641 nobody 25 10 136m 10m 8260 S 0.3 0.1 0:06.75 
/usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12648 nobody 25 10 136m 11m 8272 S 0.3 0.1 0:06.44 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12650 nobody 25 10 136m 11m 8148 S 0.3 0.1 0:06.44 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12651 nobody 25 10 136m 11m 8268 S 0.3 0.1 0:06.55 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12652 nobody 25 10 136m 10m 8260 S 0.3 0.1 0:06.60 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12653 nobody 25 10 137m 11m 8380 S 0.3 0.1 0:06.49 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12654 nobody 25 10 136m 10m 7992 S 0.3 0.1 0:06.59 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12658 nobody 25 10 136m 11m 8640 S 0.3 0.1 0:06.74 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12663 nobody 25 10 136m 11m 8128 S 0.3 0.1 0:06.33 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12668 nobody 26 10 137m 11m 8600 S 0.3 0.1 0:06.76 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12669 nobody 25 10 136m 10m 8332 S 0.3 0.1 0:06.42 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12671 nobody 25 10 136m 10m 8216 S 0.3 0.1 0:06.63 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12674 nobody 25 10 136m 11m 8264 S 0.3 0.1 0:06.72 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf 12676 nobody 25 10 136m 11m 8460 S 0.3 0.1 0:06.39 /usr/local/nginx-php/bin/php-cgi --fpm --fpm-config /usr/local/nginx-php/etc/php-fpm.conf root at server [~]# ================== Which variables can be optimized to make the server stable. Also, domains usually have high hits. So far as above circumstances I ask for your co-operation. Thank you Mizanur Rahman From edho at myconan.net Sat Feb 18 12:37:43 2012 From: edho at myconan.net (Edho Arief) Date: Sat, 18 Feb 2012 19:37:43 +0700 Subject: Complexity of nginx In-Reply-To: References: Message-ID: Hello! On Sat, Feb 18, 2012 at 7:04 PM, Mizan wrote: > Hello, > Would tell me how to reduce the number of spawned php-fpm processes in > order to reduce the load in your server? Results are given below: With the amount of ram your server have, you shouldn't reduce your php-fpm processes. If the server isn't serving pages fast enough, try increasing worker processes and php max children. And your IO wait is rather high; check if there's any disk problem. More advanced caching configuration should also help reducing disk load. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From sb at waeme.net Sat Feb 18 14:11:29 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Sat, 18 Feb 2012 18:11:29 +0400 Subject: Someone need to update the latest stable CentOS (Possibly RedHat) NginX files In-Reply-To: References: Message-ID: On 17.02.2012, at 2:04, Bade Iriabho wrote: > Hello, > > Following instructions on http://nginx.org/en/download.html, I tried to install NginX on CentOS 6 and it failed. See the three approaches I used below. Repository is ok, but yours CentOS have nonstandard and bit paranoid setup. 
I do not even know how to force signature cheking with rpm, it is optional on CentOS/RHEL and produce only warning. Try to install with rpm -Uvh --nodigest --nosignature http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm then run rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-nginx then yum install nginx > > First Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ rpm -Uvh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > Retrieving http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm > error: /var/tmp/rpm-xfer.4j3Y9u: Header V4 RSA/SHA1 signature: BAD, key ID 7bd9bf62 > error: /var/tmp/rpm-xfer.4j3Y9u cannot be installed > > Second Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ nano /etc/yum.repos.d/CentOS-Nginx.repo > # Type the following > [nginx] > name=nginx repo > baseurl=http://nginx.org/packages/centos/6/$basearch/ > gpgcheck=0 > enabled=1 > > $ yum install nginx > Loaded plugins: fastestmirror > Loading mirror speeds from cached hostfile > * base: dist1.800hosting.com > * extras: centos.mirror.lstn.net > * updates: mirror.raystedman.net > base | 1.1 kB 00:00 > c5-testing | 951 B 00:00 > extras | 2.1 kB 00:00 > nginx | 1.3 kB 00:00 > nginx/primary | 2.6 kB 00:00 > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum > Trying other mirror. > nginx/primary | 2.6 kB 00:00 > http://nginx.org/packages/centos/6/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum > Trying other mirror. > Error: failure: repodata/primary.xml.gz from nginx: [Errno 256] No more mirrors to try. > > Third Try > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > $ wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > --2012-02-16 15:55:30-- http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.0.12-1.el6.ngx.x86_64.rpm > Resolving nginx.org... 206.251.255.63 > Connecting to nginx.org|206.251.255.63|:80... connected. > HTTP request sent, awaiting response... 200 OK > Length: 325076 (317K) [application/x-redhat-package-manager] > Saving to: `nginx-1.0.12-1.el6.ngx.x86_64.rpm' > > 100%[===================================================================================================================================================================================================>] 325,076 944K/s in 0.3s > > 2012-02-16 15:55:31 (944 KB/s) - `nginx-1.0.12-1.el6.ngx.x86_64.rpm' saved [325076/325076] > > $ rpm -ivh nginx-1.0.12-1.el6.ngx.x86_64.rpm > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm: Header V4 RSA/SHA1 signature: BAD, key ID 7bd9bf62 > error: nginx-1.0.12-1.el6.ngx.x86_64.rpm cannot be installed > +==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+ > > So can someone look into this, I am not sure who to ask. > > Regards, > Bade I. 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From piotr.sikora at frickle.com Sat Feb 18 14:18:21 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Sat, 18 Feb 2012 15:18:21 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: Hi, > In my database passwords are all stored in MD5 has been created using PHP. > The value in the database for user pippo is h7n37SzKs.aO6 and there > are no problems with APACHE But "h7n37SzKs.aO6" is not 128-bit value, so whatever it is, it cannot be MD5. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From gt0057 at gmail.com Sat Feb 18 18:45:28 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Sat, 18 Feb 2012 19:45:28 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: Hi, First, Sorry about the time you've lost for my problem. Hi reason, the password is not in MD5, but rather in DES (PHP --> crypt($verpas, CRYPT_STD_DES) What should I use instead of set_md5 ? DES on this page http://wiki.nginx.org/HttpSetMiscModule#Installation is never mentioned Thanks again Giuseppe 2012/2/18 Piotr Sikora : > Hi, > >> In my database passwords are all stored in MD5 has been created using PHP. >> The value in the database for user pippo is h7n37SzKs.aO6 and there >> are no problems with APACHE > > > But "h7n37SzKs.aO6" is not 128-bit value, so whatever it is, it cannot be > MD5. > > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From piotr.sikora at frickle.com Sat Feb 18 19:11:29 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Sat, 18 Feb 2012 20:11:29 +0100 Subject: Auth user with postgresql In-Reply-To: References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> Message-ID: <239288B8417143FE922B11C7222A1C88@Desktop> Hi, > Hi reason, the password is not in MD5, but rather in DES (PHP --> > crypt($verpas, CRYPT_STD_DES) > What should I use instead of set_md5 ? > DES on this page http://wiki.nginx.org/HttpSetMiscModule#Installation > is never mentioned I'm not aware of any module that would offer crypt() hashing for variables in nginx.conf. On the bright side, PostgreSQL's crypt() [1] should help you. Could you please try: postgres_query "SELECT user FROM usertable "WHERE user=$user AND pwd=crypt($pass, pwd)"; [1] http://www.postgresql.org/docs/9.1/static/pgcrypto.html Best regards, Piotr Sikora < piotr.sikora at frickle.com > From hyperstruct at gmail.com Sat Feb 18 20:43:11 2012 From: hyperstruct at gmail.com (Massimiliano Mirra) Date: Sat, 18 Feb 2012 21:43:11 +0100 Subject: Can proxy_cache gzip cached content? 
In-Reply-To: References: Message-ID: About this: > location /proxied-stuff { > proxy_set_header Accept-Encoding gzip; > proxy_cache_key "$scheme$host$request_uri"; > proxy_cache_valid 2d; > proxy_cache myapp_cache; > proxy_pass http://127.0.0.1:85; > } > I was hoping that gunzip'ping for clients that don't support compression would be as simple as adding the following inside the above block: if ($http_accept_encoding !~* gzip) { gunzip on; } But when nginx configuration is reloaded, I get: "nginx: [emerg] "gunzip" directive is not allowed here". I suppose I could rewrite the request to an internal location, then within that location's block re-set the proxy_cache_key accordingly. But perhaps there's an easier way? Massimiliano -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Feb 18 20:43:59 2012 From: nginx-forum at nginx.us (Samael) Date: Sat, 18 Feb 2012 15:43:59 -0500 (EST) Subject: Partial downloads Message-ID: Hello, guys, Recently, I've started seeing a strange problem - sometimes it happens that nginx only transfers only a small part of the file and the download gets interrupted. When I debug it, the issue narrows down to "client timed out (110: Connection timed out) while sending response to client", but this is certainly not the case, as it happens very quickly (almost immediately after starting the download, in a less than a second). I don't see anything unusual with tcpdump. I was wondering whether some of you has an idea what can be the root cause for this strange behaviour? Thanks in advance. nginx version: nginx/1.1.15 built by gcc 4.1.2 20080704 (Red Hat 4.1.2-51) TLS SNI support disabled configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/nginx.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-debug --with-http_ssl_module --with-http_geoip_module --with-http_sub_module --with-http_realip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_dav_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_uwsgi_module --with-file-aio --with-cc-opt='-O2 -g -m32 -march=i686 -mtune=generic -fasynchronous-unwind-tables' 2.6.18-274.7.1.el5PAE i686 aio on sendfile on directio on tcp_nodelay off tcp_nopush off open_file_cache on client_body_timeout 10 client_header_timeout 10 send_timeout 10 keepalive_timeout 30 max_ranges 5 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222614,222614#msg-222614 From mdounin at mdounin.ru Sat Feb 18 21:49:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 01:49:00 +0400 Subject: Partial downloads In-Reply-To: References: Message-ID: <20120218214859.GL67687@mdounin.ru> Hello! On Sat, Feb 18, 2012 at 03:43:59PM -0500, Samael wrote: > Hello, guys, > > Recently, I've started seeing a strange problem - sometimes it happens > that nginx only transfers only a small part of the file and the download > gets interrupted. 
When I debug it, the issue narrows down to "client > timed out (110: Connection timed out) while sending response to client", > but this is certainly not the case, as it happens very quickly (almost > immediately after starting the download, in a less than a second). I > don't see anything unusual with tcpdump. I was wondering whether some of > you has an idea what can be the root cause for this strange behaviour? > Thanks in advance. Could you please provide debug log? See http://wiki.nginx.org/Debugging for details. Maxim Dounin From mdounin at mdounin.ru Sat Feb 18 22:11:01 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 02:11:01 +0400 Subject: nginx worker stuck, potential mod_zip bug In-Reply-To: References: Message-ID: <20120218221101.GM67687@mdounin.ru> Hello! On Fri, Feb 17, 2012 at 04:03:46PM -0800, W. Andrew Loe III wrote: > I'm running an older version of nginx (0.7.67) with mod_zip 1.1.6. I > believe we have a found a rare bug, I'm trying to figure out of it is > with mod_zip or with nginx, and if upgrading nginx will potentially > resolve it. > > The symptom is a worker process getting "stuck" at 100% CPU, leaving > all connections in CLOSE_WAIT, and servicing no requests. It appears > that the trigger for this is downloading an archive from mod_zip, but > we have never been able to reproduce it, only observe it in > production. [...] > I also had the log in debug mode (it is incredibly large ~ 150GB > uncompressed) and it is completely filled with the following little > loop: > > > 2012/02/17 11:58:45 [debug] 10150#0: *1888 mod_zip: entering > subrequest body filter > 2012/02/17 11:58:45 [debug] 10150#0: *1888 http postpone filter > "/s3/bucket/key" 0000000000000000 > 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: 0 "/s3/bucket/key" > 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write busy: 8192 > 2012/02/17 11:58:45 [debug] 10150#0: *1888 pipe write: out:0000000000000000, f:1 > 2012/02/17 11:58:45 [debug] 10150#0: *1888 http output filter "/s3/bucket/key" > 2012/02/17 11:58:45 [debug] 10150#0: *1888 copy filter: "/s3/bucket/key" > 2012/02/17 11:58:45 [debug] 10150#0: *1888 mod_zip: entering > subrequest body filter > 2012/02/17 11:58:45 [debug] 10150#0: *1888 http postpone filter > "/s3/bucket/key" 0000000000000000 [...] > Am I right in assuming this is a bug in mod_zip, it looks like a > buffer is never being drained to the client? Quick look though mod_zip sources suggests it doesn't do anything for this request (subrequest). I would rather think you've hit something like this problem: http://trac.nginx.org/nginx/changeset/4136/nginx Try upgrading to 1.1.4+/1.0.7+ to see if it helps. 
Maxim Dounin From nginx-forum at nginx.us Sat Feb 18 22:13:37 2012 From: nginx-forum at nginx.us (Samael) Date: Sat, 18 Feb 2012 17:13:37 -0500 (EST) Subject: Partial downloads In-Reply-To: <20120218214859.GL67687@mdounin.ru> References: <20120218214859.GL67687@mdounin.ru> Message-ID: <8fc20d093560c824e662c20994eecc6f.NginxMailingListEnglish@forum.nginx.org> Of course - http://pastebin.com/raw.php?i=RnZv58Zq Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222614,222619#msg-222619 From mdounin at mdounin.ru Sat Feb 18 22:37:17 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 02:37:17 +0400 Subject: Partial downloads In-Reply-To: <8fc20d093560c824e662c20994eecc6f.NginxMailingListEnglish@forum.nginx.org> References: <20120218214859.GL67687@mdounin.ru> <8fc20d093560c824e662c20994eecc6f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120218223717.GN67687@mdounin.ru> Hello! On Sat, Feb 18, 2012 at 05:13:37PM -0500, Samael wrote: > Of course - http://pastebin.com/raw.php?i=RnZv58Zq The log in question suggests there is just normal send_timeout: ... 2012/02/18 22:03:36 [debug] 4368#0: *1 sendfile: @2273599 684918465 2012/02/18 22:03:36 [debug] 4368#0: *1 sendfile: 96360, @2273599 96360:684918465 2012/02/18 22:03:36 [debug] 4368#0: *1 http write filter 0835AC9C 2012/02/18 22:03:36 [debug] 4368#0: *1 http copy filter: -2 "/test/bigfile?" 2012/02/18 22:03:36 [debug] 4368#0: *1 http writer output filter: -2, "/test/bigfile?" 2012/02/18 22:03:36 [debug] 4368#0: *1 event timer: 23, old: 2450531909, new: 2450532133 2012/02/18 22:03:49 [debug] 4368#0: *1 event timer del: 23: 2450531909 2012/02/18 22:03:49 [debug] 4368#0: *1 http run request: "/test/bigfile?" 2012/02/18 22:03:49 [debug] 4368#0: *1 http writer handler: "/test/bigfile?" 2012/02/18 22:03:49 [info] 4368#0: *1 client timed out (110: Connection timed out) while sending response to client, client: 1.1.1.2, server: 1.1.1.1, request: "GET /test/bigfile HTTP/1.1", host: "1.1.1.1", referrer: "http://1.1.1.1/test/" 2012/02/18 22:03:49 [debug] 4368#0: *1 http finalize request: 408, "/test/bigfile?" a:1, c:1 ... Note that nothing happens with the connection in question between "22:03:36" and "22:03:49". Looks ok unless you have additional info which suggests that data from socket buffer was sent to the client and it's nginx fault it doesn't sent more data. It's slightly off from expected 10 seconds as per your config part you've provided, though this is likely related to the fact that your nginx processes are disk-bound and nginx wasn't able to process timer in time. You may want to take a look at sendfile_max_chunk directive, see http://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk Maxim Dounin From nginx-forum at nginx.us Sat Feb 18 23:23:43 2012 From: nginx-forum at nginx.us (Samael) Date: Sat, 18 Feb 2012 18:23:43 -0500 (EST) Subject: Partial downloads In-Reply-To: <20120218223717.GN67687@mdounin.ru> References: <20120218223717.GN67687@mdounin.ru> Message-ID: <2d2418a3ecd9d748bd1449d748528ffd.NginxMailingListEnglish@forum.nginx.org> Thank you, this was useful. I've tried it again and this time I correlated the events in the log and the tcpdump capture. It seems that the client is requesting a window update, which doesn't receive an answer and triggers the send timeout. I think that this is the issue. Thank you very much for your help. 
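For illustration, here is the sendfile_max_chunk suggestion above in minimal form, to be placed in the server or location block that serves the large files; the 512k figure is an assumption for the sketch, not a value recommended in this thread:

    sendfile           on;
    sendfile_max_chunk 512k;  # cap how much one sendfile() call may push at a time,
                              # so the worker returns to its timers (send_timeout) promptly
    send_timeout       10;    # the timeout discussed above

Limiting the chunk size keeps a disk-bound worker from spending so long inside a single sendfile() call that timers are handled late, which is consistent with the slightly-late timeout visible in the log above.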
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222614,222621#msg-222621 From mdounin at mdounin.ru Sat Feb 18 23:31:05 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 03:31:05 +0400 Subject: Can proxy_cache gzip cached content? In-Reply-To: References: Message-ID: <20120218233105.GO67687@mdounin.ru> Hello! On Sat, Feb 18, 2012 at 09:43:11PM +0100, Massimiliano Mirra wrote: > About this: > > > > location /proxied-stuff { > > proxy_set_header Accept-Encoding gzip; > > proxy_cache_key "$scheme$host$request_uri"; > > proxy_cache_valid 2d; > > proxy_cache myapp_cache; > > proxy_pass http://127.0.0.1:85; > > } > > > > I was hoping that gunzip'ping for clients that don't support compression > would be as simple as adding the following inside the above block: > > if ($http_accept_encoding !~* gzip) { > gunzip on; > } > > But when nginx configuration is reloaded, I get: "nginx: [emerg] "gunzip" > directive is not allowed here". > > I suppose I could rewrite the request to an internal location, then within > that location's block re-set the proxy_cache_key accordingly. But perhaps > there's an easier way? Yes. The easier way is to just write gunzip on; It will gunzip responses for clients which don't support gzip (as per Accept-Encoding and gzip_http_version/gzip_proxied/gzip_disabled, i.e. the same checks as done for gzip and gzip_static). Maxim Dounin From nginxyz at mail.ru Sun Feb 19 01:12:17 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Sun, 19 Feb 2012 05:12:17 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: <20120217111904.GD67687@mdounin.ru> References: <20120217111904.GD67687@mdounin.ru> Message-ID: 17 ??????? 2012, 15:19 ?? Maxim Dounin <mdounin at mdounin.ru>: > On Fri, Feb 17, 2012 at 12:28:14PM +0400, Max wrote: > > Maxim, you haven't even taken a look at my patch, have you? Because > > if you had, you wouldn't have made such unsubstantiated claims. > > I have, despite the fact that it was provided as a link only. I usually send in patches on postcards, but I had run out of postcards. I'm glad this Internet thing seems to work, although your replies seem to indicate your download request had header_only set to 1. :-P > I stand corrected. Your patch broke only $request variable, not > the $request_method (which always comes from the main request). You're wrong, my patch did not break anything. Go ahead and put any variables in those /private/ and /auth location blocks and see for yourself. I know it must be difficult to decipher, but could you try to guess what the following part of my patch does? 64 + /* 65 + * 1) Allocate a new request line string 66 + * (4 extra bytes for future compatibility just in case 67 + * a single letter HTTP request method is introduced). 
68 + */ 69 + 70 + request_line.data = ngx_pcalloc(r->pool, sr->request_line.len + 4); 71 + if (request_line.data == NULL) { 72 + return NGX_ERROR; 73 + } Feel free to verify this for yourself AGAIN: location /private/ { auth_request /auth; set $request_private $request; set $request_uri_private $request_uri; set $request_method_private $request_method; } location = /auth { set $request_auth $request; set $request_uri_auth $request_uri; set $request_method_auth $request_method; } Here's the debug log output: 1) Inside the /private/ location block: http script var: "GET /private/test HTTP/1.1" http script set $request_private http script var: "/private/test" http script set $request_uri_private http script var: "GET" http script set $request_method_private 2) Inside the /auth location block: http script var: "GET /private/test HTTP/1.1" http script set $request_auth http script var: "/private/test" http script set $request_uri_auth http script var: "GET" http script set $request_method_auth Here's the gdb session output: 322 sr->header_only = 1; /* <--- Look, it's your old friend! */ (gdb) 324 ctx->subrequest = sr; (gdb) 326 ngx_http_set_ctx(r, ctx, ngx_http_auth_request_module); (gdb) 328 return NGX_AGAIN; (gdb) print &r->request_line $1 = (ngx_str_t *) 0x48b06d88 <--- Take a good look. (gdb) print r->request_line $2 = {len = 26, data = 0x48b67000 "GET /private/test HTTP/1.1\r\nUser-Agent"} ^^^^^^^^^^- Take a good look at this, too. (gdb) print &sr->request_line $3 = (ngx_str_t *) 0x48b96874 <--- Does that look like 0x48b06d88 to you? (gdb) print sr->request_line $4 = {len = 27, data = 0x48b96c64 "HEAD /private/test HTTP/1.1"} ^^^^^^^^^^- Does that look like 0x48b67000 to you? (gdb) print sizeof("GET /private/test HTTP/1.1")-1 $5 = 26 (gdb) print sizeof("HEAD /private/test HTTP/1.1")-1 $6 = 27 (gdb) print r->method $7 = 2 (gdb) print r->method_name $8 = {len = 3, data = 0x48b67000 "GET /private/test HTTP/1.1\r\nUser-Agent"} (gdb) print sr->method $9 = 4 (gdb) print sr->method_name $10 = {len = 4, data = 0x81733f2 "HEAD "} In your previous reply to my post you wrote: > If you handle auth subrequests with proxy_pass, you may use > proxy_set_method to issue HEAD requests to backend. Or you may > use correct auth endpoint which doesn't return unneeded data. And in your latest reply you wrote: > Again: the sr->header_only workaround is anyway required, as > static file, or memcached, or fastcgi may be used as handler for > auth subrequest (and, actually, even some broken http backends may > return data to HEAD, not even talking about intended changes of > proxy_method). Without sr->header_only explicitly set you > will get response content before headers of the real response: First you argued that one should use "correct" authentication endpoints that do not return unneeded data, and now you're trying to make it seem like your workaround is there to allow the auth_request module to work with "incorrect" authentication endpoints that violate HTTP itself and other established protocols, but the main reason your workaround is there has to do with the fact that you don't know how to handle the request body without crashing nginx. That is also why you need to redesign your module if you want it to work with caching. > And, BTW, as far as I recall your code, it won't set > sr->header_only in case of HEAD requests. This is wrong, you > still need it even for HEADs. 
> > > I left your workaround the way it is to prevent people from shooting > > themselves in the foot by setting the proxy method to GET, but those > > who know about the proxy_method directive (BTW, you got the name wrong, > > there is no proxy_set_method directive), should know what they are doing. You're wrong, AGAIN. I left the workaround the way it is - do I hear an echo? Make a claim. Have your claim proven wrong by the next paragraph you quote. Keep the wrong claim and the quoted paragraph that proves you wrong in your reply anyway. Priceless. > Your patch tries to make the workaround a bit more smart, and > tries to make arbitrary configuration more efficient, but this is > wrong aproach: instead, it should be made less intrusive. > > The major problem with the workaround as of now is that it > prevents caching. And *this* should be addressed. My patch solves the GET initiated flood problem nicely without breaking anything, but it does not address the broken design of your module that's preventing caching in the first place. So instead of trying to prove me wrong and getting yourself proven wrong again and again (which you might find personally intrusive and embarrassing), maybe you could address the broken design of your module? > On the other hand, setting method/request line increase comlexity > and overhead in normal case, as well as subject to bugs (at least > two were identified above). Are you serious? You're comparing the complexity and overhead of completely in-memory operations (memory allocation, assignment, request line scanning and two memcpy() calls) on a few dozen bytes to reading from disk, processing, buffering and sending hundreds of kilobytes on a closed connection? Just to put things in perspective I did some profiling with nanosecond resolution and it turns out that on the oldest single-core CPU server I have access to, my patch adds 2550 ns (on average) to the overall processing time. Using the memchr() function instead of the ngx_strlchr() function to scan for the space character and scanning only up to the 8th byte (because currently the longest HTTP method request name is 7 characters long) gives roughly the same results (2600 ns on average), while using an optimized for loop directly shaves off another 200 ns, on average. Even without any micro-optimization, my patch adds less than 3 microseconds (0.003 ms) to the overall processing time on a very old server that you're unlikely to find in any production environment. 3000 nanoseconds. Horrible, isn't it? As for bugs, please do feel free to point them out. > Hope my position is clear enough It's clear that you keep making unsubstantiated claims that keep getting proven wrong, and that you'd rather spend 10 minutes writing a post to try to make it look like I broke your beloved module with my patch instead of taking 5 minutes to apply my patch, recompile nginx, and step through the patched function in a debugger to see what my patch really does, because you obviously do not understand what is going on in there. You seem to prefer conjecture to verifiable facts, so I see no point in discussing this further. Max From nginx-forum at nginx.us Sun Feb 19 02:25:51 2012 From: nginx-forum at nginx.us (mfouwaaz) Date: Sat, 18 Feb 2012 21:25:51 -0500 (EST) Subject: php exits with 502 Bad Gateway Message-ID: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> Hello I have a problem where php exits with a 502 Bad Gateway error every once in a while. It appears to be unpredictable. 
When it happens I have to re-start the vm -- restarting nginx doesn't fix it. The OS is Ubuntu running on VirtualBox and I am connecting to it over a bridged network only from the host Vista machine. This is a portion of the error log and also the configuration. Any help will be appreciated. 2012/02/18 18:12:31 [debug] 723#0: accept on 0.0.0.0:443, ready: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 accept: 192.168.1.69 fd:12 2012/02/18 18:12:31 [debug] 723#0: *1031 event timer add: 12: 60000:2472717397 2012/02/18 18:12:31 [debug] 723#0: *1031 epoll add event: fd:12 op:1 ev:80000001 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_do_handshake: -1 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_get_error: 2 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL handshake handler: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_do_handshake: 1 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL: TLSv1, cipher: "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1" 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_read: -1 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_get_error: 2 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_read: 1 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_read: 346 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_read: -1 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_get_error: 2 2012/02/18 18:12:31 [debug] 723#0: *1031 event timer del: 12: 2472717397 2012/02/18 18:12:31 [debug] 723#0: *1031 epoll add event: fd:12 op:3 ev:80000005 2012/02/18 18:12:31 [debug] 723#0: *1031 socket 13 2012/02/18 18:12:31 [debug] 723#0: *1031 epoll add connection: fd:13 ev:80000005 2012/02/18 18:12:31 [debug] 723#0: *1031 connect to 127.0.0.1:9000, fd:13 #1032 2012/02/18 18:12:31 [debug] 723#0: *1031 event timer add: 13: 60000:2472717424 2012/02/18 18:12:31 [error] 723#0: *1031 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.69, server: 192.168.1.68, request: "GET /pcode/register.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.68" 2012/02/18 18:12:31 [debug] 723#0: *1031 event timer del: 13: 2472717424 2012/02/18 18:12:31 [debug] 723#0: *1031 write new buf t:1 f:0 099429A4, pos 099429A4, size: 157 file: 0, size: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 write old buf t:1 f:0 099429A4, pos 099429A4, size: 157 file: 0, size: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 write new buf t:0 f:0 00000000, pos 080E7600, size: 120 file: 0, size: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 write new buf t:0 f:0 00000000, pos 080E6460, size: 53 file: 0, size: 0 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL buf copy: 157 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL buf copy: 120 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL buf copy: 53 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL to write: 330 2012/02/18 18:12:31 [debug] 723#0: *1031 SSL_write: 330 2012/02/18 18:12:31 [debug] 723#0: *1031 event timer add: 12: 75000:2472732425 ... 
and nginx.conf user root; worker_processes 4; events { } http { index index.php; include /etc/nginx/mime.types; default_type application/octet-stream; server { listen 80; location ~* /pcode/(register|loginout).php { rewrite ^ https://$host$uri permanent; } include /etc/nginx/server_params; } server { listen 443 ssl; ssl on; ssl_certificate /usr/local/nginx/conf/server.crt; ssl_certificate_key /usr/local/nginx/conf/server.key; include /etc/nginx/server_params; } } ...the include file server_params being called: server_name 192.168.1.68; root /usr/share/nginx/www; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug_event; location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_param HTTPS on; } location @rewrites{ rewrite ^ /index.php last; } #catch static file requests location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } #prevent hidden file requests --starting with a period location ~ /\. { access_log off; log_not_found off; deny all; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222625,222625#msg-222625 From mdounin at mdounin.ru Sun Feb 19 03:07:57 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 07:07:57 +0400 Subject: Making http_auth_request_module a first-class citizen? [patch] In-Reply-To: References: <20120217111904.GD67687@mdounin.ru> Message-ID: <20120219030756.GQ67687@mdounin.ru> Hello! On Sun, Feb 19, 2012 at 05:12:17AM +0400, Max wrote: > > 17 ??????? 2012, 15:19 ?? Maxim Dounin <mdounin at mdounin.ru>: > > On Fri, Feb 17, 2012 at 12:28:14PM +0400, Max wrote: > > > Maxim, you haven't even taken a look at my patch, have you? Because > > > if you had, you wouldn't have made such unsubstantiated claims. > > > > I have, despite the fact that it was provided as a link only. > > I usually send in patches on postcards, but I had run out of postcards. > I'm glad this Internet thing seems to work, although your replies seem > to indicate your download request had header_only set to 1. :-P Posting patches in the message itself makes review much easier. And you may also want to fix your email client to don't use html escaping in text mesages. > > I stand corrected. Your patch broke only $request variable, not > > the $request_method (which always comes from the main request). > > You're wrong, my patch did not break anything. Go ahead and put > any variables in those /private/ and /auth location blocks and > see for yourself. I know it must be difficult to decipher, but > could you try to guess what the following part of my patch does? It looks like you don't understand how variables work. Try the following without your patch and with it: log_format test "request: $request"; access_log /path/to/log test; location / { auth_request /auth; } location = /auth { proxy_pass http://some_auth_backend; proxy_set_header X-Original-Request $request; } [...] > location /private/ { > auth_request /auth; > set $request_private $request; This will calculate $request and cache it forever. As this will happen in main request context - everything will be good, i.e. original request line will be used everywhere. On the other hand, if $request calculation will happen in auth subrequest with your patch - the modified request line will be cached for the rest of request processing. [...] 
> > On the other hand, setting method/request line increase comlexity > > and overhead in normal case, as well as subject to bugs (at least > > two were identified above). > > Are you serious? You're comparing the complexity and overhead of > completely in-memory operations (memory allocation, assignment, > request line scanning and two memcpy() calls) on a few dozen > bytes to reading from disk, processing, buffering and sending > hundreds of kilobytes on a closed connection? It's sad you don't understand what I wrote. In the normal case there are *no* overhead for extra bytes sent, but there *is* overhead for extra processing. But it doesn't really matter though, it's complexity (and bugs) which really matters. And, just to make things more clear, complexity is about code complexity which makes maintanance, debugging and further modifications harder. [...] Maxim Dounin From nginx-forum at nginx.us Sun Feb 19 06:17:11 2012 From: nginx-forum at nginx.us (dapicester) Date: Sun, 19 Feb 2012 01:17:11 -0500 (EST) Subject: Dynamic reverse proxy configuration In-Reply-To: References: Message-ID: Thanks for the suggestion, I'll take a look. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222519,222629#msg-222629 From nginx-forum at nginx.us Sun Feb 19 08:41:52 2012 From: nginx-forum at nginx.us (huxuan) Date: Sun, 19 Feb 2012 03:41:52 -0500 (EST) Subject: Problem on proxy_pass the feedburner via nginx Message-ID: Hi guys, this is my first post in this forum. I want to use nginx to self host my feedburner page so as to make it possible to anyone who can access my VPS. But something is wrong as it shows nothing rather than the same as my feedburner page. The problem seems to be that there is a relative path link in the page, one shown as follows: "" the file path should be "http://feeds.feedburner.com/~d/styles/rss2full.xsl" but the browser locate at "http://feeds.huxuan.org/~d/styles/rss2full.xsl" I have try to solve the problem by referring the wiki and google but didn't got it. Thx for any help or suggestion my configuration is: ========== server { listen 80; server_name feeds.huxuan.org; access_log /home/huxuan/.log/www/feeds.huxuan.org.access.log; error_log /home/huxuan/.log/www/feeds.huxuan.org.error.log; location / { proxy_pass http://feeds.feedburner.com; proxy_set_header Host feeds.feedburner.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; rewrite ^(.*)$ /huxuan$1 break; } } ========== Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222631,222631#msg-222631 From hyperstruct at gmail.com Sun Feb 19 10:02:34 2012 From: hyperstruct at gmail.com (Massimiliano Mirra) Date: Sun, 19 Feb 2012 11:02:34 +0100 Subject: Can proxy_cache gzip cached content? In-Reply-To: <20120218233105.GO67687@mdounin.ru> References: <20120218233105.GO67687@mdounin.ru> Message-ID: On Sun, Feb 19, 2012 at 12:31 AM, Maxim Dounin wrote: > Yes. The easier way is to just write > > gunzip on; > > It will gunzip responses for clients which don't support gzip (as per > Accept-Encoding and gzip_http_version/gzip_proxied/gzip_disabled, > i.e. the same checks as done for gzip and gzip_static). > Thanks Max, I had come to the conclusion that it was always decompressing content but now I see I had been "testing" with curl -H 'Content-Encoding: gzip'. No wonder... Cheers, Massimiliano -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx at nginxuser.net Sun Feb 19 10:22:34 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 13:22:34 +0300 Subject: "error_page"& "return" bug? Message-ID: Hello, It appears that the error_page directive is ignored when a status code is returned in the server context. (running v1.0.12 Stable). Server { # listen etc ... error_page 503 /error_docs/custom503.html; return 503 location /error_docs { internal; alias /server/path/to/folder; } ... } Will always return the Nginx default 503 page. Same applies to all status codes. When the status code is returned within a location block, the custom page is shown as expected ... Server { # listen etc ... error_page 503 /error_docs/custom503.html; location / { return 503; } location /error_docs { internal; alias /server/path/to/folder; } ... } From nginx-forum at nginx.us Sun Feb 19 12:02:35 2012 From: nginx-forum at nginx.us (rooxy) Date: Sun, 19 Feb 2012 07:02:35 -0500 (EST) Subject: htaccess conversion nginx Message-ID: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> hi nginx family, RewriteEngine On RewriteRule ^([0-9a-zA-Z]{1,6})$ links/?to=$1 [L] RewriteRule ^([0-9]{1,9})/banner/(.*)$ links/?uid=$1&adt=2&url=$2 [L] RewriteRule ^([0-9]{1,9})/(.*)$ links/?uid=$1&adt=1&url=$2 [L] help me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222634,222634#msg-222634 From mdounin at mdounin.ru Sun Feb 19 12:03:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 16:03:18 +0400 Subject: "error_page"& "return" bug? In-Reply-To: References: Message-ID: <20120219120317.GS67687@mdounin.ru> Hello! On Sun, Feb 19, 2012 at 01:22:34PM +0300, Nginx User wrote: > Hello, > > It appears that the error_page directive is ignored when a status code > is returned in the server context. (running v1.0.12 Stable). > > Server { > # listen etc > ... > > error_page 503 /error_docs/custom503.html; > return 503 > location /error_docs { > internal; > alias /server/path/to/folder; > } > > ... > } > > Will always return the Nginx default 503 page. Same applies to all status codes. It's not ignored, but returning error unconditionally in the server context also prevents error_page processing from working, as the "return 503" is again executed after error_page internal redirect. That is, something like this happens: 1. Request comes for "/something". 2. Server rewrites generate 503. 3. Due to error_page set the request is internally redirected to /error_docs/custom503.html. 4. Server rewrites again generate 503. 5. As we've already did error_page redirection, nginx ignores error_page set and returns internal error page. > When the status code is returned within a location block, the custom > page is shown as expected ... > > Server { > # listen etc > ... > > error_page 503 /error_docs/custom503.html; > location / { > return 503; > } > location /error_docs { > internal; > alias /server/path/to/folder; > } > > ... > } This doesn't "return 503" for error_page processing, and hence it works ok. Another possible aproach is to use named location in error_page. It won't re-execute server rewrites (that is, rewrite module directives, including the "return" directive, specified at server level) and will work as well. 
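For illustration, the working arrangement described above in a minimal sketch (paths are placeholders; the same shape appears later in this thread). Because the 503 is generated inside location / rather than at server level, the internal redirect to the error page is handled by the /error_docs/ location and never hits the return again:

    server {
        listen 80;
        error_page 503 /error_docs/custom503.html;

        location / {
            return 503;                     # generated here, not in the server context
        }

        location /error_docs/ {
            internal;                       # reachable only via the error_page redirect
            alias /server/path/to/error_docs/;
        }
    }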
Maxim Dounin From edho at myconan.net Sun Feb 19 12:08:34 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 19:08:34 +0700 Subject: htaccess conversion nginx In-Reply-To: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello On Sun, Feb 19, 2012 at 7:02 PM, rooxy wrote: > hi nginx family, > > RewriteEngine On > RewriteRule ^([0-9a-zA-Z]{1,6})$ links/?to=$1 [L] > RewriteRule ^([0-9]{1,9})/banner/(.*)$ links/?uid=$1&adt=2&url=$2 [L] > RewriteRule ^([0-9]{1,9})/(.*)$ links/?uid=$1&adt=1&url=$2 [L] > > help me. > Looks simple enough. Try this one: rewrite ^/([0-9a-zA-Z]{1,6})$ /links/?to=$1 last; rewrite ^/([0-9]{1,9})/banner/(.*)$ /links/?uid=$1&adt=2&url=$2 last; rewrite ^/([0-9]{1,9})/(.*)$ /links/?uid=$1&adt=1&url=$2 last; -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Sun Feb 19 12:18:58 2012 From: nginx-forum at nginx.us (rooxy) Date: Sun, 19 Feb 2012 07:18:58 -0500 (EST) Subject: htaccess conversion nginx In-Reply-To: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4d47a7ce2260822c457ee179dc307ce5.NginxMailingListEnglish@forum.nginx.org> Thank you for your help, nginx: [emerg] directive "rewrite" is not terminated by ";" in /usr/local/nginx/ this get the error. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222634,222637#msg-222637 From edho at myconan.net Sun Feb 19 12:22:58 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 19:22:58 +0700 Subject: htaccess conversion nginx In-Reply-To: <4d47a7ce2260822c457ee179dc307ce5.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> <4d47a7ce2260822c457ee179dc307ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sun, Feb 19, 2012 at 7:18 PM, rooxy wrote: > Thank you for your help, > > nginx: [emerg] directive "rewrite" is not terminated by ";" in > /usr/local/nginx/ > > this get the error. > I suggest using brain. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 12:23:58 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 15:23:58 +0300 Subject: "error_page"& "return" bug? In-Reply-To: <20120219120317.GS67687@mdounin.ru> References: <20120219120317.GS67687@mdounin.ru> Message-ID: On 19 February 2012 15:03, Maxim Dounin wrote: > Hello! > It's not ignored, but returning error unconditionally in the > server context also prevents error_page processing from > working, as the "return 503" is again executed after error_page > internal redirect. ?That is, something like this happens: > > 1. Request comes for "/something". > 2. Server rewrites generate 503. > 3. Due to error_page set the request is internally redirected to > ? /error_docs/custom503.html. > 4. Server rewrites again generate 503. > 5. As we've already did error_page redirection, nginx ignores > ? error_page set and returns internal error page. This then is the "bug". It SHOULD now return "/error_docs/custom503.html" with a 503 status code. I.E., work as it does when the error is issued within a location context since once a user has set error_page, they would most likely expect a matching status return to deliver what was set. 
I assume there must be some technical road block but you guys are supposed to be geniuses :) Can't wait for the solution!! From nginx at nginxuser.net Sun Feb 19 12:25:30 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 15:25:30 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> Message-ID: On 19 February 2012 15:23, Nginx User wrote: > Can't wait for the solution!! I'll try the named location suggestion first of course From nginx-forum at nginx.us Sun Feb 19 12:29:52 2012 From: nginx-forum at nginx.us (rooxy) Date: Sun, 19 Feb 2012 07:29:52 -0500 (EST) Subject: htaccess conversion nginx In-Reply-To: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <903cc9652012983a774f4210b1ef2cb0.NginxMailingListEnglish@forum.nginx.org> Thank you for the suggestion, {1,6} here does not accept. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222634,222641#msg-222641 From edho at myconan.net Sun Feb 19 12:34:02 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 19:34:02 +0700 Subject: htaccess conversion nginx In-Reply-To: <903cc9652012983a774f4210b1ef2cb0.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> <903cc9652012983a774f4210b1ef2cb0.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sun, Feb 19, 2012 at 7:29 PM, rooxy wrote: > Thank you for the suggestion, > > {1,6} here does not accept. > Yeah, I missed something. The answer is here http://wiki.nginx.org/HttpRewriteModule -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 12:45:15 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 15:45:15 +0300 Subject: "error_page"& "return" bug? In-Reply-To: <20120219120317.GS67687@mdounin.ru> References: <20120219120317.GS67687@mdounin.ru> Message-ID: On 19 February 2012 15:03, Maxim Dounin wrote: > Another possible aproach is to use named location in error_page. > It won't re-execute server rewrites (that is, rewrite module > directives, including the "return" directive, specified at server > level) and will work as well. Unfortunately, this doesn't appear to work as expected either Server { # listen etc ... error_page 503 = @sitedown; return 503 location @sitedown { root /server/path/to/folder; # Don't you wish try_files could accept a single parameter? try_files $uri /custom503.html; } ... } From mdounin at mdounin.ru Sun Feb 19 12:55:54 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 16:55:54 +0400 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> Message-ID: <20120219125553.GU67687@mdounin.ru> Hello! On Sun, Feb 19, 2012 at 03:23:58PM +0300, Nginx User wrote: > On 19 February 2012 15:03, Maxim Dounin wrote: > > Hello! > > It's not ignored, but returning error unconditionally in the > > server context also prevents error_page processing from > > working, as the "return 503" is again executed after error_page > > internal redirect. ?That is, something like this happens: > > > > 1. Request comes for "/something". > > 2. Server rewrites generate 503. > > 3. Due to error_page set the request is internally redirected to > > ? /error_docs/custom503.html. > > 4. Server rewrites again generate 503. > > 5. As we've already did error_page redirection, nginx ignores > > ? 
error_page set and returns internal error page. > > This then is the "bug". > It SHOULD now return "/error_docs/custom503.html" with a 503 status code. > I.E., work as it does when the error is issued within a location > context since once a user has set error_page, they would most likely > expect a matching status return to deliver what was set. This is not a bug. You instructed nginx to generate 503 on each request which hits server rewrite phase, and it does what you said. Maxim Dounin From nginx at nginxuser.net Sun Feb 19 12:59:36 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 15:59:36 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> Message-ID: On 19 February 2012 15:45, Nginx User wrote: > On 19 February 2012 15:03, Maxim Dounin wrote: >> Another possible aproach is to use named location in error_page. >> It won't re-execute server rewrites (that is, rewrite module >> directives, including the "return" directive, specified at server >> level) and will work as well. > > Unfortunately, this doesn't appear to work as expected either > > Server { > ? ? ? # listen etc > ? ? ? ... > > ? ? ? error_page 503 = @sitedown; > ? ? ? return 503 > ? ? ? location @sitedown { > ? ? ? ? ? ? ? root /server/path/to/folder; > ? ? ? ? ? ? ? # Don't you wish try_files could accept a single parameter? > ? ? ? ? ? ? ? try_files $uri /custom503.html; > ? ? ? } > > ? ? ? ... > } PS. In any case, it would not be an ideal solution even if it did work as it would return a "200" status code with the custom file. So getting things to work as with the location option is probably best. In the interim, I have managed to get it to work by first redirecting to a location and issuing the 503 status code there Server { error_page 503 /error_docs/custom503.html; rewrite ^ /sitedown/ redirect; location /sitedown { return 503 } location /error_docs { internal; alias /server/path/to/error_docs/folder; } } From mdounin at mdounin.ru Sun Feb 19 13:04:13 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Feb 2012 17:04:13 +0400 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> Message-ID: <20120219130413.GV67687@mdounin.ru> Hello! On Sun, Feb 19, 2012 at 03:45:15PM +0300, Nginx User wrote: > On 19 February 2012 15:03, Maxim Dounin wrote: > > Another possible aproach is to use named location in error_page. > > It won't re-execute server rewrites (that is, rewrite module > > directives, including the "return" directive, specified at server > > level) and will work as well. > > Unfortunately, this doesn't appear to work as expected either > > Server { > # listen etc > ... > > error_page 503 = @sitedown; > return 503 > location @sitedown { > root /server/path/to/folder; > # Don't you wish try_files could accept a single parameter? > try_files $uri /custom503.html; Unless you have $uri file, this will do an internal redirect to /custom503.html, triggering the same 503 again. You have to process request in the location in question to make things work. Maxim Dounin From nginx at nginxuser.net Sun Feb 19 13:06:34 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 16:06:34 +0300 Subject: "error_page"& "return" bug? In-Reply-To: <20120219130413.GV67687@mdounin.ru> References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 16:04, Maxim Dounin wrote: > Hello! 
> > On Sun, Feb 19, 2012 at 03:45:15PM +0300, Nginx User wrote: > >> On 19 February 2012 15:03, Maxim Dounin wrote: >> > Another possible aproach is to use named location in error_page. >> > It won't re-execute server rewrites (that is, rewrite module >> > directives, including the "return" directive, specified at server >> > level) and will work as well. >> >> Unfortunately, this doesn't appear to work as expected either >> >> Server { >> ? ? ? ?# listen etc >> ? ? ? ?... >> >> ? ? ? ?error_page 503 = @sitedown; >> ? ? ? ?return 503 >> ? ? ? ?location @sitedown { >> ? ? ? ? ? ? ? ?root /server/path/to/folder; >> ? ? ? ? ? ? ? ?# Don't you wish try_files could accept a single parameter? >> ? ? ? ? ? ? ? ?try_files $uri /custom503.html; > > Unless you have $uri file, this will do an internal redirect to > /custom503.html, triggering the same 503 again. ?You have to > process request in the location in question to make things work. Thanks. I got a working config where I first redirect to a location and issue the 503 status there. Now, don't you really, really wish try_files could accept a single parameter?? :) Thanks for your help. From edho at myconan.net Sun Feb 19 13:55:14 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 20:55:14 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Sun, Feb 19, 2012 at 8:06 PM, Nginx User wrote: > > Thanks. > > I got a working config where I first redirect to a location and issue > the 503 status there. > What's the problem with placing return 503 in location / { } block? -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 14:01:36 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 17:01:36 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 16:55, Edho Arief wrote: > On Sun, Feb 19, 2012 at 8:06 PM, Nginx User wrote: >> >> Thanks. >> >> I got a working config where I first redirect to a location and issue >> the 503 status there. >> > > What's the problem with placing return 503 in location / { } block? None whatsoever. Did anyone say there was one? From edho at myconan.net Sun Feb 19 14:46:35 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 21:46:35 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Sun, Feb 19, 2012 at 9:01 PM, Nginx User wrote: >> What's the problem with placing return 503 in location / { } block? > > None whatsoever. Did anyone say there was one? > Just wondering. Also I couldn't make the method with rewriting work. I got 302'd ad infinitum. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 15:06:54 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 18:06:54 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 17:46, Edho Arief wrote: > On Sun, Feb 19, 2012 at 9:01 PM, Nginx User wrote: >>> What's the problem with placing return 503 in location / { } block? >> >> None whatsoever. Did anyone say there was one? >> > > Just wondering. Also I couldn't make the method with rewriting work. I > got 302'd ad infinitum. 
That'll be because none of the code I posted is actually real. They were just to help get an answer. This is what actually works: Server { error_page 503 /error_docs/custom503.html; if ( $request_uri !~ \.(jpg|gif|png|css|js)$ ) { set $tt "T"; } if ( $request_uri !~ ^/maintenance/$ ) { set $tt "${tt}T"; } if ( $tt = TT ) { rewrite ^ /maintenance/ redirect; } location /maintenance { internal; return 503; } location /error_docs { internal; alias /server/path/to/503_status/folder; } } From nginx at nginxuser.net Sun Feb 19 15:09:16 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 18:09:16 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 18:06, Nginx User wrote: > This is what actually works: Small amendment: Server { error_page 503 /error_docs/custom503.html; if ( $request_uri !~ \.(gif|png|css|js)$ ) { set $m "T"; } if ( $request_uri !~ ^/maintenance/$ ) { set $maint "${maint}T"; } if ( $maint = TT ) { rewrite ^ /maintenance/ redirect; } location /maintenance { internal; return 503; } location /error_docs { internal; alias /server/path/to/error_docs/folder; } } From nginx at nginxuser.net Sun Feb 19 15:12:14 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 18:12:14 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: Sorry ... Server { error_page 503 /error_docs/custom503.html; if ( $request_uri !~ \.(jpg|gif|png|css|js)$ ) { set $tt "T"; } if ( $request_uri !~ ^/maintenance/$ ) { set $tt "${maint}T"; } if ( $tt = TT ) { rewrite ^ /maintenance/ redirect; } location /maintenance { internal; return 503; } location /error_docs { internal; alias /server/path/to/error_docs/folder; } } From nginx at nginxuser.net Sun Feb 19 15:20:33 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 18:20:33 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: Arrggghhh! Server { error_page 503 /error_docs/custom503.html; if ( $request_uri !~ \.(jpg|gif|png|css|js)$ ) { set $tt "T"; } if ( $request_uri !~ ^/maintenance/$ ) { set $tt "${tt}T"; } if ( $tt = TT ) { rewrite ^ /maintenance/ redirect; } location /maintenance { internal; return 503; } location /error_docs { internal; alias /server/path/to/error_docs/folder; } } Finally I promise :) From edho at myconan.net Sun Feb 19 15:23:33 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 22:23:33 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Sun, Feb 19, 2012 at 10:20 PM, Nginx User wrote: > Arrggghhh! > > > Server { > ? ? ? ?error_page 503 /error_docs/custom503.html; > ? ? ? ?if ( $request_uri !~ \.(jpg|gif|png|css|js)$ ) { > ? ? ? ? ? ? ? ?set $tt ?"T"; > ? ? ? ?} > ? ? ? ?if ( $request_uri !~ ^/maintenance/$ ) { > ? ? ? ?set $tt ?"${tt}T"; > ? ? ? ?} > ? ? ? ?if ( $tt = TT ) { > ? ? ? ? ? ? ? ?rewrite ^ /maintenance/ redirect; > ? ? ? ?} > ? ? ? ?location /maintenance { > ? ? ? ? ? ? ? ?internal; > ? ? ? ? ? ? ? ?return 503; > ? ? ? ?} > ? ? ? ?location /error_docs { > ? ? ? ? ? ? ? ?internal; > ? ? ? ? ? ? ? ?alias /server/path/to/error_docs/folder; > ? ? ? ?} > } > > Finally I promise :) > ...so, the method above is better than using location / { } block? 
I'm confused now. ############# server { ?error_page 503 /error_docs/custom503.html; location / { return 503; ...normal configs... } location /error_docs/ { internal; alias /server/path/to/error_docs/folder; } } ############# -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From edho at myconan.net Sun Feb 19 15:54:53 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 22:54:53 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: Nevermind, I just got this awesome idea: if ($uri !~ ^/error_page/) { return 503; } (seems to be working on my simple test) On Sun, Feb 19, 2012 at 10:23 PM, Edho Arief wrote: > ...so, the method above is better than using location / { } block? I'm > confused now. > > > ############# > server { > ??error_page 503 /error_docs/custom503.html; > ?location / { > ? ?return 503; > ? ?...normal configs... > ?} > ?location /error_docs/ { > ? ?internal; > ? ?alias /server/path/to/error_docs/folder; > ?} > } > ############# > -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From caldcv at gmail.com Sun Feb 19 16:20:09 2012 From: caldcv at gmail.com (Chris) Date: Sun, 19 Feb 2012 11:20:09 -0500 Subject: php exits with 502 Bad Gateway In-Reply-To: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> Message-ID: What is the contents of /etc/nginx/fastcgi_params From caldcv at gmail.com Sun Feb 19 16:21:20 2012 From: caldcv at gmail.com (Chris) Date: Sun, 19 Feb 2012 11:21:20 -0500 Subject: php exits with 502 Bad Gateway In-Reply-To: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also the header (first 20 - 30 lines or so) of /etc/init.d/php-cgi or whatever PHP startup script you are using From nginx at nginxuser.net Sun Feb 19 16:37:02 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 19:37:02 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 18:54, Edho Arief wrote: > Nevermind, I just got this awesome idea: if ($uri !~ ^/error_page/) { > return 503; } > > (seems to be working on my simple test) > > On Sun, Feb 19, 2012 at 10:23 PM, Edho Arief wrote: >> ...so, the method above is better than using location / { } block? I'm >> confused now. >> >> >> ############# >> server { >> ??error_page 503 /error_docs/custom503.html; >> ?location / { >> ? ?return 503; >> ? ?...normal configs... >> ?} >> ?location /error_docs/ { >> ? ?internal; >> ? ?alias /server/path/to/error_docs/folder; >> ?} >> } >> ############# Your method will work if you only a few location blocks active to which you can easily add the return directive. The one I suggested works in my case where I have several location blocks. I have just put the relevant bits into a file and can include it when I need to take the domain down. It also allows for the inclusion of images, css. js and similar resources. With these things, there is no single answer. From edho at myconan.net Sun Feb 19 16:44:15 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 19 Feb 2012 23:44:15 +0700 Subject: "error_page"& "return" bug? 
In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Sun, Feb 19, 2012 at 11:37 PM, Nginx User wrote: >>> >>> ############# >>> server { >>> ??error_page 503 /error_docs/custom503.html; >>> ?location / { >>> ? ?return 503; >>> ? ?...normal configs... >>> ?} >>> ?location /error_docs/ { >>> ? ?internal; >>> ? ?alias /server/path/to/error_docs/folder; >>> ?} >>> } >>> ############# > > Your method will work if you only a few location blocks active to > which you can easily add the return directive. > You can nest all your location blocks in single location / { } block (and you can put location / { } in location / { }). Just add "location / {" before start of other locations (except error page) and "}" after them. Also have you tried this one? Instead of plain "return 503;" in server block: server { if ($uri !~ ^/error_page/) { return 503; } ... } Because even if "return 503" is capable of returning the page you want, the css/js/image still won't work. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 17:19:16 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 20:19:16 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 19:44, Edho Arief wrote: > Also have you tried this one? Instead of plain "return 503;" in server block: No, I haven't and don't plan to at this time because I have a setup that works fine as posted earlier with images, css etc all loading as required. As said, there isn't one single answer as the Nginx config allows a lot of flexibility and if you prefer a different setup, that's just fine. All the best! From edho at myconan.net Sun Feb 19 17:34:02 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 20 Feb 2012 00:34:02 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Mon, Feb 20, 2012 at 12:19 AM, Nginx User wrote: > On 19 February 2012 19:44, Edho Arief wrote: >> Also have you tried this one? Instead of plain "return 503;" in server block: > No, I haven't and don't plan to at this time because I have a setup > that works fine as posted earlier with images, css etc all loading as > required. > For completeness, the version without if (single page - all requests returning 503, least overhead etc - put other static resources on different domain/server): server { # listen etc ... error_page 503 @503; return 503; location @503 { root /; # Immediately serves 503 page, not $uri. # The fallback should never happen. try_files /file/system/path/to/503.html =500; } ... } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 17:56:14 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 20:56:14 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 20:34, Edho Arief wrote: > For completeness, the version without if (single page - all requests > returning 503, least overhead etc - put other static resources on > different domain/server): > > server { > ? ? ? # listen etc > ? ? ? ... > ? ? ? error_page 503 @503; > ? ? ? return 503; > ? ? ? location @503 { > ? ? ? ? ? ? ? root /; > ? ? ? ? ? ? ? # Immediately serves 503 page, not $uri. > ? ? ? ? ? ? 
? # The fallback should never happen. > ? ? ? ? ? ? ? try_files /file/system/path/to/503.html =500; > ? ? ? } > ? ? ? ... > } I don't think involving another domain/server to serve a single page is a particularly effective approach. As mentioned, this serves the page as required along with the needed resources. error_page 503 /error_docs/custom503.html; if ( $request_uri !~ \.(jpg|gif|png|css|js)$ ) { set $tt "T"; } if ( $request_uri !~ ^/maintenance/$ ) { set $tt "${tt}T"; } if ( $tt = TT ) { rewrite ^ /maintenance/ redirect; } location /maintenance { internal; return 503; } location /error_docs { internal; alias /server/path/to/error_docs/folder; } I have added it to a file 503.default which I just include (uncomment) in my normal server block Server { include /server/path/to/503.default; ... } Nice and easy and all contained in one domain. Visitors are redirected to a "example.com/maintenance/" url which helps in passing info. Works for me but as said, not the only possibility. From edho at myconan.net Sun Feb 19 18:36:41 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 20 Feb 2012 01:36:41 +0700 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On Mon, Feb 20, 2012 at 12:56 AM, Nginx User wrote: > > I don't think involving another domain/server to serve a single page > is a particularly effective approach. > Many larger sites use CDN for static contents. And the original config you wanted (if works) also do exactly that - returning 503 on all requests. > As mentioned, this serves the page as required along with the needed resources. > Three ifs, two roundtrips on client. Looks much more inefficient to me ;) > > I have added it to a file 503.default which I just include (uncomment) > in my normal server block > Or one if line: if ($uri ~ ^/error_docs/) { return 503; } No need to comment "error_page ..." and "location ^~ /error_docs/" on normal operation. > > Works for me but as said, not the only possibility. > Pretty sure will return 404 when accessing dynamic resources[1]. Hopefully you don't have any or care about the impact. [1] Images/resources served with rewrites, dynamic javascript/css, etc. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx at nginxuser.net Sun Feb 19 18:53:27 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 21:53:27 +0300 Subject: "error_page"& "return" bug? In-Reply-To: References: <20120219120317.GS67687@mdounin.ru> <20120219130413.GV67687@mdounin.ru> Message-ID: On 19 February 2012 21:36, Edho Arief wrote: > Pretty sure will return 404 when accessing dynamic resources[1]. > Hopefully you don't have any or care about the impact. No ... because there is none as my maintenance page is a static page. Only thing it uses is javascript which runs on client's browser . Anyway, not sure why this is dragging on. I am perfectly happy with my setup. I don't want to use a cdn for this page nor do I need to run a cgi script or similar on it. If you happen to use or prefer another setup, that's fine with me as well. Goodbye and thanks for your time. 
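Pulling the pieces of this thread together, here is a sketch of the same idea without the intermediate redirect, based on Edho's $uri test; paths and the server name are placeholders, and the custom page plus any images/css/js it references are assumed to live under /error_docs/:

    server {
        listen 80;
        server_name example.com;
        error_page 503 /error_docs/custom503.html;

        # Everything outside /error_docs/ gets the maintenance status;
        # requests for the page and its assets fall through to the location below.
        if ($uri !~ ^/error_docs/) {
            return 503;
        }

        location /error_docs/ {
            alias /server/path/to/error_docs/;
        }
    }

Note that internal; is deliberately left out here so the page's own static assets under /error_docs/ can still be fetched by the client.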
From nginx-forum at nginx.us Sun Feb 19 19:05:02 2012 From: nginx-forum at nginx.us (mfouwaaz) Date: Sun, 19 Feb 2012 14:05:02 -0500 (EST) Subject: php exits with 502 Bad Gateway In-Reply-To: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org> Sorry for missing that conf. portion. ....the fastcgi_params file fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; #fastcgi_param HTTPS $server_https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; ... and the entire php-fcgi file #!/bin/bash BIND=127.0.0.1:9000 USER=www-data PHP_FCGI_CHILDREN=15 PHP_FCGI_MAX_REQUESTS=1000 PHP_CGI=/usr/bin/php-cgi PHP_CGI_NAME=`basename $PHP_CGI` PHP_CGI_ARGS="- USR=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND" RETVAL=0 PHP_CONFIG_FILE=/etc/php5/cgi/php.ini start() { echo -n "Starting PHP FastCGI: " start-stop-daemon --quiet -- start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS RETVAL=$? echo "$PHP_CGI_NAME." } stop() { echo -n "Stopping PHP FastCGI: " killall -q -w -u $USER $PHP_CGI RETVAL=$? echo "$PHP_CGI_NAME." } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; *) echo "Usage: php-fastcgi {start|stop|restart}" exit 1 ;; esac exit $RETVAL Thanks for your help! Fouwaaz Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222625,222675#msg-222675 From caldcv at gmail.com Sun Feb 19 19:10:07 2012 From: caldcv at gmail.com (Chris) Date: Sun, 19 Feb 2012 14:10:07 -0500 Subject: php exits with 502 Bad Gateway In-Reply-To: <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Are these distribution packages (.deb / .rpm) or compiled from source? From nginx-forum at nginx.us Sun Feb 19 19:18:52 2012 From: nginx-forum at nginx.us (mfouwaaz) Date: Sun, 19 Feb 2012 14:18:52 -0500 (EST) Subject: php exits with 502 Bad Gateway In-Reply-To: References: Message-ID: <7395ad66eb860eb95f603181d80e9016.NginxMailingListEnglish@forum.nginx.org> Hi I am not very conversant with this. I didn't do any compilation of my own just downloaded the .deb package. The nginx.conf, though, I created new. Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222625,222678#msg-222678

From nginx-forum at nginx.us Sun Feb 19 19:23:32 2012
From: nginx-forum at nginx.us (rj)
Date: Sun, 19 Feb 2012 14:23:32 -0500 (EST)
Subject: nginx default server not used
Message-ID: <0a783deefc9845860cf7848b1f22b9a5.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am currently evaluating nginx as an Apache replacement and came across
some (at least for me) odd behavior and hope someone can explain to me
what is happening.

nginx is ignoring my default server and instead picks a directory of a
deleted server as the default. If I point some random domain at the
server address for which no server configuration exists, nginx serves
the directory of the removed server config instead of the one defined as
the default server.

OS: Debian Squeeze
nginx version: nginx/1.0.12 dotdeb repository http://pastebin.com/JGVA7NnZ
nginx config: http://pastebin.com/UFEuCPEG

The directory served by nginx as the default is not mentioned anywhere
in /etc/nginx. I added it previously as an additional server but have
deleted it. Somehow nginx still remembers the server and serves it as
the default for unknown server names.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222679,222679#msg-222679

From caldcv at gmail.com Sun Feb 19 19:47:15 2012
From: caldcv at gmail.com (Chris)
Date: Sun, 19 Feb 2012 14:47:15 -0500
Subject: php exits with 502 Bad Gateway
In-Reply-To: <7395ad66eb860eb95f603181d80e9016.NginxMailingListEnglish@forum.nginx.org>
References: <7395ad66eb860eb95f603181d80e9016.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I've had this problem in the past and I replaced /etc/init.d/php-cgi
with one that enables FastCGI and that stopped the problems. I think
it's Ubuntu specific because I cannot recreate the problem (on a virtual
server) on Debian.

From nginx-forum at nginx.us Sun Feb 19 20:37:44 2012
From: nginx-forum at nginx.us (mfouwaaz)
Date: Sun, 19 Feb 2012 15:37:44 -0500 (EST)
Subject: php exits with 502 Bad Gateway
In-Reply-To: <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org>
References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi there,

Thanks for the reply and I am trying to do that. But how? Do I download
php again and re-install it? Where can I find the replacement file? If
you can walk me through the process, it will be greatly appreciated.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222625,222684#msg-222684

From nginx-forum at nginx.us Sun Feb 19 20:43:53 2012
From: nginx-forum at nginx.us (rooxy)
Date: Sun, 19 Feb 2012 15:43:53 -0500 (EST)
Subject: htaccess conversion nginx
In-Reply-To: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org>
References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

What should I do ?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222634,222685#msg-222685 From nginx at nginxuser.net Sun Feb 19 20:45:35 2012 From: nginx at nginxuser.net (Nginx User) Date: Sun, 19 Feb 2012 23:45:35 +0300 Subject: htaccess conversion nginx In-Reply-To: <4d47a7ce2260822c457ee179dc307ce5.NginxMailingListEnglish@forum.nginx.org> References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> <4d47a7ce2260822c457ee179dc307ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 19 February 2012 15:18, rooxy wrote: > nginx: [emerg] directive "rewrite" is not terminated by ";" in > /usr/local/nginx/ Whenever '{' and '}' appear in a rewrite, the rewrite must be enclosed in quotation marks. So change the rewrites to ... rewrite '^/([0-9a-zA-Z]{1,6})$' /links/?to=$1 last; rewrite '^/([0-9]{1,9})/banner/(.*)$' /links/?uid=$1&adt=2&url=$2 last; rewrite '^/([0-9]{1,9})/(.*)$' /links/?uid=$1&adt=1&url=$2 last; From mdounin at mdounin.ru Sun Feb 19 21:03:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Feb 2012 01:03:00 +0400 Subject: nginx default server not used In-Reply-To: <0a783deefc9845860cf7848b1f22b9a5.NginxMailingListEnglish@forum.nginx.org> References: <0a783deefc9845860cf7848b1f22b9a5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120219210300.GW67687@mdounin.ru> Hello! On Sun, Feb 19, 2012 at 02:23:32PM -0500, rj wrote: > Hi, > > I am currently evaluating nginx as an apache replacement and came across > some (at least for me) odd behavior and hope someone can explain to me > what is happening. > > nginx is ignoring my default server and instead picks a directory of a > deleted server as the default. If I point some random domain at the > server address for which no server configuration exists, nginx servers > the directory of the removed server config instead the on defined as the > default server. > > OS: Debian Squeeze > nginx version: nginx/1.0.12 dotdeb repository > http://pastebin.com/JGVA7NnZ > nginx config: http://pastebin.com/UFEuCPEG > > The directory served by nginx as the default is not mentioned anywhere > in /etc/nginx. I added it previously as an additional server but have > deleted it. Somehow nginx still remembers the server and servers it as > the default for unknown server names. Most likely reason is that you've forgot to reload nginx config, or reload failed for some reason (take a look at nginx error log). The config you've posted suggests there is indeed syntax error in it (missing "}" at the end), though it may be unrelated problem introduced during posting. Running "nginx -t" will help to identify syntax errors, looking into error log will cover other possible cases like port conflicts and so on. Maxim Dounin From nginx-forum at nginx.us Sun Feb 19 21:15:51 2012 From: nginx-forum at nginx.us (rooxy) Date: Sun, 19 Feb 2012 16:15:51 -0500 (EST) Subject: htaccess conversion nginx In-Reply-To: References: Message-ID: <7e66b5cb41632b19384ef64c0a809ada.NginxMailingListEnglish@forum.nginx.org> Nginx User Wrote: ------------------------------------------------------- > On 19 February 2012 15:18, rooxy > wrote: > > nginx: [emerg] directive "rewrite" is not > terminated by ";" in > > /usr/local/nginx/ > > Whenever '{' and '}' appear in a rewrite, the > rewrite must be enclosed > in quotation marks. > > So change the rewrites to ... 
> > rewrite '^/([0-9a-zA-Z]{1,6})$' /links/?to=$1 > last; > rewrite '^/([0-9]{1,9})/banner/(.*)$' > /links/?uid=$1&adt=2&url=$2 last; > rewrite '^/([0-9]{1,9})/(.*)$' > /links/?uid=$1&adt=1&url=$2 last; > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx did not :( Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222634,222688#msg-222688 From nginx-forum at nginx.us Sun Feb 19 21:17:28 2012 From: nginx-forum at nginx.us (rj) Date: Sun, 19 Feb 2012 16:17:28 -0500 (EST) Subject: nginx default server not used In-Reply-To: <20120219210300.GW67687@mdounin.ru> References: <20120219210300.GW67687@mdounin.ru> Message-ID: <08f03db492b8121e538cb7d31ba3230d.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thank's for the quick reply. The config seems to be ok. The missing } is most likely a copy/paste error when combining the different config files for pastebin. I tried nginx -t already and it succeeds, so I assume the config is at least syntactical correct. nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful I also restarted and reloaded the config several times via /etc/init.d/nginx {restart, reload, stop and start} without any success. Already checked the error log, unfortunately it is empty. The server is running and working otherwise as expected. It serves static and dynamic (php/ruby) content and also correctly passes requests to a java tomcat server without any problems. I have no clue what could trigger this odd behavior. It seems all server configurations work correctly except for the default config. Cheers, rj Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222679,222689#msg-222689 From nginx at nginxuser.net Sun Feb 19 21:54:11 2012 From: nginx at nginxuser.net (Nginx User) Date: Mon, 20 Feb 2012 00:54:11 +0300 Subject: htaccess conversion nginx In-Reply-To: <7e66b5cb41632b19384ef64c0a809ada.NginxMailingListEnglish@forum.nginx.org> References: <7e66b5cb41632b19384ef64c0a809ada.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 20 February 2012 00:15, rooxy wrote: > did not :( Did not what? Did you reload Nginx? What error message did you get if any? You need to give information about your issue. From nginx at nginxuser.net Sun Feb 19 22:12:44 2012 From: nginx at nginxuser.net (Nginx User) Date: Mon, 20 Feb 2012 01:12:44 +0300 Subject: nginx default server not used In-Reply-To: <08f03db492b8121e538cb7d31ba3230d.NginxMailingListEnglish@forum.nginx.org> References: <20120219210300.GW67687@mdounin.ru> <08f03db492b8121e538cb7d31ba3230d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 20 February 2012 00:17, rj wrote: > Hi Maxim, > > thank's for the quick reply. The config seems to be ok. The missing } is > most likely a copy/paste error when combining the different config files > for pastebin. > > I tried nginx -t already and it succeeds, so I assume the config is at > least syntactical correct. > > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > > I also restarted and reloaded the config several times via > /etc/init.d/nginx {restart, reload, stop and start} without any > success. > Already checked the error log, unfortunately it is empty. > The server is running and working otherwise as expected. 
It serves > static and dynamic (php/ruby) content and also correctly passes requests > to a java tomcat server without any problems. I have no clue what could > trigger this odd behavior. > It seems all server configurations work correctly except for the default > config. > > Cheers, > rj As Maxim said, if you previously had a location and then removed it but you are still seeing vestiges of this, then it means one of the following: 1. You are still running the previous instance of Nginx. 2. You are still loading a conf file with the previous items in it. There is no other option. Since you say you have reloaded (assuming you don't have two instances of nginx going), then it has to be #2. So you have to look again at your setup carefully. Somewhere in there, you are loading something different from what you have been editing. From appa at perusio.net Sun Feb 19 23:34:56 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 19 Feb 2012 23:34:56 +0000 Subject: htaccess conversion nginx In-Reply-To: References: <33e748e5861958e4d985595c0b6a58de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87wr7il7mn.wl%appa@perusio.net> On 19 Fev 2012 20h43 WET, nginx-forum at nginx.us wrote: > What should I do ? Try: location ~ "^/([0-9a-zA-Z]{1,6})$" { return 302 /links/?to=$1; } location ~ "^/([0-9]{1,9})/banner/(.*)$" { return 302 /links/?uid=$1&adt=2&url=$2; } location ~ "^/([0-9]{1,9})/(.*)$" { return 302 /links/?uid=$1&adt=1&url=$2; } --- appa From mp3geek at gmail.com Mon Feb 20 00:34:24 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Mon, 20 Feb 2012 13:34:24 +1300 Subject: LZ4 + nginx Message-ID: Just a feature request, Would be nice to have nginx support for LZ4 (like gzip static support), to have an alternative compression method built in.. http://code.google.com/p/lz4/ From edho at myconan.net Mon Feb 20 01:16:29 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 20 Feb 2012 08:16:29 +0700 Subject: nginx default server not used In-Reply-To: <08f03db492b8121e538cb7d31ba3230d.NginxMailingListEnglish@forum.nginx.org> References: <20120219210300.GW67687@mdounin.ru> <08f03db492b8121e538cb7d31ba3230d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello On Mon, Feb 20, 2012 at 4:17 AM, rj wrote: > I also restarted and reloaded the config several times via > /etc/init.d/nginx {restart, reload, stop and start} without any > success. > Already checked the error log, unfortunately it is empty. > The server is running and working otherwise as expected. It serves > static and dynamic (php/ruby) content and also correctly passes requests > to a java tomcat server without any problems. I have no clue what could > trigger this odd behavior. > It seems all server configurations work correctly except for the default > config. > Have you tried to clear your browser's cache? -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From edho at myconan.net Mon Feb 20 01:31:41 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 20 Feb 2012 08:31:41 +0700 Subject: php exits with 502 Bad Gateway In-Reply-To: References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> <86eeb937abf810d14e5d0021eb89da9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello On Mon, Feb 20, 2012 at 3:37 AM, mfouwaaz wrote: > Thanks for the reply and I am trying to do that. ?But how? ?Do I > download php again and re-install it? ?Where can I find the replacement > file? ?If you can walk me through the process, it will be greatly > appreciated. 
> Seems like your php-cgi got closed at one point which is why restarting nginx didn't help. Try running php-cgi with php-fpm/supervisord/god/monit. There are lots of tutorial on web for it. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Mon Feb 20 06:33:05 2012 From: nginx-forum at nginx.us (rj) Date: Mon, 20 Feb 2012 01:33:05 -0500 (EST) Subject: nginx default server not used In-Reply-To: References: Message-ID: <360c8585ee2f0f65828f15c36df6d0b5.NginxMailingListEnglish@forum.nginx.org> Edho Arief Wrote: ------------------------------------------------------- > Hello > > On Mon, Feb 20, 2012 at 4:17 AM, rj > wrote: > > I also restarted and reloaded the config several > times via > > /etc/init.d/nginx {restart, reload, stop and > start} without any > > success. > > Already checked the error log, unfortunately it > is empty. > > The server is running and working otherwise as > expected. It serves > > static and dynamic (php/ruby) content and also > correctly passes requests > > to a java tomcat server without any problems. I > have no clue what could > > trigger this odd behavior. > > It seems all server configurations work > correctly except for the default > > config. > > > > Have you tried to clear your browser's cache? > > -- > O< ascii ribbon campaign - stop html mail - > www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Yeah that was the first thing I had in mind but it wasn't the reason. Somehow nginx didn't stop nor reload the config and as it didn't give me any warning I assumed it did what I ask. My fault for not properly checking if it actually stopped or not. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222679,222700#msg-222700 From nginx-forum at nginx.us Mon Feb 20 06:41:46 2012 From: nginx-forum at nginx.us (rj) Date: Mon, 20 Feb 2012 01:41:46 -0500 (EST) Subject: nginx default server not used In-Reply-To: References: Message-ID: Nginx User Wrote: ------------------------------------------------------- > On 20 February 2012 00:17, rj > wrote: > > Hi Maxim, > > > > thank's for the quick reply. The config seems to > be ok. The missing } is > > most likely a copy/paste error when combining > the different config files > > for pastebin. > > > > I tried nginx -t already and it succeeds, so I > assume the config is at > > least syntactical correct. > > > > nginx: the configuration file > /etc/nginx/nginx.conf syntax is ok > > nginx: configuration file /etc/nginx/nginx.conf > test is successful > > > > I also restarted and reloaded the config several > times via > > /etc/init.d/nginx {restart, reload, stop and > start} without any > > success. > > Already checked the error log, unfortunately it > is empty. > > The server is running and working otherwise as > expected. It serves > > static and dynamic (php/ruby) content and also > correctly passes requests > > to a java tomcat server without any problems. I > have no clue what could > > trigger this odd behavior. > > It seems all server configurations work > correctly except for the default > > config. > > > > Cheers, > > rj > > As Maxim said, if you previously had a location > and then removed it > but you are still seeing vestiges of this, then it > means one of the > following: > > 1. You are still running the previous instance of > Nginx. > 2. You are still loading a conf file with the > previous items in it. > > There is no other option. 
> > Since you say you have reloaded (assuming you > don't have two instances > of nginx going), then it has to be #2. > > So you have to look again at your setup carefully. > Somewhere in there, > you are loading something different from what you > have been editing. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thanks, this was indeed the problem. I assumed that /etc/init.d/nginx {reload, restart, stop, start} worked, as I didn't get any error message saying otherwise. I also couldn't find anything in nginx or the syslogs regarding nginx, so didn't thought this might be the problem. After reading your suggestion I had a look at the running processes after "shutting down" nginx, and it was still running. So it seems this was the problem, although I don't know why. I couldn't find anything in the logs why nginx was unable to start/stop or reload the config. The only difference after killing and restarting nginx, that I could find, was that passenger now is spawned as a separate process and not as a child process of nginx. So maybe that was the reason, but I'm just guessing. Thanks for the hint in the right direction. I'm probably to used to apaches warnings when it cannot restart. Next time I'll check this first. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222679,222701#msg-222701 From emakyol at gmail.com Mon Feb 20 07:28:26 2012 From: emakyol at gmail.com (Engin Akyol) Date: Mon, 20 Feb 2012 01:28:26 -0600 Subject: Mapping a User Agent to an IP address: Message-ID: Hey guys, I'm running NGINX as a front for my apache servers and I'm having issues where mappings within the config on occasion aren't being set. For instance with the following config: func $my_ip_match{ default 0; 1.1.1.1/32 1; 1.1.1.2/32 1; 1.1.1.3/32 1; 1.1.1.4/32 1; } #map the matches map $name $match_value { default 0; ~*name_1 $my_ip_match; ~*name_2 $my_ip_match; ~*name_3 $my_ip_match; ~*name_4 $my_ip_match; } $my_ip_match will be set if the given IP/subnet is matched, but the subsequent mapping is randomly executed. I haven't been able to come up with any reason why it ocassionally doesn't map. Is there any reason for this? Thanks in advance! /Engin From ne at vbart.ru Mon Feb 20 09:26:41 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 20 Feb 2012 13:26:41 +0400 Subject: LZ4 + nginx In-Reply-To: References: Message-ID: <201202201326.42065.ne@vbart.ru> On Monday 20 February 2012 04:34:24 Ryan Brown wrote: > Just a feature request, > > Would be nice to have nginx support for LZ4 (like gzip static > support), to have an alternative compression method built in.. > > http://code.google.com/p/lz4/ > Are there any browser that supports it? wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Feb 20 12:28:52 2012 From: nginx-forum at nginx.us (rishabh) Date: Mon, 20 Feb 2012 07:28:52 -0500 (EST) Subject: nginx ignores access_log directive when post_action specifie In-Reply-To: <20120210073121.GU67687@mdounin.ru> References: <20120210073121.GU67687@mdounin.ru> Message-ID: Hi, I am trying to log into two files. one default and one custom via post_action. 
http { access_log /var/log/nginx/access.log; server { location @postactionlocation { set_by_lua_file $logdata /var/www/log.lua; access_log /var/log/nginx/access2.log '$logdata'; return 444; } location / { #someproxypass here } post_action @postactionlocation; } In this case only access2.log(via post_action) is written and not the default access.log(in http) What would be an optimal solution. Thanks -- Rishabh Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Fri, Feb 10, 2012 at 01:29:26AM -0500, rishabh > wrote: > > > I am facing the same problem, is there any > update on this issue ? > > Logging happens in a location where request > completes, and with > post_action it's the location where post_action > processed. > So the problem looks like configuration one. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,92464,222714#msg-222714 From mdounin at mdounin.ru Mon Feb 20 15:18:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Feb 2012 19:18:32 +0400 Subject: nginx ignores access_log directive when post_action specifie In-Reply-To: References: <20120210073121.GU67687@mdounin.ru> Message-ID: <20120220151831.GD67687@mdounin.ru> Hello! On Mon, Feb 20, 2012 at 07:28:52AM -0500, rishabh wrote: > Hi, > > I am trying to log into two files. one default and one custom via > post_action. > > http { > access_log /var/log/nginx/access.log; > > server { > location @postactionlocation { > set_by_lua_file $logdata /var/www/log.lua; > access_log /var/log/nginx/access2.log '$logdata'; > return 444; > } > > location / { > #someproxypass here > } > > post_action @postactionlocation; > } > > > In this case only access2.log(via post_action) is written and not the > default access.log(in http) > > What would be an optimal solution. If you want request to be logged into two logs, you have to define two access_log directives where requests are logged, i.e. location / { access_log /var/log/nginx/access.log; access_log /var/log/nginx/access2.log; ... } Maxim Dounin From nginx-forum at nginx.us Mon Feb 20 17:59:38 2012 From: nginx-forum at nginx.us (anonymous_coward) Date: Mon, 20 Feb 2012 12:59:38 -0500 (EST) Subject: nginx reverse proxy proxies subset of requests slowly Message-ID: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> (This is cross-post from Server Fault: http://serverfault.com/questions/361742/nginx-reverse-proxy-proxies-subset-of-requests-slowly) We're running an nginx reverse proxy in front of a couple of IIS 7.5 web servers. I'm benchmarking a particular page using Apache Bench. The page is fully cached in memory in IIS (using ASP.NET outputcache). No caching is configured for nginx. We've noted a discrepancy in the benchmark results between runs straight up against one IIS server (no reverse proxying) and with an nginx reverse proxy in between. With the proxy in place, for non-trivial loads, a subset of requests take very long to complete. Without the proxy in place, all requests are completed in reasonably good time. I've benchmarked using nginx running on machines both large and small with the same result. I'm including Apache Bench output below, the first listing was generated in a run straight against IIS, the second listing was run with nginx in place. The nginx error-log shows nothing untoward. 
My question is whether anyone has clues as to what part of nginx or the nginx-IIS interaction might cause this phenomenon or just ideas as to where we should start looking for clues or whether it might just be a benchmarking artifact. **IIS** user at host:~$ ab -n 5000 -c 1000 [IIS-Host] This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking [IP] (be patient) Completed 500 requests Completed 1000 requests Completed 1500 requests Completed 2000 requests Completed 2500 requests Completed 3000 requests Completed 3500 requests Completed 4000 requests Completed 4500 requests Completed 5000 requests Finished 5000 requests Server Software: Microsoft-IIS/7.5 Server Hostname: [IP] Server Port: [port] Document Path: [PATH] Document Length: 37840 bytes Concurrency Level: 1000 Time taken for tests: 12.592 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 190905000 bytes HTML transferred: 189200000 bytes Requests per second: 397.08 [#/sec] (mean) Time per request: 2518.385 [ms] (mean) Time per request: 2.518 [ms] (mean, across all concurrent requests) Transfer rate: 14805.57 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 12 17.9 3 62 Processing: 37 1944 1351.6 1548 7429 Waiting: 18 1522 751.9 1531 6248 Total: 68 1956 1343.9 1551 7432 Percentage of the requests served within a certain time (ms) 50% 1551 66% 1577 75% 1600 80% 1839 90% 4001 95% 5682 98% 6178 99% 6377 100% 7432 (longest request) user at host:~$ **nginx** user at host:~$ ab -n 5000 -c 1000 [NGINX-HOST] This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking [HOST] (be patient) Completed 500 requests Completed 1000 requests Completed 1500 requests Completed 2000 requests Completed 2500 requests Completed 3000 requests Completed 3500 requests Completed 4000 requests Completed 4500 requests Completed 5000 requests Finished 5000 requests Server Software: nginx/1.0.11 Server Hostname: [HOST] Server Port: 80 Document Path: [PATH] Document Length: 37840 bytes Concurrency Level: 1000 Time taken for tests: 46.770 seconds Complete requests: 5000 Failed requests: 0 Write errors: 0 Total transferred: 190490000 bytes HTML transferred: 189200000 bytes Requests per second: 106.91 [#/sec] (mean) Time per request: 9353.928 [ms] (mean) Time per request: 9.354 [ms] (mean, across all concurrent requests) Transfer rate: 3977.48 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 445 1130.0 6 7072 Processing: 14 3088 5987.3 825 43550 Waiting: 9 2541 6125.0 199 43541 Total: 18 3532 6064.6 1168 43554 Percentage of the requests served within a certain time (ms) 50% 1168 66% 2333 75% 3306 80% 3590 90% 14597 95% 19448 98% 23996 99% 25970 100% 43554 (longest request) user at host:~$ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222724,222724#msg-222724 From mdounin at mdounin.ru Mon Feb 20 18:27:42 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Feb 2012 22:27:42 +0400 Subject: nginx reverse proxy proxies subset of requests slowly In-Reply-To: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> References: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: 
<20120220182741.GF67687@mdounin.ru> Hello! On Mon, Feb 20, 2012 at 12:59:38PM -0500, anonymous_coward wrote: > (This is cross-post from Server Fault: > http://serverfault.com/questions/361742/nginx-reverse-proxy-proxies-subset-of-requests-slowly) > > We're running an nginx reverse proxy in front of a couple of IIS 7.5 web > servers. I'm benchmarking a particular page using Apache Bench. The page > is fully cached in memory in IIS (using ASP.NET outputcache). No caching > is configured for nginx. > > We've noted a discrepancy in the benchmark results between runs straight > up against one IIS server (no reverse proxying) and with an nginx > reverse proxy in between. With the proxy in place, for non-trivial > loads, a subset of requests take very long to complete. Without the > proxy in place, all requests are completed in reasonably good time. [...] > **nginx** > > user at host:~$ ab -n 5000 -c 1000 [NGINX-HOST] [...] > Concurrency Level: 1000 Under which OS you run nginx? Please note that 1000 is too high for nginx on Windows, see known issues list here: http://nginx.org/en/docs/windows.html#known_issues If running nginx under Windows, please also make sure you have worker_processes set to 1. Maxim Dounin From nginx-forum at nginx.us Mon Feb 20 19:28:54 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 14:28:54 -0500 (EST) Subject: Dynamic Subdomain Configuration Message-ID: Hello, We provide a subdomain for each user, for example: paul.ourdomain.com jay.ourdomain.com bob.ourdomain.com xxxxx.ourdomain.com Currently, I am doing this manually by adding another config in /etc/nginx/conf.d for each subdomain. A typical conf looks like: server { listen 80; server_name paul.ourdomain.com; root /srv/www/users/paul/wp; index index.php; access_log /var/log/nginx/vhosts/paul.access.log; error_log /var/log/nginx/vhosts/paul.error.log; include /etc/nginx/excludes.conf; include /etc/nginx/wordpress.conf; include /etc/nginx/expires.conf; } Is there I way I can abstract this, and prevent creating a configuration for each subdomain? I was reading something about map, but don't fully understand it. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222733#msg-222733 From nginx-forum at nginx.us Mon Feb 20 19:39:14 2012 From: nginx-forum at nginx.us (anonymous_coward) Date: Mon, 20 Feb 2012 14:39:14 -0500 (EST) Subject: nginx reverse proxy proxies subset of requests slowly In-Reply-To: <20120220182741.GF67687@mdounin.ru> References: <20120220182741.GF67687@mdounin.ru> Message-ID: <325662883097a72c0df4cd8e3780d71c.NginxMailingListEnglish@forum.nginx.org> Nginx is running on Linux (Ubuntu). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222724,222737#msg-222737 From edho at myconan.net Mon Feb 20 19:39:42 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 21 Feb 2012 02:39:42 +0700 Subject: Dynamic Subdomain Configuration In-Reply-To: References: Message-ID: On Tue, Feb 21, 2012 at 2:28 AM, justin wrote: > Hello, > > We provide a subdomain for each user, for example: > > ? ? paul.ourdomain.com > ? ? jay.ourdomain.com > ? ? bob.ourdomain.com > ? ? xxxxx.ourdomain.com > > Currently, I am doing this manually by adding another config in > /etc/nginx/conf.d for each subdomain. 
A typical conf looks like: > > server { > ?listen 80; > > ?server_name paul.ourdomain.com; > > ?root /srv/www/users/paul/wp; > > ?index index.php; > > ?access_log /var/log/nginx/vhosts/paul.access.log; > ?error_log /var/log/nginx/vhosts/paul.error.log; > > ?include /etc/nginx/excludes.conf; > ?include /etc/nginx/wordpress.conf; > ?include /etc/nginx/expires.conf; > } > > Is there I way I can abstract this, and prevent creating a configuration > for each subdomain? I was reading something about map, but don't fully > understand it. > Probably something like this: map $host $username { some.domain.com usera; another.one.net userb; } server { root /srv/www/users/$username/wp; access_log /var/log/nginx/vhosts/$username.access.log; ... } Or this (without map): server { server_name ~^(?.+)\.domain\.com$; root /srv/www/users/$username/wp; ... } -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Mon Feb 20 19:55:39 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 14:55:39 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: References: Message-ID: <5837ac7fb301104cda70ee8121b332c4.NginxMailingListEnglish@forum.nginx.org> Edho, Thank you very much for the assistance. Here is what I used, and seems to be working perfectly: server { listen 80; server_name ~^(?.+)\.mydomain\.com$; root /srv/www/users/$user/wp; index index.php; access_log /var/log/nginx/vhosts/$user.access.log; error_log /var/log/nginx/vhosts/$user.error.log; include /etc/nginx/excludes.conf; include /etc/nginx/wordpress.conf; include /etc/nginx/expires.conf; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222740#msg-222740 From mdounin at mdounin.ru Mon Feb 20 21:12:55 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Feb 2012 01:12:55 +0400 Subject: nginx reverse proxy proxies subset of requests slowly In-Reply-To: <325662883097a72c0df4cd8e3780d71c.NginxMailingListEnglish@forum.nginx.org> References: <20120220182741.GF67687@mdounin.ru> <325662883097a72c0df4cd8e3780d71c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120220211255.GG67687@mdounin.ru> Hello! On Mon, Feb 20, 2012 at 02:39:14PM -0500, anonymous_coward wrote: > Nginx is running on Linux (Ubuntu). Then you may want to check network problems (packet loss?) and other related things like local ports exhaustion (local port range small and no tw_reuse/tw_recycle activated?) and firewall state table overflows. Maxim Dounin From mdounin at mdounin.ru Mon Feb 20 21:15:11 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Feb 2012 01:15:11 +0400 Subject: Dynamic Subdomain Configuration In-Reply-To: <5837ac7fb301104cda70ee8121b332c4.NginxMailingListEnglish@forum.nginx.org> References: <5837ac7fb301104cda70ee8121b332c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120220211511.GH67687@mdounin.ru> Hello! On Mon, Feb 20, 2012 at 02:55:39PM -0500, justin wrote: > Edho, > > Thank you very much for the assistance. Here is what I used, and seems > to be working perfectly: [...] > access_log /var/log/nginx/vhosts/$user.access.log; > error_log /var/log/nginx/vhosts/$user.error.log; Note: the "error_log" directive doesn't support variables, and this will log all errors into "$user.error.log" file. 
Maxim Dounin From gt0057 at gmail.com Mon Feb 20 23:38:15 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Tue, 21 Feb 2012 00:38:15 +0100 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: <239288B8417143FE922B11C7222A1C88@Desktop> References: <20120217125414.GG22076@craic.sysops.org> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> <239288B8417143FE922B11C7222A1C88@Desktop> Message-ID: Hi Unfortunately the problem is partially solved. postgres_query "SELECT user FROM usertable "WHERE user=$user AND pwd=crypt($pass, pwd)"; The crypt function in postgresql works correctly only with the password created by the htpasswd program, but do not work with passwords created by PHP. Best regards, and many thanks. 2012/2/18 Piotr Sikora : > Hi, > > >> Hi reason, the password is not in MD5, but rather in DES (PHP --> >> crypt($verpas, CRYPT_STD_DES) >> What should I use instead of set_md5 ? >> DES on this page http://wiki.nginx.org/HttpSetMiscModule#Installation >> is never mentioned > > > I'm not aware of any module that would offer crypt() hashing for variables > in nginx.conf. > > On the bright side, PostgreSQL's crypt() [1] should help you. Could you > please try: > > ? postgres_query ? "SELECT user FROM usertable > ? ? ? ? ? ? ? ? ? ?"WHERE user=$user AND pwd=crypt($pass, pwd)"; > > [1] http://www.postgresql.org/docs/9.1/static/pgcrypto.html > > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Feb 20 23:45:17 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 18:45:17 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: <20120220211511.GH67687@mdounin.ru> References: <20120220211511.GH67687@mdounin.ru> Message-ID: How do I handle the fallback, i.e. somebody types: foobar.mydomain.com Which does not exists, right now I am getting: no input file. Instead, would love to simply return a 404 error, or even better serve a custom static 404 error page. Maxim: Bummer that error_log does not support variables, anyway to do this dynamically? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222746#msg-222746 From nginx-forum at nginx.us Tue Feb 21 00:18:35 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 19:18:35 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: References: <20120220211511.GH67687@mdounin.ru> Message-ID: Actually, I think I found how to set the 404: location ~ \.php$ { if (!-f $document_root/$fastcgi_script_name) { log_not_found off; return 404; } include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors off; fastcgi_pass php1.local.pagelines.com:9000; } But I can't use log_not_found off, getting an error about the ability to use this in the location. 
Basically, I don't want an error logged if somebody types: foobar.mydomain.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222747#msg-222747 From edho at myconan.net Tue Feb 21 01:10:40 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 21 Feb 2012 08:10:40 +0700 Subject: Dynamic Subdomain Configuration In-Reply-To: References: <20120220211511.GH67687@mdounin.ru> Message-ID: On Tue, Feb 21, 2012 at 7:18 AM, justin wrote: > Actually, I think I found how to set the 404: > > location ~ \.php$ { > ?if (!-f $document_root/$fastcgi_script_name) { > ? ?log_not_found off; > ? ?return 404; > ?} > > ?include /etc/nginx/fastcgi_params; > ?fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > ?fastcgi_intercept_errors off; > ?fastcgi_pass php1.local.pagelines.com:9000; > } > > But I can't use log_not_found off, getting an error about the ability to > use this in the location. Basically, I don't want an error logged if > somebody types: > Try try_files $uri =404; -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Tue Feb 21 01:28:01 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 20:28:01 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: References: <20120220211511.GH67687@mdounin.ru> Message-ID: <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> This is for Wordpress, I already have a try_files. I want to log not found everywhere else, just not in the dynamic subdomain location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { if (!-f $document_root/$fastcgi_script_name) { log_not_found off; return 404; } include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors off; fastcgi_pass php1.local.pagelines.com:9000; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222750#msg-222750 From edho at myconan.net Tue Feb 21 01:32:33 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 21 Feb 2012 08:32:33 +0700 Subject: Dynamic Subdomain Configuration In-Reply-To: <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> References: <20120220211511.GH67687@mdounin.ru> <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Feb 21, 2012 at 8:28 AM, justin wrote: > This is for Wordpress, I already have a try_files. 
I want to log not > found everywhere else, just not in the dynamic subdomain > You can (should) have another try_files in location ~ \.php$ > location / { > ?try_files $uri $uri/ /index.php?$args; > } > > location ~ \.php$ { try_files $uri =404; > > ?include /etc/nginx/fastcgi_params; > ?fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > ?fastcgi_intercept_errors off; > ?fastcgi_pass php1.local.pagelines.com:9000; > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222750#msg-222750 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Tue Feb 21 01:40:25 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 20 Feb 2012 20:40:25 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> References: <20120220211511.GH67687@mdounin.ru> <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Edho, Still seeing logged: 2012/02/20 17:37:18 [error] 12840#0: *27 testing "/srv/www/users/boo/wp" existence failed (2: No such file or directory) while logging request With: location ~ \.php$ { try_files $uri =404; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222752#msg-222752 From edho at myconan.net Tue Feb 21 01:51:56 2012 From: edho at myconan.net (Edho Arief) Date: Tue, 21 Feb 2012 08:51:56 +0700 Subject: Dynamic Subdomain Configuration In-Reply-To: References: <20120220211511.GH67687@mdounin.ru> <4f3bd3c9fea2dd9f3a48e0749a27b0c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Tue, Feb 21, 2012 at 8:40 AM, justin wrote: > Edho, > > Still seeing logged: > > 2012/02/20 17:37:18 [error] 12840#0: *27 testing "/srv/www/users/boo/wp" > existence failed (2: No such file or directory) while logging request > The errors seems like coming from root directive. I don't know how to prevent logging that, sorry. Anyway, the try_files in location php block should allow you to use custom static 404 error page. -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From ysma at corp.netease.com Tue Feb 21 04:47:37 2012 From: ysma at corp.netease.com (=?gb2312?B?wu3Txcn6?=) Date: Tue, 21 Feb 2012 12:47:37 +0800 Subject: Is this a bug??? Message-ID: <201202211247376126280@corp.netease.com> hello,everyone: I have the following questions about add_header instruction.(version 1.1.8) This is my nginx.conf 43 add_header Server ok; 44 location / { 45 add_header location ok; 46 root html; 47 index index.html index.htm; then construct a request? [root at gitclubs conf]# curl -I http://localhost/ HTTP/1.1 200 OK Server: nginx/1.0.12 Date: Mon, 20 Feb 2012 08:13:02 GMT Content-Type: text/html Content-Length: 151 Last-Modified: Mon, 13 Feb 2012 02:12:22 GMT Connection: keep-alive location: ok Accept-Ranges: bytes [root at gitclubs conf]# There is only location header. In ngx_http_headers_merge_conf() (ngx_http_headers_filter_module.c) 440 if (conf->headers == NULL) { 441 conf->headers = prev->headers; 442 } If I config add_header in location and server, then prev->headers can't be merged to conf->headers. Is this intended or a bug? Thanks. ??15011367065 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Tue Feb 21 05:16:41 2012 From: nginx-forum at nginx.us (anonymous_coward) Date: Tue, 21 Feb 2012 00:16:41 -0500 (EST) Subject: nginx reverse proxy proxies subset of requests slowly In-Reply-To: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> References: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7fdf6a1dfc36522d83b446b0af75edc1.NginxMailingListEnglish@forum.nginx.org> Port range is net.ipv4.ip_local_port_range = 32768 61000, so I guess that's pretty standard and shouldn't be a problem for this benchmarks (which involves at most 500 requests). I've tw_reuse and tw_recycle, but that does not seem to be having much of an effect. In rough numbers, what's the expected per-core throughput of nginx in reverse proxy mode? (with or without gzip'ing) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222724,222758#msg-222758 From nginx-forum at nginx.us Tue Feb 21 06:23:35 2012 From: nginx-forum at nginx.us (anonymous_coward) Date: Tue, 21 Feb 2012 01:23:35 -0500 (EST) Subject: nginx reverse proxy proxies subset of requests slowly In-Reply-To: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> References: <88a15a91497944abed75335519ea86ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have now done exhaustive tests on a larger instance that yields higher throughput (>1000 #/s). The /proc/sys/net/ipv4/tcp_syncookies setting seems to have an effect on how much this subset of requests hang (it's worse with the setting enabled). I'm seeing "TCP: Possible SYN flooding on port 80. Dropping request." (or "Sending cookies") in kern.log. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222724,222760#msg-222760 From andrew at nginx.com Tue Feb 21 06:44:43 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Mon, 20 Feb 2012 22:44:43 -0800 Subject: alternative ssl Message-ID: Hi, NGINX dev team has been working with CyaSSL in regard to integrating this library as an alternative SSL engine: http://www.yassl.com/yaSSL/Blog/Entries/2012/2/20_CyaSSL_working_with_Nginx.html?utm_source=twitterfeed&utm_medium=twitter What would be opinions here in regards to what kind of an SSL is good to have with nginx? And what would be the reasons for that (short)? Many thanks! From nginx-forum at nginx.us Tue Feb 21 07:54:18 2012 From: nginx-forum at nginx.us (srk.) Date: Tue, 21 Feb 2012 02:54:18 -0500 (EST) Subject: Unable to configure nginx as reverse proxy In-Reply-To: <35cd403012874d2da036cc20b701b4f8.NginxMailingListEnglish@forum.nginx.org> References: <87d39dmp77.wl%appa@perusio.net> <35cd403012874d2da036cc20b701b4f8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Can someone please share a working config for the above topology i have mentioned? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222558,222767#msg-222767 From nginx-forum at nginx.us Tue Feb 21 08:14:26 2012 From: nginx-forum at nginx.us (atrus) Date: Tue, 21 Feb 2012 03:14:26 -0500 (EST) Subject: Proxy_cache recursive update. Message-ID: Hi guys, I know that nginx proxy_cache is based on url (page). The cache file is something like this : ./c/a00b4dce27af1b06075339f598a4050c and it includes all the content of that page : text, image, banner, ... 
Suppose that I have a site map (on real server) like this : /page1.html /page2.html /page3.html /banner.jpeg all the three pages are include the banner.jpeg by tag References: <201202211247376126280@corp.netease.com> Message-ID: <20120221085733.GM67687@mdounin.ru> Hello! On Tue, Feb 21, 2012 at 12:47:37PM +0800, ??? wrote: > hello,everyone: > I have the following questions about add_header instruction.(version 1.1.8) > > This is my nginx.conf > > 43 add_header Server ok; > 44 location / { > 45 add_header location ok; > 46 root html; > 47 index index.html index.htm; > > then construct a request? > > [root at gitclubs conf]# curl -I http://localhost/ > HTTP/1.1 200 OK > Server: nginx/1.0.12 > Date: Mon, 20 Feb 2012 08:13:02 GMT > Content-Type: text/html > Content-Length: 151 > Last-Modified: Mon, 13 Feb 2012 02:12:22 GMT > Connection: keep-alive > location: ok > Accept-Ranges: bytes > [root at gitclubs conf]# > > There is only location header. > > In ngx_http_headers_merge_conf() (ngx_http_headers_filter_module.c) > 440 if (conf->headers == NULL) { > 441 conf->headers = prev->headers; > 442 } > > If I config add_header in location and server, then prev->headers can't be merged to conf->headers. > Is this intended or a bug? This is intended. Array-type directives (all of them, not only add_header) are inherited only if there are no corresponding directives defined at particular level. Maxim Dounin From nginx-forum at nginx.us Tue Feb 21 09:28:02 2012 From: nginx-forum at nginx.us (piotr.pawlowski) Date: Tue, 21 Feb 2012 04:28:02 -0500 (EST) Subject: Stub_status explenation needed Message-ID: Dear all, I am not quite sure how to understand stub_status module output. I have three servers, on which I have this module turned on. Maybe servers infrastructure is the key here, so let me explain it: * ProxyServer - server with NginX, where all requests are coming and proxied to other two servers * WebServer - server with NginX and PHP, where 90% of our application is running * DbServer - server with databases but also with NginX and PHP for rest of the application Now, for WebServer and DbServer, stub status looks as follows: DbServer: Active connections: 1 server accepts handled requests 1749046 1749046 1749046 Reading: 0 Writing: 1 Waiting: 0 WebServer: Active connections: 4 server accepts handled requests 142484042 142484042 142484042 Reading: 0 Writing: 4 Waiting: 0 As you can see, all values (accepts, handled, requests) are the same, which, in my opinion, is good (am I right?). Now below is output from ProxyServer: Active connections: 1225 server accepts handled requests 199105 199105 573654 Reading: 3 Writing: 5 Waiting: 1217 First thing, which worries me, is the fact, that values for 'accepted' and 'handled' are a lot of different that value of 'requests'. Does it mean, that less than 40% of requests are not handled by NginX? Second thing is, that amount of 'Active connections' and 'Waiting' is very high... Should I be worried about those values? Or everything is ok and my attitude to my server is hypersensitive? Thank you in advance for any tip or a help in understanding this issue. 
Regards Piotr Pawlowski Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222771,222771#msg-222771 From mp3geek at gmail.com Tue Feb 21 11:32:28 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Wed, 22 Feb 2012 00:32:28 +1300 Subject: LZ4 + nginx In-Reply-To: <201202201326.42065.ne@vbart.ru> References: <201202201326.42065.ne@vbart.ru> Message-ID: sorry, I assumed the decryption would've been done by nginx rather than the browser On Mon, Feb 20, 2012 at 10:26 PM, Valentin V. Bartenev wrote: > On Monday 20 February 2012 04:34:24 Ryan Brown wrote: >> Just a feature request, >> >> Would be nice to have nginx support for LZ4 (like gzip static >> support), to have an alternative compression method built in.. >> >> http://code.google.com/p/lz4/ >> > > Are there any browser that supports it? > > ?wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mp3geek at gmail.com Tue Feb 21 11:35:03 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Wed, 22 Feb 2012 00:35:03 +1300 Subject: using tmpfs Message-ID: Performance wise, just hosting static files using tmpfs rather than ext4.. would do alot? (cpu load/throughput etc) Is there any benefit in using a ramdisk as your wwwroot dir? From mp3geek at gmail.com Tue Feb 21 11:48:32 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Wed, 22 Feb 2012 00:48:32 +1300 Subject: alternative ssl In-Reply-To: References: Message-ID: Are there any (recent) benchmarks vs OpenSSL? or features not supported by CyaSSL but supported in OpenSSL? Its all about performance, cpu usage. I have lots of SSL connections coming through so any improvements with speed and decrease in CPU usage I'm for it. On Tue, Feb 21, 2012 at 7:44 PM, Andrew Alexeev wrote: > Hi, > > NGINX dev team has been working with CyaSSL in regard to integrating this library as an alternative SSL engine: > > http://www.yassl.com/yaSSL/Blog/Entries/2012/2/20_CyaSSL_working_with_Nginx.html?utm_source=twitterfeed&utm_medium=twitter > > What would be opinions here in regards to what kind of an SSL is good to have with nginx? And what would be the reasons for that (short)? > > Many thanks! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ft at falkotimme.com Tue Feb 21 12:32:23 2012 From: ft at falkotimme.com (Falko Timme) Date: Tue, 21 Feb 2012 13:32:23 +0100 Subject: using tmpfs References: Message-ID: <706DB2B926F54030B0FE654B089F15D9@notebook> It will reduce your disk I/O a lot. But I wouldn't place the whole wwwroot dir on tmpfs because everytime you reboot the contents of tmpfs is gone. Instead, if your web application allows for caching (like Drupal with the Boost module, WordPress with the W3 TotalCache or WP Super Cache, Typo3 with the nc_staticfilecache extension, etc.), I'd put the cache onto tmpfs. ----- Original Message ----- From: "Ryan Brown" To: Sent: Tuesday, February 21, 2012 12:35 PM Subject: using tmpfs > Performance wise, just hosting static files using tmpfs rather than > ext4.. would do alot? (cpu load/throughput etc) Is there any benefit > in using a ramdisk as your wwwroot dir? 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Feb 21 12:59:47 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Feb 2012 16:59:47 +0400 Subject: Stub_status explenation needed In-Reply-To: References: Message-ID: <20120221125947.GO67687@mdounin.ru> Hello! On Tue, Feb 21, 2012 at 04:28:02AM -0500, piotr.pawlowski wrote: > Dear all, > > I am not quite sure how to understand stub_status module output. I have > three servers, on which I have this module turned on. > Maybe servers infrastructure is the key here, so let me explain it: > * ProxyServer - server with NginX, where all requests are coming and > proxied to other two servers > * WebServer - server with NginX and PHP, where 90% of our application is > running > * DbServer - server with databases but also with NginX and PHP for rest > of the application > > Now, for WebServer and DbServer, stub status looks as follows: > > DbServer: > Active connections: 1 > server accepts handled requests > 1749046 1749046 1749046 > Reading: 0 Writing: 1 Waiting: 0 > > WebServer: > Active connections: 4 > server accepts handled requests > 142484042 142484042 142484042 > Reading: 0 Writing: 4 Waiting: 0 > > As you can see, all values (accepts, handled, requests) are the same, > which, in my opinion, is good (am I right?). > > Now below is output from ProxyServer: > Active connections: 1225 > server accepts handled requests > 199105 199105 573654 > Reading: 3 Writing: 5 Waiting: 1217 > > First thing, which worries me, is the fact, that values for 'accepted' > and 'handled' are a lot of different that value of 'requests'. Does it > mean, that less than 40% of requests are not handled by NginX? The "requests" value represents number of http requests, which is expected to be different from "accepts" if keepalive connections are used. The "handled" value represents number of handled connections. It must be the same as "accepts" unless some resource limits were reached (e.g. worker_connections overflow). I.e. if your see "handled" less than "accepts" - it means you have problem and some client connections were dropped. > Second thing is, that amount of 'Active connections' and 'Waiting' is > very high... > Should I be worried about those values? Or everything is ok and my > attitude to my server is hypersensitive? The "waiting" represents number of keepalive connections. It's expected to be high if you have keepalive enabled. The "active connections" includes "waiting" so it's expected to be high too. Maxim Dounin From ne at vbart.ru Tue Feb 21 13:15:25 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 21 Feb 2012 17:15:25 +0400 Subject: LZ4 + nginx In-Reply-To: References: <201202201326.42065.ne@vbart.ru> Message-ID: <201202211715.25947.ne@vbart.ru> On Tuesday 21 February 2012 15:32:28 Ryan Brown wrote: > sorry, I assumed the decryption would've been done by nginx rather > than the browser > It seems that you suggest: read a lz4 file or stream -> decode lz4 -> encode gzip -> serve to browser instead of: read a gzip file or stream -> serve to browser ..so, where's the profit? wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Feb 21 14:13:11 2012 From: nginx-forum at nginx.us (atrus) Date: Tue, 21 Feb 2012 09:13:11 -0500 (EST) Subject: Proxy_cache recursive update. In-Reply-To: References: Message-ID: Guy ! Is there any one have fix this issue ?!!! Thanks so much. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222769,222784#msg-222784 From office at tecserver.com Tue Feb 21 14:27:51 2012 From: office at tecserver.com (Dipl.-Ing. Juergen Ladstaetter) Date: Tue, 21 Feb 2012 09:27:51 -0500 Subject: weird caching behaviour Message-ID: <02f801ccf0a5$083bd230$18b37690$@tecserver.com> Hi guys, we're running a load balanced cluster with nginx as load balancing software and use the caching feature. So far we're caching for 3 high frequent sites and it's working great. Now when I add another site to be cached (configuration is below) nginx starts to cache EVERY website that it's loadbalancing. Can you find an error in the configuration or tell me why it's doing this? Thanks in advance. Nginx.conf: user nginx nginx; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error_log info; events { worker_connections 1024; use epoll; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # standard logging log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$gzip_ratio"'; # cache logging log_format cache '$remote_addr - $remote_user [$time_local] - $http_referer - ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '"$request" ($status) ' '"$http_user_agent" '; access_log /var/log/nginx/cache.log cache; client_header_timeout 10m; client_body_timeout 10m; send_timeout 10m; connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 8k; request_pool_size 4k; client_max_body_size 100M; gzip on; gzip_min_length 1100; gzip_buffers 4 8k; gzip_types text/plain; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 75 20; proxy_read_timeout 180s; ignore_invalid_headers on; index index.html; # buffering proxy off (fake speed improvement) proxy_buffering off; proxy_buffer_size 128k; proxy_buffers 4 256k; # definition of the load balancing nodes upstream backend { ip_hash; server SYS_SERVER1:80; server SYS_SERVER2:80; } # set a general temp path proxy_temp_path /tmp/cache/tmp; # include all vhosts include sites-enabled/*; } Default vhost without caching that catches all non-specific requests: # standard load balancer server { listen SYSserver:80; server_name _; # status location /nginx_status { stub_status on; access_log off; allow all; #deny all; } location / { proxy_pass http://backend; proxy_buffering off; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } A Vhost that has caching enabled and working: # definition about the cache proxy_cache_path /tmp/cache/siteA levels=1:2 keys_zone=siteA:10m max_size=1g inactive=1h; # listener server { listen SYSserver:80; server_name www.sitea.com; location / { proxy_pass http://backend; proxy_buffering on; proxy_cache siteA; proxy_cache_valid 200 10m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_key "$scheme$host$request_uri$cookie_user"; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } If we add the following vhost to be cached too, the whole system gets cached which shouldn't be: # definition about the cache proxy_cache_path /tmp/cache/felix levels=1:2 keys_zone=felix:10m max_size=1g inactive=1h; # felix listener server { listen 
CONserver:80; server_name www.siteb.com; location / { proxy_pass http://backend; proxy_buffering on; proxy_cache felix; proxy_cache_valid 200 10m; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_key "$scheme$host$request_uri$cookie_user"; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } If you see anything that is wrong or could be configured better, please let me know. This weird caching faulty behaviour is confusing me since nginx won't tell me any error. Thanks in advance! Juergen From mdounin at mdounin.ru Tue Feb 21 14:37:33 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Feb 2012 18:37:33 +0400 Subject: weird caching behaviour In-Reply-To: <02f801ccf0a5$083bd230$18b37690$@tecserver.com> References: <02f801ccf0a5$083bd230$18b37690$@tecserver.com> Message-ID: <20120221143733.GR67687@mdounin.ru> Hello! On Tue, Feb 21, 2012 at 09:27:51AM -0500, Dipl.-Ing. Juergen Ladstaetter wrote: > Hi guys, > > we're running a load balanced cluster with nginx as load balancing software > and use the caching feature. So far we're caching for 3 high frequent sites > and it's working great. > Now when I add another site to be cached (configuration is below) nginx > starts to cache EVERY website that it's loadbalancing. Can you find an error > in the configuration or tell me why it's doing this? Thanks in advance. > > Nginx.conf: [...] > # include all vhosts > include sites-enabled/*; > } > > Default vhost without caching that catches all non-specific requests: > # standard load balancer > server { > listen SYSserver:80; > server_name _; This is *not* default vhost, as you don't have "default_server" option in listen directive, and order isn't guaranteed due to "include sites-enabled/*" used. Using listen SYSserver:80 default_server; should fix your problem. More details may be found here: http://nginx.org/en/docs/http/server_names.html Maxim Dounin From office at tecserver.com Tue Feb 21 14:59:32 2012 From: office at tecserver.com (Dipl.-Ing. Juergen Ladstaetter) Date: Tue, 21 Feb 2012 09:59:32 -0500 Subject: AW: weird caching behaviour In-Reply-To: <20120221143733.GR67687@mdounin.ru> References: <02f801ccf0a5$083bd230$18b37690$@tecserver.com> <20120221143733.GR67687@mdounin.ru> Message-ID: <030101ccf0a9$7518a370$5f49ea50$@tecserver.com> That's it. Thanks very much -----Urspr?ngliche Nachricht----- Von: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] Im Auftrag von Maxim Dounin Gesendet: Tuesday, February 21, 2012 9:38 AM An: nginx at nginx.org Betreff: Re: weird caching behaviour Hello! On Tue, Feb 21, 2012 at 09:27:51AM -0500, Dipl.-Ing. Juergen Ladstaetter wrote: > Hi guys, > > we're running a load balanced cluster with nginx as load balancing > software and use the caching feature. So far we're caching for 3 high > frequent sites and it's working great. > Now when I add another site to be cached (configuration is below) > nginx starts to cache EVERY website that it's loadbalancing. Can you > find an error in the configuration or tell me why it's doing this? Thanks in advance. > > Nginx.conf: [...] 
> # include all vhosts > include sites-enabled/*; > } > > Default vhost without caching that catches all non-specific requests: > # standard load balancer > server { > listen SYSserver:80; > server_name _; This is *not* default vhost, as you don't have "default_server" option in listen directive, and order isn't guaranteed due to "include sites-enabled/*" used. Using listen SYSserver:80 default_server; should fix your problem. More details may be found here: http://nginx.org/en/docs/http/server_names.html Maxim Dounin _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From andrew at nginx.com Tue Feb 21 15:22:37 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Tue, 21 Feb 2012 07:22:37 -0800 Subject: alternative ssl In-Reply-To: References: Message-ID: <5177F90F-BCBA-4269-BC56-096DB6FBA26A@nginx.com> Ryan, There's a preliminary work done so far, no benchmarks. CyaSSL is primarily about reducing memory per connection several times. -- On Feb 21, 2012, at 3:48, Ryan Brown wrote: > Are there any (recent) benchmarks vs OpenSSL? or features not > supported by CyaSSL but supported in OpenSSL? > > Its all about performance, cpu usage. I have lots of SSL connections > coming through so any improvements with speed and decrease in CPU > usage I'm for it. > > On Tue, Feb 21, 2012 at 7:44 PM, Andrew Alexeev wrote: >> Hi, >> >> NGINX dev team has been working with CyaSSL in regard to integrating this library as an alternative SSL engine: >> >> http://www.yassl.com/yaSSL/Blog/Entries/2012/2/20_CyaSSL_working_with_Nginx.html?utm_source=twitterfeed&utm_medium=twitter >> >> What would be opinions here in regards to what kind of an SSL is good to have with nginx? And what would be the reasons for that (short)? >> >> Many thanks! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From ne at vbart.ru Tue Feb 21 15:30:34 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Tue, 21 Feb 2012 19:30:34 +0400 Subject: Proxy_cache recursive update. In-Reply-To: References: Message-ID: <201202211930.34290.ne@vbart.ru> On Tuesday 21 February 2012 12:14:26 atrus wrote: > Hi guys, > > I know that nginx proxy_cache is based on url (page). The cache file is > something like this : > ./c/a00b4dce27af1b06075339f598a4050c > and it includes all the content of that page : text, image, banner, ... > > Suppose that I have a site map (on real server) like this : > /page1.html > /page2.html > /page3.html > /banner.jpeg > > all the three pages are include the banner.jpeg by tag > The problem is, when I update the banner.jpeg with a new one (same file > name) on the real server, and I've updated the cached file banner.jpeg > on cached server (remove that cache file), but when I surf the > page1.html on cache server, it's still include the old banner.jpeg :( Do you mean, browser still include the old banner.jpeg? So, that's browser cache, not nginx. wbr, Valentin V. 
Bartenev From ktm at rice.edu Tue Feb 21 16:02:29 2012 From: ktm at rice.edu (ktm at rice.edu) Date: Tue, 21 Feb 2012 10:02:29 -0600 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> <239288B8417143FE922B11C7222A1C88@Desktop> Message-ID: <20120221160229.GB21114@aart.rice.edu> On Tue, Feb 21, 2012 at 12:38:15AM +0100, Giuseppe Tofoni wrote: > Hi > Unfortunately the problem is partially solved. > > postgres_query "SELECT user FROM usertable "WHERE user=$user AND > pwd=crypt($pass, pwd)"; > > The crypt function in postgresql works correctly only with the > password created by the htpasswd program, but do not work with > passwords created by PHP. > > Best regards, and many thanks. > You need to determine what "crypt" is being used in your PHP: http://php.net/manual/en/function.crypt.php Once you have that information, you should be able to figure out what you will need to do. Cheers, Ken From brane.gracnar at tsmedia.si Tue Feb 21 16:32:42 2012 From: brane.gracnar at tsmedia.si (=?UTF-8?B?IkJyYW5lIEYuIEdyYcSNbmFyIg==?=) Date: Tue, 21 Feb 2012 17:32:42 +0100 Subject: alternative ssl In-Reply-To: References: Message-ID: <4F43C72A.2060501@tsmedia.si> On 02/21/2012 12:48 PM, Ryan Brown wrote: > Are there any (recent) benchmarks vs OpenSSL? or features not > supported by CyaSSL but supported in OpenSSL? > > Its all about performance, cpu usage. I have lots of SSL connections > coming through so any improvements with speed and decrease in CPU > usage I'm for it. Cyassl has cleaner implementation and api. Maintainers also claim lower memory footprint, which is considerable argument http://www.yassl.com/yaSSL/Products-cyassl.html http://en.wikipedia.org/wiki/Comparison_of_TLS_Implementations Brane From gt0057 at gmail.com Tue Feb 21 16:45:20 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Tue, 21 Feb 2012 17:45:20 +0100 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: <20120221160229.GB21114@aart.rice.edu> References: <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <7D43058623FE4D5F84887D5F68709CC0@Desktop> <239288B8417143FE922B11C7222A1C88@Desktop> <20120221160229.GB21114@aart.rice.edu> Message-ID: Hi, In PHP I used crypt($pass, CRYPT_STD_DES) and I tried with the following statement postgres_query "SELECT user FROM usertable WHERE user=$user AND pwd=crypt($pass, substr(pwd, 1, 2))"; but do not work, some ideas? Best regards Giuseppe 2012/2/21 ktm at rice.edu : > On Tue, Feb 21, 2012 at 12:38:15AM +0100, Giuseppe Tofoni wrote: >> Hi >> Unfortunately the problem is partially solved. >> >> postgres_query ? "SELECT user FROM usertable "WHERE user=$user AND >> pwd=crypt($pass, pwd)"; >> >> The crypt function in postgresql works correctly only with the >> password created by the htpasswd program, but do not work with >> passwords created by PHP. >> >> Best regards, and many thanks. >> > > You need to determine what "crypt" is being used in your PHP: > > http://php.net/manual/en/function.crypt.php > > Once you have that information, you should be able to figure out > what you will need to do. 
> > Cheers, > Ken > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ktm at rice.edu Tue Feb 21 17:02:22 2012 From: ktm at rice.edu (ktm at rice.edu) Date: Tue, 21 Feb 2012 11:02:22 -0600 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <7D43058623FE4D5F84887D5F68709CC0@Desktop> <239288B8417143FE922B11C7222A1C88@Desktop> <20120221160229.GB21114@aart.rice.edu> Message-ID: <20120221170222.GD21114@aart.rice.edu> On Tue, Feb 21, 2012 at 05:45:20PM +0100, Giuseppe Tofoni wrote: > Hi, > > In PHP I used crypt($pass, CRYPT_STD_DES) and I tried with the > following statement > > postgres_query "SELECT user FROM usertable WHERE user=$user AND > pwd=crypt($pass, substr(pwd, 1, 2))"; > > but do not work, some ideas? > > Best regards > > Giuseppe > Are the encrypted passwords the same? If they are, are you certain you are passing the correct password, i.e. stripping line ending correctly? Ken From nginx-forum at nginx.us Tue Feb 21 18:46:03 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 21 Feb 2012 13:46:03 -0500 (EST) Subject: Dynamic Subdomain Configuration In-Reply-To: References: Message-ID: <4fbe06693a796ee010f82733ac244636.NginxMailingListEnglish@forum.nginx.org> Still trying to track down what is causing the following to be logged in the error log: 2012/02/21 10:42:13 [error] 13884#0: *19 testing "/srv/www/users/foobar/wp" existence failed (2: No such file or directory) while logging request, client: X.X.X.X, server: ~^(?.+)\.mydomain\.com$, request: "GET / HTTP/1.1", host: "foobar.pagelines.com" I believe the problem is since I am configuring the server_name and root dynamically: server_name ~^(?.+)\.mydomain\.com$; root /srv/www/users/$user/wp; And obviously the root directory does not exist for foobar. Is there a way to suppress these errors, or maybe check if the root directory exists? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222733,222800#msg-222800 From nginxyz at mail.ru Tue Feb 21 19:11:37 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Tue, 21 Feb 2012 23:11:37 +0400 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <20120221160229.GB21114@aart.rice.edu> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> Message-ID: 21 ??????? 2012, 20:45 ?? Giuseppe Tofoni : > > In PHP I used crypt($pass, CRYPT_STD_DES) and I tried with the > following statement CRYPT_STD_DES is just a constant that indicates whether standard DES crypt() is availlable, so you should not use it as the salt - or if you do, the salt will be "1" (or "0" if standard DES crypt() is not available). You may want to use something like this instead: if (CRYPT_STD_DES == 1) { $salt = substr($username, 0, 2); $encrypted_password = crypt($password, $salt); } You should regenerate your .htpasswd file using this approach because the Apache htpasswd uses a random salt instead of the first two characters of the username, > > postgres_query "SELECT user FROM usertable WHERE user=$user AND > pwd=crypt($pass, substr(pwd, 1, 2))"; You should never use any part of whatever you're encrypting as the salt because it greatly reduces encryption strength / entropy. 
By using the first two characters of the password as the salt, you're revealing them because the salt is stored in the first two characters of the resulting crypt() hash: crypt("test", "te") generates "teH0wLIpW0gyQ" crypt("test", "XX") generates "XXF2OrGyU2fzk" So you may want to use something like this: postgres_query "SELECT user FROM usertable WHERE user=$user AND pwd=crypt($pass, substr($user, 1, 2))"; Max From piotr.sikora at frickle.com Tue Feb 21 19:19:31 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 21 Feb 2012 20:19:31 +0100 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <20120221160229.GB21114@aart.rice.edu> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> Message-ID: <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> Hi, >> postgres_query "SELECT user FROM usertable WHERE user=$user AND >> pwd=crypt($pass, substr(pwd, 1, 2))"; > > You should never use any part of whatever you're encrypting as the salt > because it greatly reduces encryption strength / entropy. By using the > first two characters of the password as the salt, you're revealing them > because the salt is stored in the first two characters of the resulting > crypt() hash: > > crypt("test", "te") generates "teH0wLIpW0gyQ" > crypt("test", "XX") generates "XXF2OrGyU2fzk" > > So you may want to use something like this: > > postgres_query "SELECT user FROM usertable WHERE user=$user AND > pwd=crypt($pass, substr($user, 1, 2))"; Except that "pwd" used in the above snipped is not password, but the hash stored in the database and "pwd=crypt($pass, pwd)" is the correct way to verify that "$pass" would evaluate to "pwd" hash (so that the password is correct). Best regards, Piotr Sikora < piotr.sikora at frickle.com > From gt0057 at gmail.com Tue Feb 21 19:22:11 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Tue, 21 Feb 2012 20:22:11 +0100 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> References: <20120221160229.GB21114@aart.rice.edu> <0B07E57D5566425782EC176CCC2EF7E4@Desktop> <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> Message-ID: Hi, The password is correct, the problem is postgresql vers. 9.0.3 not "nginx", es: authuser=# select crypt('multilab', '1$'), pwd from usertable where user ='multilab' ; crypt | pwd ---------------+--------------- 1$2NVPu8Urs82 | 1$Ln7ocLxd/.k (1 row) pwd =1$Ln7ocLxd/.k salt =1$ PHP calculated and in python crypt.crypt('multilab', pwd[:2] are are correct) Best regards Giuseppe 2012/2/21 Piotr Sikora : > Hi, > > >>> postgres_query ? ?"SELECT user FROM usertable WHERE user=$user AND >>> pwd=crypt($pass, substr(pwd, 1, 2))"; >> >> >> You should never use any part of whatever you're encrypting as the salt >> because it greatly reduces encryption strength / entropy. By using the >> first two characters of the password as the salt, you're revealing them >> because the salt is stored in the first two characters of the resulting >> crypt() hash: >> >> crypt("test", "te") generates "teH0wLIpW0gyQ" >> crypt("test", "XX") generates "XXF2OrGyU2fzk" >> >> So you may want to use something like this: >> >> postgres_query ? ?"SELECT user FROM usertable WHERE user=$user AND >> pwd=crypt($pass, substr($user, 1, 2))"; > > > Except that "pwd" used in the above snipped is not password, but the hash > stored in the database and "pwd=crypt($pass, pwd)" is the correct way to > verify that "$pass" would evaluate to "pwd" hash (so that the password is > correct). 
> > > Best regards, > Piotr Sikora < piotr.sikora at frickle.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginxyz at mail.ru Wed Feb 22 02:03:09 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Wed, 22 Feb 2012 06:03:09 +0400 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> <20120221160229.GB21114@aart.rice.edu> Message-ID: 21 ??????? 2012, 23:22 ?? Giuseppe Tofoni : > > The password is correct, the problem is postgresql vers. 9.0.3 not "nginx", > es: > > authuser=# select crypt('multilab', '1$'), pwd from usertable where > user ='multilab' ; > crypt | pwd > ---------------+--------------- > 1$2NVPu8Urs82 | 1$Ln7ocLxd/.k > (1 row) > > pwd =1$Ln7ocLxd/.k > salt =1$ > PHP calculated and in python crypt.crypt('multilab', pwd[:2] are are correct) No, they are not, because PHP and Python are using invalid salts, despite the fact that they shouldn't. Each value in the 0-63 range is represented by a printable salt character in the "./0-9A-Za-z" range. You are using an invalid salt character ('$'), which the Postgresql crypt() function silently maps to value 0, which is represented by the character '.' in the salt, so your '1$2NVPu8Urs82' hash is actually the result of crypt('multilab', '1.'), but with the original invalid salt '1$' prepended. According to the official PHP documentation, the PHP crypt() function should fail if the salt contains at least one invalid character, but it obviously doesn't, so you should make sure to verify the salt validity before calling the crypt() function. If your users are likely to have usernames that contain characters other than "./0-9A-Za-z", then you should use the Postgresql function gen_salt() instead of substr($user, 1, 2) when setting passwords: postgres_query "UPDATE usertable SET pwd=crypt($pass, gen_salt('des')) WHERE user=$user"; Max From nginx-forum at nginx.us Wed Feb 22 02:58:09 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 21 Feb 2012 21:58:09 -0500 (EST) Subject: Map FastCGI Ports Dynamically Message-ID: <7b93c1529261896b3fd29b7c9a4a80f0.NginxMailingListEnglish@forum.nginx.org> I have a config file that handles wildcard subdomains like: server_name ~^(?.+)\.mydomain\.com$; Then the variable $user contains the subdomain. The problem, is that I also need to dynamically set the port that fastcgi passes PHP work too since each user has their own PHP-FPM pool. fastcgi_pass php1.local.mydomain.com:XXXX; Is there a way to setup a hashmap of something like: user fastcgi-port -------- --------------- bob 9001 john 9002 kelly 9003 So that nginx can do some magic like: set $user_port = map($user); fastcgi_pass php1.local.mydomain.com:$user_port; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222820#msg-222820 From nginx-forum at nginx.us Wed Feb 22 03:51:58 2012 From: nginx-forum at nginx.us (atrus) Date: Tue, 21 Feb 2012 22:51:58 -0500 (EST) Subject: Proxy_cache recursive update. In-Reply-To: References: Message-ID: Thanks Bartenev. I have clear cache of the browser but it still using the old banner ! Atrus. 
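A quick way to tell whether the stale banner is coming out of the nginx cache or out of the browser is to expose $upstream_cache_status in a response header. A minimal sketch only -- the zone name "app_cache" is a placeholder, since the actual proxy_cache configuration for this setup was not posted:

    location / {
        proxy_pass   http://backend;
        proxy_cache  app_cache;
        # HIT / MISS / EXPIRED etc. shows whether this response came from the nginx cache
        add_header   X-Cache-Status  $upstream_cache_status;
    }

If /banner.jpeg still returns X-Cache-Status: HIT after the cache file was removed, the object is being served from some other cache entry or cache path; if it returns MISS and is still the old image, the stale copy is coming from the backend or from the browser.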
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222769,222822#msg-222822 From edho at myconan.net Wed Feb 22 03:53:41 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 22 Feb 2012 10:53:41 +0700 Subject: [PARTIAL SOLVED] Re: Auth user with postgresql In-Reply-To: References: <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> <20120221160229.GB21114@aart.rice.edu> Message-ID: On Wed, Feb 22, 2012 at 9:03 AM, Max wrote: > > No, they are not, because PHP and Python are using invalid salts, despite > the fact that they shouldn't. Each value in the 0-63 range is represented > by a printable salt character in the "./0-9A-Za-z" range. You are using an > invalid salt character ('$'), which the Postgresql crypt() function silently > maps to value 0, which is represented by the character '.' in the salt, so > your '1$2NVPu8Urs82' hash is actually the result of crypt('multilab', '1.'), > but with the original invalid salt '1$' prepended. > > According to the official PHP documentation, the PHP crypt() function > should fail if the salt contains at least one invalid character, but > it obviously doesn't, so you should make sure to verify the salt > validity before calling the crypt() function. > > If your users are likely to have usernames that contain characters > other than "./0-9A-Za-z", then you should use the Postgresql function > gen_salt() instead of substr($user, 1, 2) when setting passwords: > > postgres_query "UPDATE usertable SET pwd=crypt($pass, gen_salt('des')) > WHERE user=$user"; > Don't forget that des password hashing is limited to 8 characters. Anything beyond that is ignored. $ echo '' | php adBh37ptDUT2o $ echo '' | php adBh37ptDUT2o It's better to use something more modern like bcrypt (gen_salt('bf', 8) in postgresql). If you want to hash it in php, import phpass[1] PasswordHash to get the gen_salt equivalent function since php doesn't seem to provide any. [1] http://www.openwall.com/phpass/ -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From nginx-forum at nginx.us Wed Feb 22 04:23:50 2012 From: nginx-forum at nginx.us (mfouwaaz) Date: Tue, 21 Feb 2012 23:23:50 -0500 (EST) Subject: php exits with 502 Bad Gateway In-Reply-To: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4cb5beb6774fd9cffab869eeb970aff7.NginxMailingListEnglish@forum.nginx.org> Hello fbhosted and Edho Arief I enabled php-fpm and things are looking ok for now. I hope it lasts. Thanks again for your help! Fouwaaz Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222625,222826#msg-222826 From caldcv at gmail.com Wed Feb 22 06:32:43 2012 From: caldcv at gmail.com (Chris) Date: Wed, 22 Feb 2012 01:32:43 -0500 Subject: php exits with 502 Bad Gateway In-Reply-To: <4cb5beb6774fd9cffab869eeb970aff7.NginxMailingListEnglish@forum.nginx.org> References: <655852b854ebe381ab0964562176713b.NginxMailingListEnglish@forum.nginx.org> <4cb5beb6774fd9cffab869eeb970aff7.NginxMailingListEnglish@forum.nginx.org> Message-ID: If you feel like playing with reliability, use CentminMod. It requires a clean install of CentOS, run the installer and it installs nginx, MariaDB (100% compatible MySQL database), php-fpm and other goodies. I recommend it. You just choose option 1 for the initial setup and option 2 for the domain. 
PS works really great on vBulletin From nginx-forum at nginx.us Wed Feb 22 07:00:11 2012 From: nginx-forum at nginx.us (feneyer) Date: Wed, 22 Feb 2012 02:00:11 -0500 (EST) Subject: Is this thread bug? Message-ID: hello,everyone: I have to use nginx's muti-thread model. Presently, i am using and modifying it. But i have some problems about nginx's thread processing: (1) in funcion "ngx_worker_thread_cycle", there is such codes which i think is a mistake: if (ngx_event_thread_process_posted(cycle) == NGX_ERROR) { return (ngx_thread_value_t) 1; } if (ngx_event_thread_process_posted(cycle) == NGX_ERROR) { return (ngx_thread_value_t) 1; } ==>why call twice? (2)in function "ngx_event_expire_timers", lock problem as followings: ngx_mutex_lock(ngx_event_timer_mutex); root = ngx_event_timer_rbtree.root; if (root == sentinel) { return; } ==>return without call ngx_mutex_unlock? (3) in function "ngx_epoll_process_events",lock problem too: before the code "call rev->handler(rev);" ,it has call "ngx_mutex_lock(ngx_posted_events_mutex)",but the handler recursively call other functions wich may need the lock("ngx_posted_events_mutex") too,then deadlock happens. I just simply modify it as follows: ngx_mutex_unlock(ngx_posted_events_mutex); //xxx rev->handler(rev); ngx_mutex_lock(ngx_posted_events_mutex); //xxx ===>whether this modifying will bring on other unconspicuous problems? Is there any solutions for nginx?s muti-threads using?Is there any plan to support muti-threads?about when? I am looking forward to your reply. Thanks a lot. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222829,222829#msg-222829 From ne at vbart.ru Wed Feb 22 08:05:56 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 22 Feb 2012 12:05:56 +0400 Subject: Map FastCGI Ports Dynamically In-Reply-To: <7b93c1529261896b3fd29b7c9a4a80f0.NginxMailingListEnglish@forum.nginx.org> References: <7b93c1529261896b3fd29b7c9a4a80f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201202221205.57126.ne@vbart.ru> On Wednesday 22 February 2012 06:58:09 justin wrote: > I have a config file that handles wildcard subdomains like: > > server_name ~^(?.+)\.mydomain\.com$; > > Then the variable $user contains the subdomain. The problem, is that I > also need to dynamically set the port that fastcgi passes PHP work too > since each user has their own PHP-FPM pool. > > fastcgi_pass php1.local.mydomain.com:XXXX; > > Is there a way to setup a hashmap of something like: > > user fastcgi-port > -------- --------------- > bob 9001 > john 9002 > kelly 9003 > > So that nginx can do some magic like: > > set $user_port = map($user); > > fastcgi_pass php1.local.mydomain.com:$user_port; > Do you know about the "map" directive? http://wiki.nginx.org/HttpMapModule wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Feb 22 08:55:33 2012 From: nginx-forum at nginx.us (piotr.pawlowski) Date: Wed, 22 Feb 2012 03:55:33 -0500 (EST) Subject: Stub_status explenation needed In-Reply-To: References: Message-ID: And everything is clear now, thank you Maxim ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222771,222834#msg-222834 From nginx-forum at nginx.us Wed Feb 22 09:24:36 2012 From: nginx-forum at nginx.us (zealot83) Date: Wed, 22 Feb 2012 04:24:36 -0500 (EST) Subject: Processing Chunked request/response Message-ID: <283e0006fda3767992a45bea7c51caf8.NginxMailingListEnglish@forum.nginx.org> Dear all, I heard that to process chunked request or response extra module or patch is necessary. 
I've met this log trying to upload a video file to nginx server: client sent "Transfer-Encoding: chunked" header while reading client request headers, client: xxxx., server: xxxx, request: "POST /xxx/xxx/xxx? HTTP/1.1", host: "xxx" And there is no other error log but the file in not uploaded. Is this problem able to be resolved using [https://github.com/agentzh/chunkin-nginx-module]? And about the chunked response, I wanna receive chunked streaming response on a nginx server. I found a patch by Igor on [http://www.ruby-forum.com/topic/152435]. But the version is too low. Is there any patch or module for higher version? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222835,222835#msg-222835 From nginx-forum at nginx.us Wed Feb 22 09:31:41 2012 From: nginx-forum at nginx.us (justin) Date: Wed, 22 Feb 2012 04:31:41 -0500 (EST) Subject: Map FastCGI Ports Dynamically In-Reply-To: <201202221205.57126.ne@vbart.ru> References: <201202221205.57126.ne@vbart.ru> Message-ID: Valentin, Yeah, but failing to put the pieces together. How would I use map to build a sort of dynamic table that stored username, port combos? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222836#msg-222836 From ne at vbart.ru Wed Feb 22 10:18:53 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 22 Feb 2012 14:18:53 +0400 Subject: Map FastCGI Ports Dynamically In-Reply-To: References: <201202221205.57126.ne@vbart.ru> Message-ID: <201202221418.53319.ne@vbart.ru> On Wednesday 22 February 2012 13:31:41 justin wrote: > Valentin, > > Yeah, but failing to put the pieces together. How would I use map to > build a sort of dynamic table that stored username, port combos? > server_name ~^(?.+)\.mydomain\.com$; > user fastcgi-port > -------- --------------- > bob 9001 > john 9002 > kelly 9003 map $user $fastcgi_port { default ''; bob 9001; john 9002; kelly 9003; } Did you read documentation? http://wiki.nginx.org/HttpMapModule wbr, Valentin V. Bartenev From c.kworr at gmail.com Wed Feb 22 10:04:18 2012 From: c.kworr at gmail.com (Volodymyr Kostyrko) Date: Wed, 22 Feb 2012 12:04:18 +0200 Subject: Map FastCGI Ports Dynamically In-Reply-To: References: <201202221205.57126.ne@vbart.ru> Message-ID: <4F44BDA2.3020605@gmail.com> justin wrote: > Valentin, > > Yeah, but failing to put the pieces together. How would I use map to > build a sort of dynamic table that stored username, port combos? Maybe sticking to unix sockets would be easier? Unix sockets are addressed by socket file name in base system and have no port so you would connect to something like: fastcgi_pass unix:/home/$user/www/.fastcgi.php.socket; or fastcgi_pass unix:/tmp/.fastcgi.$user.php.socket; Unix sockets are also known as faster and secure way for managing local connections. -- Sphinx of black quartz judge my vow. From mp3geek at gmail.com Wed Feb 22 13:31:31 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Thu, 23 Feb 2012 02:31:31 +1300 Subject: Using nginx 1.1 with the intel compiler Message-ID: Not sure what I'm doing wrong here.. 
[root at bob:~/trunk]# export | grep cc CC=icc LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 [root at bob:~/trunk]# ./configure checking for OS + Linux 3.2.5 i686 checking for C compiler ... not found ./configure: error: C compiler icc is not found Even if I specify it, [root at bob:~/trunk]# ./configure --with-cc=/opt/intel/bin/icc checking for OS + Linux 3.2.5 i686 checking for C compiler ... not found ./configure: error: C compiler /opt/intel/bin/icc is not found And just specifying "icc" instead" [root at bob:~/trunk]# ./configure --with-cc=icc checking for OS + Linux 3.2.5 i686 checking for C compiler ... not found ./configure: error: C compiler icc is not found hmm still not found, its in the path: [root at bob:~/trunk]# icc --version icc (ICC) 12.1.3 20120212 Copyright (C) 1985-2012 Intel Corporation. All rights reserved. From mdounin at mdounin.ru Wed Feb 22 13:45:57 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Feb 2012 17:45:57 +0400 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: Message-ID: <20120222134556.GV67687@mdounin.ru> Hello! On Thu, Feb 23, 2012 at 02:31:31AM +1300, Ryan Brown wrote: > Not sure what I'm doing wrong here.. > > [root at bob:~/trunk]# export | grep cc > CC=icc > LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 > LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 > > [root at bob:~/trunk]# ./configure > checking for OS > + Linux 3.2.5 i686 > checking for C compiler ... not found > > ./configure: error: C compiler icc is not found > > Even if I specify it, > > [root at bob:~/trunk]# ./configure --with-cc=/opt/intel/bin/icc > checking for OS > + Linux 3.2.5 i686 > checking for C compiler ... not found > > ./configure: error: C compiler /opt/intel/bin/icc is not found > > And just specifying "icc" instead" > > [root at bob:~/trunk]# ./configure --with-cc=icc > checking for OS > + Linux 3.2.5 i686 > checking for C compiler ... 
not found > > ./configure: error: C compiler icc is not found > > hmm still not found, its in the path: > > [root at bob:~/trunk]# icc --version > icc (ICC) 12.1.3 20120212 > Copyright (C) 1985-2012 Intel Corporation. All rights reserved. Try looking into objs/autoconf.err, it will have exact reason for the "not found" verdict. Most likely it fails to compile code for some reason, the autoconf.err file should have details. Maxim Dounin From mdounin at mdounin.ru Wed Feb 22 13:53:10 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Feb 2012 17:53:10 +0400 Subject: Is this thread bug? In-Reply-To: References: Message-ID: <20120222135310.GW67687@mdounin.ru> Hello! On Wed, Feb 22, 2012 at 02:00:11AM -0500, feneyer wrote: > hello,everyone: > I have to use nginx's muti-thread model. Presently, i am using and > modifying it. But i have some problems about nginx's thread processing: [...] Threads support is obsolete, broken and should be removed. Just ignore it. Maxim Dounin From mp3geek at gmail.com Wed Feb 22 14:02:12 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Thu, 23 Feb 2012 03:02:12 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: <20120222134556.GV67687@mdounin.ru> References: <20120222134556.GV67687@mdounin.ru> Message-ID: I'm guessing its similar to the openssl compile, which I used http://software.intel.com/en-us/forums/showthread.php?t=101266 [root at bob:trunk/objs]# locate sys/types.h /usr/include/i386-linux-gnu/sys/types.h Not sure how to pass to nginx to use, (this fails) ./configure --with-cc=/opt/intel/bin/icc --with-cc-opt=-I/usr/include/i386-linux-gnu/ ---------------------------------------- checking for C compiler objs/autotest.c(2): catastrophic error: cannot open source file "sys/types.h" #include ^ compilation aborted for objs/autotest.c (code 4) ---------- #include int main() { ; return 0; } ---------- icc -o objs/autotest objs/autotest.c ---------- On Thu, Feb 23, 2012 at 2:45 AM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 23, 2012 at 02:31:31AM +1300, Ryan Brown wrote: > >> Not sure what I'm doing wrong here.. >> >> [root at bob:~/trunk]# export | grep cc >> CC=icc >> LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 >> LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 >> >> [root at bob:~/trunk]# ./configure >> checking for OS >> ?+ Linux 3.2.5 i686 >> checking for C compiler ... not found >> >> ./configure: error: C compiler icc is not found >> >> Even if I specify it, >> >> [root at bob:~/trunk]# ./configure --with-cc=/opt/intel/bin/icc >> checking for OS >> ?+ Linux 3.2.5 i686 >> checking for C compiler ... 
not found >> >> ./configure: error: C compiler /opt/intel/bin/icc is not found >> >> And just specifying "icc" instead" >> >> [root at bob:~/trunk]# ./configure --with-cc=icc >> checking for OS >> ?+ Linux 3.2.5 i686 >> checking for C compiler ... not found >> >> ./configure: error: C compiler icc is not found >> >> hmm still not found, its in the path: >> >> [root at bob:~/trunk]# icc --version >> icc (ICC) 12.1.3 20120212 >> Copyright (C) 1985-2012 Intel Corporation. ?All rights reserved. > > Try looking into objs/autoconf.err, it will have exact reason for > the "not found" verdict. ?Most likely it fails to compile code for > some reason, the autoconf.err file should have details. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From simone.fumagalli at contactlab.com Wed Feb 22 16:05:32 2012 From: simone.fumagalli at contactlab.com (Simone Fumagalli) Date: Wed, 22 Feb 2012 17:05:32 +0100 Subject: Order of HTTP headers change cache behaviour Message-ID: <4F45124C.2080603@contactlab.com> Hello. Looking at my log I've noticed that some pages were not cached. After a bit of debugging I've noticed that the order in which the HTTP header are returned from Apache change how NGINX this save the page in the cache So a page with this code is *NOT* cached if I comment/delete the line header('Expires: Wed, 11 Jan 1984 05:00:00 GMT'); the page is cached. Even if I move header("X-Accel-Expires: 600") before header('Expires: Wed, 11 Jan 1984 05:00:00 GMT'); the page is cached Same thing happen for the header Cache-Control. If is before X-Accel-Expires the page is not cached if is after it is ! The conf for NGINX is very standard and proxy_cache_valid is not specified. Is this correct ? Where am I wrong ? Thanks -- Simone From mdounin at mdounin.ru Wed Feb 22 16:28:28 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Feb 2012 20:28:28 +0400 Subject: Order of HTTP headers change cache behaviour In-Reply-To: <4F45124C.2080603@contactlab.com> References: <4F45124C.2080603@contactlab.com> Message-ID: <20120222162828.GB67687@mdounin.ru> Hello! On Wed, Feb 22, 2012 at 05:05:32PM +0100, Simone Fumagalli wrote: > Hello. > > Looking at my log I've noticed that some pages were not cached. After a bit of debugging I've noticed that the order in which the HTTP header are returned from Apache change how NGINX this save the page in the cache > > So a page with this code is *NOT* cached > > > header('Content-Type: text/html; charset=UTF-8'); > header('Last-Modified: Wed, 22 Feb 2012 14:44:11 GMT'); > header('Expires: Wed, 11 Jan 1984 05:00:00 GMT'); > header("X-Accel-Expires: 600"); > header('Cache-Control: no-cache, must-revalidate, max-age=0'); > header('Pragma: no-cache'); > > Echo "Hello World !"; > > ?> > > if I comment/delete the line header('Expires: Wed, 11 Jan 1984 05:00:00 GMT'); the page is cached. > > Even if I move header("X-Accel-Expires: 600") before header('Expires: Wed, 11 Jan 1984 05:00:00 GMT'); the page is cached > Same thing happen for the header Cache-Control. If is before X-Accel-Expires the page is not cached if is after it is ! > > The conf for NGINX is very standard and proxy_cache_valid is not specified. > > Is this correct ? Where am I wrong ? This probably should be counted as a bug (or at least misfeature). Here are what happens: if the Expires header comes first, it disables caching due to being set to date in the past. 
The X-Accel-Expires header which comes later can't re-enable caching. On the other hand, if X-Accel-Expires comes first, it will set cache expiration time. The Expires and Cache-Control headers later will be just ignored as cache expiration time is already set. As a workaround you may want to always sent X-Accel-Expires first, or explicitly ignore other headers with proxy_ignore_headers. Maxim Dounin From simone.fumagalli at contactlab.com Wed Feb 22 16:39:22 2012 From: simone.fumagalli at contactlab.com (Simone Fumagalli) Date: Wed, 22 Feb 2012 17:39:22 +0100 Subject: Order of HTTP headers change cache behaviour In-Reply-To: <20120222162828.GB67687@mdounin.ru> References: <4F45124C.2080603@contactlab.com> <20120222162828.GB67687@mdounin.ru> Message-ID: <4F451A3A.2000908@contactlab.com> On 02/22/2012 05:28 PM, Maxim Dounin wrote: > This probably should be counted as a bug (or at least misfeature). Ciao, do you think you will fix/change this behaviour ? Just to know if and how change my configurations thanks -- Simone From mdounin at mdounin.ru Wed Feb 22 17:04:22 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Feb 2012 21:04:22 +0400 Subject: Order of HTTP headers change cache behaviour In-Reply-To: <4F451A3A.2000908@contactlab.com> References: <4F45124C.2080603@contactlab.com> <20120222162828.GB67687@mdounin.ru> <4F451A3A.2000908@contactlab.com> Message-ID: <20120222170422.GD67687@mdounin.ru> Hello! On Wed, Feb 22, 2012 at 05:39:22PM +0100, Simone Fumagalli wrote: > On 02/22/2012 05:28 PM, Maxim Dounin wrote: > > This probably should be counted as a bug (or at least misfeature). > > Ciao, do you think you will fix/change this behaviour ? > Just to know if and how change my configurations Eventually this will be fixed, but there are no immediate plans to and it's relatively low priority. Maxim Dounin From nginx-forum at nginx.us Wed Feb 22 17:56:12 2012 From: nginx-forum at nginx.us (justin) Date: Wed, 22 Feb 2012 12:56:12 -0500 (EST) Subject: Map FastCGI Ports Dynamically In-Reply-To: <201202221418.53319.ne@vbart.ru> References: <201202221418.53319.ne@vbart.ru> Message-ID: <766162692ca1d75031c9a1a913560551.NginxMailingListEnglish@forum.nginx.org> Volodymyr, Thanks, but the fastscgi backend is actually a different server, so have to use TCP. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222863#msg-222863 From mit at stagename.com Wed Feb 22 18:12:04 2012 From: mit at stagename.com (Mit Rowe) Date: Wed, 22 Feb 2012 13:12:04 -0500 Subject: Release Schedule Message-ID: I've become very interested in the 1.1 branch recently, particularly it's http/1.1 support in proxy mode. Is there a rough timeline already proposed for the promotion of the 1.1 branch to 'production'? -- Will 'Mit' Rowe Stagename* *1-866-326-3098 mit at stagename.com www.stagename.com Twitter: @stagename *The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of this information by persons or entities other than the intended recipient is prohibited. If you received this transmission in error, please contact the sender and delete all material contained herein from your computer.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cyril.lavier at davromaniak.eu Wed Feb 22 18:16:09 2012 From: cyril.lavier at davromaniak.eu (Cyril Lavier) Date: Wed, 22 Feb 2012 19:16:09 +0100 Subject: Release Schedule In-Reply-To: References: Message-ID: <4F4530E9.3090809@davromaniak.eu> On 02/22/2012 07:12 PM, Mit Rowe wrote: > I've become very interested in the 1.1 branch recently, particularly > it's http/1.1 support in proxy mode. > > Is there a rough timeline already proposed for the promotion of the > 1.1 branch to 'production'? > > > > -- > Will 'Mit' Rowe > Stagename/ > /1-866-326-3098 > mit at stagename.com > www.stagename.com > Twitter: @stagename > > /The information transmitted is intended only for the person or entity > to which it is addressed and may contain confidential and/or > privileged material. Any review, retransmission, dissemination or > other use of this information by persons or entities other than the > intended recipient is prohibited. If you received this transmission in > error, please contact the sender and delete all material contained > herein from your computer./ > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi Mit. A roadmap is available on the trac : http://trac.nginx.org/nginx/roadmap We can have it in iCalendar format, so I added the URL to lightning in Thunderbird ^^ By the way, as the 1.1.17 is due in 3 weeks, I think you can start on testing it in a dev/pre-production environment. Thanks. -- Cyril "Davromaniak" Lavier KeyID 59E9A881 http://www.davromaniak.eu From ne at vbart.ru Wed Feb 22 18:42:30 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 22 Feb 2012 22:42:30 +0400 Subject: Release Schedule In-Reply-To: References: Message-ID: <201202222242.30554.ne@vbart.ru> On Wednesday 22 February 2012 22:12:04 Mit Rowe wrote: > I've become very interested in the 1.1 branch recently, particularly it's > http/1.1 support in proxy mode. > > Is there a rough timeline already proposed for the promotion of the 1.1 > branch to 'production'? Don't hesitate to use development branch in production. The main difference between the branches is about API and behavior stability, not reliability. Moreover, most of bugfixes hit devel branch slightly earlier then stable. wbr, Valentin V. Bartenev From ne at vbart.ru Wed Feb 22 18:49:09 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 22 Feb 2012 22:49:09 +0400 Subject: Release Schedule In-Reply-To: <201202222242.30554.ne@vbart.ru> References: <201202222242.30554.ne@vbart.ru> Message-ID: <201202222249.09990.ne@vbart.ru> On Wednesday 22 February 2012 22:42:30 Valentin V. Bartenev wrote: > On Wednesday 22 February 2012 22:12:04 Mit Rowe wrote: > > I've become very interested in the 1.1 branch recently, particularly it's > > http/1.1 support in proxy mode. > > > > Is there a rough timeline already proposed for the promotion of the 1.1 > > branch to 'production'? > > Don't hesitate to use development branch in production. The main difference > between the branches is about API and behavior stability, not reliability. > > Moreover, most of bugfixes hit devel branch slightly earlier then stable. > btw, even svn "trunk" we are trying to maintain stable as a rock. ;) wbr, Valentin V. 
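In general, "recv() failed (104: Connection reset by peer) while reading response header from upstream" means the PHP worker closed the connection before sending any response (typically a crashed or killed php-fpm child, e.g. a segfault or a memory_limit kill), so php-fpm's own error log is the first place to look when phpinfo() works but a heavier application returns 502. On the nginx side the PHP handler is usually just a block like the sketch below; the socket path and document root are illustrative, not taken from this server:

    location ~ \.php$ {
        # must match the "listen" setting of the php-fpm pool exactly
        # (either a TCP address such as 127.0.0.1:9000 or a unix socket)
        fastcgi_pass   unix:/tmp/php5-fpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

If that part matches and the 502 only appears for WordPress or Joomla pages, raising the pool's memory_limit and re-checking the php-fpm log on the next request is a reasonable next step.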
Bartenev From nginx-forum at nginx.us Wed Feb 22 20:16:15 2012 From: nginx-forum at nginx.us (double) Date: Wed, 22 Feb 2012 15:16:15 -0500 (EST) Subject: limit_req does not work Message-ID: <2445d1adf885d741a5e6f2924f386abd.NginxMailingListEnglish@forum.nginx.org> Nginx 1.1.15 does not block request if I use "try_files". This is my markup (simplified): nginx.conf: http { limit_req_zone $binary_remote_addr zone=zone1:32m rate=2r/s; limit_req_zone $binary_remote_addr zone=zone2:32m rate=12r/m; server { location @backend { limit_req zone=zone1 burst=10; limit_req zone=zone2 burst=100 nodelay; [...] fastcgi_pass unix:/var/run/fastcgi/dispatch.sock; [...] } location / { try_files $uri @backend; expires max; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222872,222872#msg-222872 From nginx-forum at nginx.us Wed Feb 22 20:32:54 2012 From: nginx-forum at nginx.us (JerW) Date: Wed, 22 Feb 2012 15:32:54 -0500 (EST) Subject: 502 Bad Gateway - Wordpress Message-ID: Greetings, I'm having another issue with my nginx + php-fpm + mysql install. I've been having a lot of issues today after fixing my previous one and I've finally decided that I am stumped on this one. Ok, so I am trying to install WordPress on my server that I have nginx + php-fpm installed on but every time I navigate to http://mydomain.com/wp-admin/install.php I recieve an error: 502 Bad Gateway. I checked out my error logs and I saw this: 2012/02/22 10:02:54 [error] 2801#0: *114 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: m.y.i.p, server: mydomain.com, request: "GET /wp-admin/install.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "mydomain.com" I have checked to see if php-fpm is running and is listening on port 9000 and it is in fact running and listening. I can view phpinfo() pages just fine. I did have my php-fpm set up to listen on a unix socket(?) and it was doing that so I switched back to TCP and it's still doing it, I receive the same error for both of them only with the different upstream. Any help would be greatly appreciate, I've had such troubles today.. Thanks, Jeremiah! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222873,222873#msg-222873 From nginx-forum at nginx.us Wed Feb 22 21:16:33 2012 From: nginx-forum at nginx.us (justin) Date: Wed, 22 Feb 2012 16:16:33 -0500 (EST) Subject: Map FastCGI Ports Dynamically In-Reply-To: <4F44BDA2.3020605@gmail.com> References: <4F44BDA2.3020605@gmail.com> Message-ID: Valentin: Here is a snippet of the config: server { listen 80; map $user $user_fastcgi_port { default 9000; bob 9001; john 9002; kelly 9003; } server_name $user\.my\.domain\.com$; root /srv/www/users/$user/wp; index index.php; access_log /var/log/nginx/users/$user.access.log; error_log /var/log/nginx/error.log; include /etc/nginx/excludes.conf; include /etc/nginx/wordpress.conf; include /etc/nginx/expires.conf; } I am getting the following error though: nginx: [emerg] "map" directive is not allowed here in /etc/nginx/conf.d/core.conf:4 nginx: configuration file /etc/nginx/nginx.conf test failed Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222874#msg-222874 From nginx-forum at nginx.us Wed Feb 22 21:25:05 2012 From: nginx-forum at nginx.us (JerW) Date: Wed, 22 Feb 2012 16:25:05 -0500 (EST) Subject: 502 Bad Gateway - Wordpress In-Reply-To: References: Message-ID: <3f1d6698f3600102a1149e1be4c65c47.NginxMailingListEnglish@forum.nginx.org> I have since changed AGAIN from using the TCP port of 9000 to the unix socket. Here is the new error line: 2012/02/22 11:20:32 [error] 2504#0: *26 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: m.y.i.p, server: mydomain.com, request: "GET /wp-admin/install.php HTTP/1.1", upstream: "fastcgi://unix:/tmp/php5-fpm.sock:", host: "mydomain.com" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222873,222875#msg-222875 From mp3geek at gmail.com Wed Feb 22 21:36:12 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Thu, 23 Feb 2012 10:36:12 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> Message-ID: Okay, manage to get it to compile, make[1]: Entering directory `/root/trunk' /opt/intel/bin/icc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_string.o \ src/core/ngx_string.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type state = 0; ^ compilation aborted for src/core/ngx_string.c (code 2) make[1]: *** [objs/src/core/ngx_string.o] Error 2 On Thu, Feb 23, 2012 at 3:02 AM, Ryan Brown wrote: > I'm guessing its similar to the openssl compile, which I used > http://software.intel.com/en-us/forums/showthread.php?t=101266 > > [root at bob:trunk/objs]# locate sys/types.h > /usr/include/i386-linux-gnu/sys/types.h > > Not sure how to pass to nginx to use, (this fails) > > ./configure --with-cc=/opt/intel/bin/icc > --with-cc-opt=-I/usr/include/i386-linux-gnu/ > > > ---------------------------------------- > checking for C compiler > > objs/autotest.c(2): catastrophic error: cannot open source file "sys/types.h" > ?#include > ? ? ? ? ? ? ? ? ? ? ? ?^ > > compilation aborted for objs/autotest.c (code 4) > ---------- > > #include > > > > int main() { > ? ?; > ? ?return 0; > } > > ---------- > icc -o objs/autotest objs/autotest.c > ---------- > > > > On Thu, Feb 23, 2012 at 2:45 AM, Maxim Dounin wrote: >> Hello! 
>> >> On Thu, Feb 23, 2012 at 02:31:31AM +1300, Ryan Brown wrote: >> >>> Not sure what I'm doing wrong here.. >>> >>> [root at bob:~/trunk]# export | grep cc >>> CC=icc >>> LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 >>> LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 >>> >>> [root at bob:~/trunk]# ./configure >>> checking for OS >>> ?+ Linux 3.2.5 i686 >>> checking for C compiler ... not found >>> >>> ./configure: error: C compiler icc is not found >>> >>> Even if I specify it, >>> >>> [root at bob:~/trunk]# ./configure --with-cc=/opt/intel/bin/icc >>> checking for OS >>> ?+ Linux 3.2.5 i686 >>> checking for C compiler ... not found >>> >>> ./configure: error: C compiler /opt/intel/bin/icc is not found >>> >>> And just specifying "icc" instead" >>> >>> [root at bob:~/trunk]# ./configure --with-cc=icc >>> checking for OS >>> ?+ Linux 3.2.5 i686 >>> checking for C compiler ... not found >>> >>> ./configure: error: C compiler icc is not found >>> >>> hmm still not found, its in the path: >>> >>> [root at bob:~/trunk]# icc --version >>> icc (ICC) 12.1.3 20120212 >>> Copyright (C) 1985-2012 Intel Corporation. ?All rights reserved. >> >> Try looking into objs/autoconf.err, it will have exact reason for >> the "not found" verdict. ?Most likely it fails to compile code for >> some reason, the autoconf.err file should have details. >> >> Maxim Dounin >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Feb 22 21:41:01 2012 From: nginx-forum at nginx.us (JerW) Date: Wed, 22 Feb 2012 16:41:01 -0500 (EST) Subject: 502 Bad Gateway - Wordpress In-Reply-To: References: Message-ID: I am also receiving this error on another vhost that I have set up only that one is running Joomla. I am receiving the same exact error with this one only with the different domain name. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222873,222878#msg-222878 From ne at vbart.ru Wed Feb 22 22:06:08 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Thu, 23 Feb 2012 02:06:08 +0400 Subject: Map FastCGI Ports Dynamically In-Reply-To: References: <4F44BDA2.3020605@gmail.com> Message-ID: <201202230206.08826.ne@vbart.ru> On Thursday 23 February 2012 01:16:33 justin wrote: > Valentin: > > Here is a snippet of the config: > > server { > listen 80; > > map $user $user_fastcgi_port { > default 9000; > bob 9001; > john 9002; > kelly 9003; > } > > server_name $user\.my\.domain\.com$; > > root /srv/www/users/$user/wp; > > index index.php; > > access_log /var/log/nginx/users/$user.access.log; > error_log /var/log/nginx/error.log; > > include /etc/nginx/excludes.conf; > include /etc/nginx/wordpress.conf; > include /etc/nginx/expires.conf; > } > > I am getting the following error though: > > nginx: [emerg] "map" directive is not allowed here in > /etc/nginx/conf.d/core.conf:4 > nginx: configuration file /etc/nginx/nginx.conf test failed > Yeap, if you look at the documentation, you will see: map - syntax: map $var1 $var2 { ... } - default: none - context: http Please, notice that "context: http", therefore you must define "map" block in the "http" context, not the "server" one. And error log message is trying to give a hint. http://wiki.nginx.org/HttpMapModule wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Feb 22 22:24:55 2012 From: nginx-forum at nginx.us (justin) Date: Wed, 22 Feb 2012 17:24:55 -0500 (EST) Subject: Map FastCGI Ports Dynamically In-Reply-To: <201202230206.08826.ne@vbart.ru> References: <201202230206.08826.ne@vbart.ru> Message-ID: <28ccbf289d7d86d252680e1b07a690f6.NginxMailingListEnglish@forum.nginx.org> Valentin, Would you have a bit of time to assist personally, would only take you less than an hour and we can paypal for your time? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222881#msg-222881 From gt0057 at gmail.com Wed Feb 22 22:33:05 2012 From: gt0057 at gmail.com (Giuseppe Tofoni) Date: Wed, 22 Feb 2012 23:33:05 +0100 Subject: [SOLVED] Re: Auth user with postgresql In-Reply-To: References: <4A0A9025BC6E4249B02EC84CB122B5EC@Desktop> <20120221160229.GB21114@aart.rice.edu> Message-ID: Hi all, Thanks to everyone who helped me to solve the problem. I tried with these three solutions and they worked perfectly. PHP and Postgresql $pass =crypt($password, '$1$') UPDATE usertable SET pwd='$pass' WHERE user='$user'; Postgresql only UPDATE usertable SET pwd=crypt('mypass', gen_salt('md5')) WHERE user='username'; Nginx postgres_query "SELECT user FROM usertable WHERE user=$user AND pwd=crypt($pass, pwd)"; Best Regards Giuseppe 2012/2/22 Edho Arief : > On Wed, Feb 22, 2012 at 9:03 AM, Max wrote: >> >> No, they are not, because PHP and Python are using invalid salts, despite >> the fact that they shouldn't. Each value in the 0-63 range is represented >> by a printable salt character in the "./0-9A-Za-z" range. You are using an >> invalid salt character ('$'), which the Postgresql crypt() function silently >> maps to value 0, which is represented by the character '.' in the salt, so >> your '1$2NVPu8Urs82' hash is actually the result of crypt('multilab', '1.'), >> but with the original invalid salt '1$' prepended. >> >> According to the official PHP documentation, the PHP crypt() function >> should fail if the salt contains at least one invalid character, but >> it obviously doesn't, so you should make sure to verify the salt >> validity before calling the crypt() function. 
>> >> If your users are likely to have usernames that contain characters >> other than "./0-9A-Za-z", then you should use the Postgresql function >> gen_salt() instead of substr($user, 1, 2) when setting passwords: >> >> postgres_query "UPDATE usertable SET pwd=crypt($pass, gen_salt('des')) >> WHERE user=$user"; >> > > Don't forget that des password hashing is limited to 8 characters. > Anything beyond that is ignored. > > $ echo '' | php > adBh37ptDUT2o > $ echo '' | php > adBh37ptDUT2o > > It's better to use something more modern like bcrypt (gen_salt('bf', > 8) in postgresql). If you want to hash it in php, import phpass[1] > PasswordHash to get the gen_salt equivalent function since php doesn't > seem to provide any. > > [1] http://www.openwall.com/phpass/ > > -- > O< ascii ribbon campaign - stop html mail - www.asciiribbon.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ne at vbart.ru Thu Feb 23 00:53:52 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 23 Feb 2012 04:53:52 +0400 Subject: Map FastCGI Ports Dynamically In-Reply-To: <28ccbf289d7d86d252680e1b07a690f6.NginxMailingListEnglish@forum.nginx.org> References: <201202230206.08826.ne@vbart.ru> <28ccbf289d7d86d252680e1b07a690f6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201202230453.52368.ne@vbart.ru> On Thursday 23 February 2012 02:24:55 justin wrote: > Valentin, > > Would you have a bit of time to assist personally, would only take you > less than an hour and we can paypal for your time? > You could make an inquiry by email: nginx-inquiries at nginx.com wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Feb 23 01:26:07 2012 From: nginx-forum at nginx.us (feneyer) Date: Wed, 22 Feb 2012 20:26:07 -0500 (EST) Subject: Is this thread bug? In-Reply-To: References: Message-ID: <5f15e42dcc04564e1f9fe8c8381342e1.NginxMailingListEnglish@forum.nginx.org> Hello, Maxim : I do need to use nginx's muti-thread model, is there any suggestion for modifying? Looking forward. feneyer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222829,222886#msg-222886 From nginx-forum at nginx.us Thu Feb 23 01:28:04 2012 From: nginx-forum at nginx.us (justin) Date: Wed, 22 Feb 2012 20:28:04 -0500 (EST) Subject: Map FastCGI Ports Dynamically In-Reply-To: <201202230453.52368.ne@vbart.ru> References: <201202230453.52368.ne@vbart.ru> Message-ID: <0e194af93055a357db12d3acbd1b065b.NginxMailingListEnglish@forum.nginx.org> Valentin, Ok, well I am almost there. The problem is since we need to define the map block outside of server, the $user variable is not defined yet. I.E. http { # $user is not defined yet, it comes from inside server {} map $user $user_fastcgi_port { default 9000; justin 9001; simon 9002; } } server { # Set's $user server_name ~^(?.+)\.my\.domain\.com$; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222887#msg-222887 From nginx-forum at nginx.us Thu Feb 23 05:45:02 2012 From: nginx-forum at nginx.us (junius wang) Date: Thu, 23 Feb 2012 00:45:02 -0500 (EST) Subject: Rewriting uri as querystring using nginx? Message-ID: <97a21bcbf3c43747fdfd4763bb05021a.NginxMailingListEnglish@forum.nginx.org> How do I rewrite URIs of the form http://domain1/one/two?three=four to http://domain2?path=http%3A%2F%2Fdomain1%2Fone%2Ftwo%3Fthree%3Dfour using nginx? 
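(Aside added for illustration, not part of the original post: nginx does not percent-encode variables that are expanded in "rewrite" or "return", so getting the fully escaped form shown above needs outside help. A minimal sketch of one way to do it, assuming nginx is built with --with-http_perl_module and the URI::Escape CPAN module is installed; the domain names and the $escaped_source variable are illustrative only.

# perl_set belongs in the http{} context
perl_set $escaped_source '
    sub {
        my $r = shift;
        require URI::Escape;
        # rebuild the original URL and escape it so it survives as a
        # single "path" query argument on the target host
        my $url = "http://" . $r->header_in("Host") . $r->uri;
        $url .= "?" . $r->args if $r->args;
        return URI::Escape::uri_escape($url);
    }
';

server {
    listen 80;
    server_name domain1;

    location / {
        return 301 http://domain2/?path=$escaped_source;
    }
}

The replies below take the simpler route of passing the request URI through without re-escaping it.)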
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222889,222889#msg-222889 From edho at myconan.net Thu Feb 23 07:10:33 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 23 Feb 2012 14:10:33 +0700 Subject: Rewriting uri as querystring using nginx? In-Reply-To: <97a21bcbf3c43747fdfd4763bb05021a.NginxMailingListEnglish@forum.nginx.org> References: <97a21bcbf3c43747fdfd4763bb05021a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4F45E669.2090300@myconan.net> On 2012-02-23 12:45, junius wang wrote: > How do I rewrite URIs of the form > > http://domain1/one/two?three=four > > to > > http://domain2?path=http%3A%2F%2Fdomain1%2Fone%2Ftwo%3Fthree%3Dfour > Perhaps something like this: rewrite ^ http://domain2?path=http://domain1$request_uri$is_args$args permanent; Not sure about the escapes though. From ne at vbart.ru Thu Feb 23 09:53:58 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Thu, 23 Feb 2012 13:53:58 +0400 Subject: Map FastCGI Ports Dynamically In-Reply-To: <0e194af93055a357db12d3acbd1b065b.NginxMailingListEnglish@forum.nginx.org> References: <201202230453.52368.ne@vbart.ru> <0e194af93055a357db12d3acbd1b065b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201202231353.59170.ne@vbart.ru> On Thursday 23 February 2012 05:28:04 justin wrote: > Valentin, > > Ok, well I am almost there. The problem is since we need to define the > map block outside of server, the $user variable is not defined yet. > I.E. > > http { > # $user is not defined yet, it comes from inside server {} > map $user $user_fastcgi_port { > default 9000; > justin 9001; > simon 9002; > } > } > > server { > # Set's $user > server_name ~^(?.+)\.my\.domain\.com$; > } > Did you try, or it's just your guess? There's no "variable scope" concept in nginx configuration. And it mostly has declarative nature, therefore, the order of most directives is not important, unless it is specifically documented. btw, your example config also wrong. If look at the documentation, you will see: server syntax: server { ... } default: ? context: http http://nginx.org/en/docs/http/ngx_http_core_module.html#server wbr, Valentin V. Bartenev From c.kworr at gmail.com Thu Feb 23 08:02:30 2012 From: c.kworr at gmail.com (Volodymyr Kostyrko) Date: Thu, 23 Feb 2012 10:02:30 +0200 Subject: Map FastCGI Ports Dynamically In-Reply-To: <0e194af93055a357db12d3acbd1b065b.NginxMailingListEnglish@forum.nginx.org> References: <201202230453.52368.ne@vbart.ru> <0e194af93055a357db12d3acbd1b065b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4F45F296.6050702@gmail.com> justin wrote: > Valentin, > > Ok, well I am almost there. The problem is since we need to define the > map block outside of server, the $user variable is not defined yet. > I.E. First. You are already in `http`, so adding another http block would not help you. > > http { > # $user is not defined yet, it comes from inside server {} > map $user $user_fastcgi_port { > default 9000; > justin 9001; > simon 9002; > } > } So why not map to $http_host? map $http_host $user_fastcgi_port { default 9000; justin.my.domain.com 9001; simon.my.domain.com 9002; } You'll save yourself one regexp search. > > server { > # Set's $user > server_name ~^(?.+)\.my\.domain\.com$; > } -- Sphinx of black quartz judge my vow. From appa at perusio.net Thu Feb 23 16:02:20 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Thu, 23 Feb 2012 17:02:20 +0100 Subject: Rewriting uri as querystring using nginx? 
In-Reply-To: <97a21bcbf3c43747fdfd4763bb05021a.NginxMailingListEnglish@forum.nginx.org>
References: <97a21bcbf3c43747fdfd4763bb05021a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <87vcmxfshf.wl%appa@perusio.net>

On 23 Fev 2012 06h45 CET, nginx-forum at nginx.us wrote:

> How do I rewrite URIs of the form
>
> http://domain1/one/two?three=four
>
> to
>
> http://domain2?path=http%3A%2F%2Fdomain1%2Fone%2Ftwo%3Fthree%3Dfour

To escape you need to do a capture.

rewrite ^(.*)$ http://domain2?path=$1;

--- appa

From nginx-forum at nginx.us Thu Feb 23 16:45:59 2012
From: nginx-forum at nginx.us (JerW)
Date: Thu, 23 Feb 2012 11:45:59 -0500 (EST)
Subject: 502 Bad Gateway - Wordpress
In-Reply-To: 
References: 
Message-ID: <4a627e25ce0e647a21e2a29f440bf97e.NginxMailingListEnglish@forum.nginx.org>

Nevermind, I fixed this. Not sure how exactly, but I am now able to serve php files just fine!!!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222873,222903#msg-222903

From nginx-forum at nginx.us Thu Feb 23 19:23:38 2012
From: nginx-forum at nginx.us (justin)
Date: Thu, 23 Feb 2012 14:23:38 -0500 (EST)
Subject: Map FastCGI Ports Dynamically
In-Reply-To: <201202231353.59170.ne@vbart.ru>
References: <201202231353.59170.ne@vbart.ru>
Message-ID: <5ac39748e0506d10c30f80868d3545d0.NginxMailingListEnglish@forum.nginx.org>

Volodymyr,

Thanks for the reply.

map $http_host $user_fastcgi_port {
    default 9000;
    justin.my.domain.com 9001;
    simon.my.domain.com 9002;
}

server_name ~^(?<user>.+)\.my\.domain\.com$;
fastcgi_pass php1.local.domain.com:$user_fastcgi_port;

Getting 502 Bad Gateway. How can I debug and see what the value of $user_fastcgi_port is being set as?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222820,222906#msg-222906

From nginx-forum at nginx.us Thu Feb 23 21:12:49 2012
From: nginx-forum at nginx.us (caleboconnell)
Date: Thu, 23 Feb 2012 16:12:49 -0500 (EST)
Subject: can't run php in aliased directory outside the root directory
Message-ID: 

A quick explanation as to what I'm trying to accomplish. We have a website where we want to keep a media folder outside the root. This holds all of our large images and videos and mp3 files. We don't want to keep this in our git repo for deployment since it's only used on the live site. We've used root to change the root location of a path name.

Recently we've wanted to store some php files in this media folder, in a sub-directory, to work as front ends to some large swf files we're hosting. I've added an additional location block to hopefully match the request to this sub-directory in the media folder to host some php files. When I go to the page, instead of processing the php, it just downloads the file. Below is the config section of our sites-available file that defines the site.

The actual path to the PHP file we want to run is: /var/www/media/courses/OCT-CIPPE/player.php.
The path will always be /var/www/media/courses but the OCT-CIPPE will change based on the program we want to host. The php file will always be titled player.php.

Any help/suggestions as to how to change this so it will see the php file as a file it passes to php5-fpm.
## Web conference alias and flash video settings location ^~ /media { root /var/www; } location ~ /media/courses/.*\.php$ { root /var/www; if ($fastcgi_script_name ~ /media/courses(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass unix:/tmp/phpfpm.sock; fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; include fastcgi_params; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222911,222911#msg-222911 From francis at daoine.org Thu Feb 23 22:37:16 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 23 Feb 2012 22:37:16 +0000 Subject: can't run php in aliased directory outside the root directory In-Reply-To: References: Message-ID: <20120223223716.GA3976@craic.sysops.org> On Thu, Feb 23, 2012 at 04:12:49PM -0500, caleboconnell wrote: Hi there, There is an issue with using php in an aliased location; but you don't seem to use "alias" in your configuration at all, so that's not the problem here. > The actual path to the PHP fie we want to run is: > /var/www/media/courses/OCT-CIPPE/player.php. > The path will always be /var/www/media/courses but the OCT-CIPPE will > change based on the program we want to host. The php file will always > be titled player.php. > > Any help/suggestions as to how to change this so it will see the php > file as a file it passes to php5-fpm. > > ## Web conference alias and flash video settings > location ^~ /media { The "^~" there means that, if this is the most specific prefix-match location, the regex locations are not checked. http://www.nginx.org/en/docs/http/ngx_http_core_module.html#location > root /var/www; > } > > location ~ /media/courses/.*\.php$ { And this one is a regex location. So this is not used at all. Which is why your php script is served as-is, without fastcgi processing. >From what you say above, this regex could be tightened to require /player.php at the end. And maybe to require no / between courses/ and /player.php > root /var/www; > if ($fastcgi_script_name ~ /media/courses(/.*\.php)$) { > set $valid_fastcgi_script_name $1; > } I'm not sure what that part tries to do; it's an "if" inside a "location", so it probably does not do what you expect. http://wiki.nginx.org/IfIsEvil You don't seem to use $valid_fastcgi_script_name anyway. I suspect that if you move the "location ~" block inside the "location ^~" block, you'll either see things Just Work, or at least see some more useful error messages to point you at any other problems. The "root" and "if" parts above are then probably ok to remove. > fastcgi_pass unix:/tmp/phpfpm.sock; > fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; > include fastcgi_params; And the last two lines could become "include fastcgi.conf;", or alternatively $document_root could be used instead of /var/www. Good luck with it, f -- Francis Daly francis at daoine.org From roberto at unbit.it Fri Feb 24 09:23:22 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 10:23:22 +0100 Subject: a new uWSGI PHP plugin is available Message-ID: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> Hi everyone, uWSGI has got a new php plugin, allowing you to run php apps at full speed (read: not in CGI mode) and getting all of the uWSGI features (like adaptive process spawning and jailing technics). In addition to this a bunch of uwsgi api functions has been added, allowing your php apps to interact with other apps hosted in uWSGI. http://projects.unbit.it/uwsgi/wiki/PHP The build system on redhat-based distros (fedora, centos...) 
is very easy, but for debian/ubuntu you need to rebuild php or use some ppa as debian (still) does not include the libphp. In my company, we are using this plugin proxied behind nginx from about 2 weeks without problems. Every report will be wellcomed. Please, do not ask me to compare it with php-fpm as i have never used it at a level allowing me to make a fair analysis :( -- Roberto De Ioris http://unbit.it From edho at myconan.net Fri Feb 24 10:44:46 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 24 Feb 2012 17:44:46 +0700 Subject: a new uWSGI PHP plugin is available In-Reply-To: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> Message-ID: <4F476A1E.6040905@myconan.net> On 2012-02-24 16:23, Roberto De Ioris wrote: > In my company, we are using this plugin proxied behind nginx from about 2 > weeks without problems. Every report will be wellcomed. > Segfaulted on my server. !!! uWSGI process 31078 got Segmentation Fault !!! *** backtrace of 31078 *** ./uwsgi(uwsgi_backtrace+0x2a) [0x80779ea] ./uwsgi(uwsgi_segfault+0x2c) [0x8077adc] [0x865420] ./uwsgi(wsgi_req_recv+0x7b) [0x805640b] ./uwsgi(simple_loop+0x136) [0x8070dc6] ./uwsgi(uwsgi_ignition+0x196) [0x8074c16] ./uwsgi(uwsgi_start+0x2526) [0x80771a6] ./uwsgi(main+0x1584) [0x807a654] /lib/libc.so.6(__libc_start_main+0xdc) [0xabceac] ./uwsgi [0x8054a91] *** end of backtrace *** DAMN ! worker 3 (pid: 31078) died :( trying respawn ... Respawned uWSGI worker 3 (new pid: 31083) Running: - CentOS 5.7 - PHP 5.3 from IUSCommunity repository (http://iuscommunity.org) - nginx 1.1.15 (self-compiled) - Python 2.4 From roberto at unbit.it Fri Feb 24 10:46:38 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 11:46:38 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: <4F476A1E.6040905@myconan.net> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> <4F476A1E.6040905@myconan.net> Message-ID: <40abdb59b823c0f0e558a4f2c3573a99.squirrel@manage.unbit.it> > On 2012-02-24 16:23, Roberto De Ioris wrote: >> In my company, we are using this plugin proxied behind nginx from about >> 2 >> weeks without problems. Every report will be wellcomed. >> > > Segfaulted on my server. > > !!! uWSGI process 31078 got Segmentation Fault !!! > *** backtrace of 31078 *** > ./uwsgi(uwsgi_backtrace+0x2a) [0x80779ea] > ./uwsgi(uwsgi_segfault+0x2c) [0x8077adc] > [0x865420] > ./uwsgi(wsgi_req_recv+0x7b) [0x805640b] > ./uwsgi(simple_loop+0x136) [0x8070dc6] > ./uwsgi(uwsgi_ignition+0x196) [0x8074c16] > ./uwsgi(uwsgi_start+0x2526) [0x80771a6] > ./uwsgi(main+0x1584) [0x807a654] > /lib/libc.so.6(__libc_start_main+0xdc) [0xabceac] > ./uwsgi [0x8054a91] > *** end of backtrace *** > DAMN ! worker 3 (pid: 31078) died :( trying respawn ... 
> Respawned uWSGI worker 3 (new pid: 31083) > > Running: > - CentOS 5.7 > - PHP 5.3 from IUSCommunity repository (http://iuscommunity.org) > - nginx 1.1.15 (self-compiled) > - Python 2.4 > > Be sure to fully rebuild (do a make clean) uWSGI if you have added devel packages during php_plugin compilation as they can modify CFLAGS setup -- Roberto De Ioris http://unbit.it From edho at myconan.net Fri Feb 24 11:00:30 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 24 Feb 2012 18:00:30 +0700 Subject: a new uWSGI PHP plugin is available In-Reply-To: <40abdb59b823c0f0e558a4f2c3573a99.squirrel@manage.unbit.it> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> <4F476A1E.6040905@myconan.net> <40abdb59b823c0f0e558a4f2c3573a99.squirrel@manage.unbit.it> Message-ID: <4F476DCE.3090609@myconan.net> On 2012-02-24 17:46, Roberto De Ioris wrote: > Be sure to fully rebuild (do a make clean) uWSGI if you have added > devel packages during php_plugin compilation as they can modify CFLAGS > setup Thank you, it's working fine now. The DOCUMENT_ROOT (or something else) doesn't seem to be passed properly though? I get "Not Found" (uwsgi's?) unless I start uwsgi with --php-docroot parameter. I'm following "Run php apps with nginx as frontend" chapter of the wiki. From roberto at unbit.it Fri Feb 24 11:15:24 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 12:15:24 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: <4F476DCE.3090609@myconan.net> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> <4F476A1E.6040905@myconan.net> <40abdb59b823c0f0e558a4f2c3573a99.squirrel@manage.unbit.it> <4F476DCE.3090609@myconan.net> Message-ID: > On 2012-02-24 17:46, Roberto De Ioris wrote: >> Be sure to fully rebuild (do a make clean) uWSGI if you have added >> devel packages during php_plugin compilation as they can modify CFLAGS >> setup > > Thank you, it's working fine now. The DOCUMENT_ROOT (or something else) > doesn't seem to be passed properly though? I get "Not Found" (uwsgi's?) > unless I start uwsgi with --php-docroot parameter. I'm following "Run > php apps with nginx as frontend" chapter of the wiki. > > you are right, there is a missing variable assignment for docroot, i have released a new snapshot (snapshot5) or you can use the latest tip. Obviously i suggest you to always list allowed docroots in uWSGI for increasing security. 
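(Illustrative aside, not part of this message: the nginx side of the "nginx as frontend" setup discussed in this thread is a plain uwsgi_pass location. A minimal sketch, assuming the plugin was started listening on 127.0.0.1:3031 and that modifier 14 selects PHP as described on the wiki page linked earlier; adjust both to your own setup.

location ~ \.php$ {
    include uwsgi_params;          # parameter set shipped with nginx
    uwsgi_modifier1 14;            # assumed PHP modifier; check the uWSGI wiki page
    uwsgi_pass 127.0.0.1:3031;     # must match the -s/--socket address uwsgi was started with
}

Static files and everything else can keep being served by nginx directly.)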
-- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Fri Feb 24 11:37:13 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 06:37:13 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> Message-ID: <80e564d758111cda858d5708535be674.NginxMailingListEnglish@forum.nginx.org> I tried building this on my Ubuntu 11.10 server with nginx-1.0.12 ./configure \ --prefix=/opt/nginx \ --conf-path=/etc/nginx/nginx.conf \ --pid-path=/var/run/nginx.pid \ --lock-path=/var/lock/nginx.lock \ --http-log-path=/var/log/nginx/access.log \ --error-log-path=/var/log/nginx/error.log \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --with-http_stub_status_module \ --with-http_ssl_module \ --with-http_gzip_static_module \ --add-module=/root/sources/uwsgi-1.1-snapshot5/nginx/ \ --user=www-data \ --group=www-data \ --without-mail_pop3_module \ --without-mail_imap_module \ --without-mail_smtp_module but I keep getting this error: objs/addon/nginx/ngx_http_uwsgi_module.o:(.data+0x0): multiple definition of `ngx_http_uwsgi_module' objs/src/http/modules/ngx_http_uwsgi_module.o:(.data+0x0): first defined here collect2: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory `/root/sources/nginx-1.0.12' make: *** [build] Error 2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222930#msg-222930 From roberto at unbit.it Fri Feb 24 11:39:29 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 12:39:29 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: <80e564d758111cda858d5708535be674.NginxMailingListEnglish@forum.nginx.org> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> <80e564d758111cda858d5708535be674.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I tried building this on my Ubuntu 11.10 server with nginx-1.0.12 > > ./configure \ > --prefix=/opt/nginx \ > --conf-path=/etc/nginx/nginx.conf \ > --pid-path=/var/run/nginx.pid \ > --lock-path=/var/lock/nginx.lock \ > --http-log-path=/var/log/nginx/access.log \ > --error-log-path=/var/log/nginx/error.log \ > --http-client-body-temp-path=/var/lib/nginx/body \ > --http-proxy-temp-path=/var/lib/nginx/proxy \ > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ > --http-scgi-temp-path=/var/lib/nginx/scgi \ > --with-http_stub_status_module \ > --with-http_ssl_module \ > --with-http_gzip_static_module \ > --add-module=/root/sources/uwsgi-1.1-snapshot5/nginx/ \ > --user=www-data \ > --group=www-data \ > --without-mail_pop3_module \ > --without-mail_imap_module \ > --without-mail_smtp_module > > but I keep getting this error: > > objs/addon/nginx/ngx_http_uwsgi_module.o:(.data+0x0): multiple > definition of `ngx_http_uwsgi_module' > objs/src/http/modules/ngx_http_uwsgi_module.o:(.data+0x0): first defined > here > collect2: ld returned 1 exit status > make[1]: *** [objs/nginx] Error 1 > make[1]: Leaving directory `/root/sources/nginx-1.0.12' > make: *** [build] Error 2 > > It looks like you have added the uwsgi patch to nginx, while nginx does not require it starting from 0.8.40 (it is included in the official distribution) -- Roberto De Ioris http://unbit.it From 
nginx-forum at nginx.us Fri Feb 24 11:42:05 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 06:42:05 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: References: Message-ID: Thanks for the promp reply, I wasn't aware it's already been included. Look forward to testing it out. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222932#msg-222932 From nginx-forum at nginx.us Fri Feb 24 13:33:11 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 08:33:11 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> Message-ID: <2358ac1eca01af28d325433d5c46e8ad.NginxMailingListEnglish@forum.nginx.org> I'm a bit stuck with 'running the php script in the uWSGI server' My PHP is built from source, as far as I understand I need to build it with --enable-embed ./configure --prefix=/opt/php5 --with-config-file-path=/opt/php5/etc --with-config-file-scan-dir=/opt/php5/etc/conf.d --with-curl --with-mhash --enable-cgi --with-pear --with-gd --with-jpeg-dir --with-png-dir --with-zlib --with-xpm-dir --with-freetype-dir --with-t1lib --with-mcrypt --with-mhash --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-openssl --with-xmlrpc --with-xsl --with-bz2 --with-gettext --with-fpm-user=www-data --with-fpm-group=www-data --disable-debug --enable-fpm --enable-exif --enable-wddx --enable-zip --enable-bcmath --enable-calendar --enable-ftp --enable-embed --enable-mbstring --enable-soap --enable-sockets --enable-shmop --enable-dba --enable-inline-optimization --enable-sysvsem --enable-sysvshm --enable-sysvmsg How would I then proceed to build the uWSGI php plugin? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222936#msg-222936 From roberto at unbit.it Fri Feb 24 13:42:28 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 14:42:28 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: <2358ac1eca01af28d325433d5c46e8ad.NginxMailingListEnglish@forum.nginx.org> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> <2358ac1eca01af28d325433d5c46e8ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I'm a bit stuck with 'running the php script in the uWSGI server' > > My PHP is built from source, as far as I understand I need to build it > with --enable-embed > > ./configure --prefix=/opt/php5 --with-config-file-path=/opt/php5/etc > --with-config-file-scan-dir=/opt/php5/etc/conf.d --with-curl > --with-mhash --enable-cgi --with-pear --with-gd --with-jpeg-dir > --with-png-dir --with-zlib --with-xpm-dir --with-freetype-dir > --with-t1lib --with-mcrypt --with-mhash --with-mysql=mysqlnd > --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-openssl > --with-xmlrpc --with-xsl --with-bz2 --with-gettext > --with-fpm-user=www-data --with-fpm-group=www-data --disable-debug > --enable-fpm --enable-exif --enable-wddx --enable-zip --enable-bcmath > --enable-calendar --enable-ftp --enable-embed --enable-mbstring > --enable-soap --enable-sockets --enable-shmop --enable-dba > --enable-inline-optimization --enable-sysvsem --enable-sysvshm > --enable-sysvmsg > > How would I then proceed to build the uWSGI php plugin? 
> download 1.1 uWSGI sources: http://projects.unbit.it/downloads/uwsgi-1.1-snapshot5.tar.gz uncompress and move to the resulting directory and run make if all goes well you will end with a binary named 'uwsgi' then build the php plugin with python uwsgiconfig.py --plugin plugins/php finally run (always from the same dir) ./uwsgi -s :3031 --plugins php now point nginx to that port as described here: http://projects.unbit.it/uwsgi/wiki/PHP -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Fri Feb 24 14:08:08 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 09:08:08 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: References: Message-ID: Thanks for all the help so far Roberto. When I run python uwsgiconfig.py --plugin plugins/php using profile: buildconf/default.ini detected include path: ['/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include', '/usr/local/include', '/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include-fixed', '/usr/include/x86_64-linux-gnu', '/usr/include'] *** uWSGI building and linking plugin plugins/php *** [gcc -pthread] ./php_plugin.so /usr/bin/ld: cannot find -lphp5 collect2: ld returned 1 exit status *** unable to build php plugin *** Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222939#msg-222939 From r at roze.lv Fri Feb 24 14:34:28 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 24 Feb 2012 16:34:28 +0200 Subject: a new uWSGI PHP plugin is available In-Reply-To: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> References: <3d48767af03d191288de489a162bea88.squirrel@manage.unbit.it> Message-ID: <2F517BF5F1C04809907A8CA578736D76@DD21> > Please, do not ask me to compare it with php-fpm as i have never used it at a level allowing me to make a fair analysis :( Would love to see some (even simple) benchmarks from someone who has played around both (fpm and the uwsgi). rr From roberto at unbit.it Fri Feb 24 14:38:18 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 15:38:18 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: References: Message-ID: <9b042751b545e7b34ca705f0fc534ac6.squirrel@manage.unbit.it> > Thanks for all the help so far Roberto. > > When I run > > python uwsgiconfig.py --plugin plugins/php > using profile: buildconf/default.ini > detected include path: ['/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include', > '/usr/local/include', > '/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include-fixed', > '/usr/include/x86_64-linux-gnu', '/usr/include'] > *** uWSGI building and linking plugin plugins/php *** > [gcc -pthread] ./php_plugin.so > /usr/bin/ld: cannot find -lphp5 > collect2: ld returned 1 exit status > *** unable to build php plugin *** Your php-config script is not exporting library dir. 
Use this trick: LDFLAGS="-Lpath" LD_RUN_PATH="path" python uwsgiconfig.py --plugin plugins/php where path is the directory containing libphp5.so > -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Fri Feb 24 14:45:18 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 09:45:18 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: <9b042751b545e7b34ca705f0fc534ac6.squirrel@manage.unbit.it> References: <9b042751b545e7b34ca705f0fc534ac6.squirrel@manage.unbit.it> Message-ID: I've tried that, but same result: LDFLAGS="-Lpath" LD_RUN_PATH="/opt/php5/lib/libphp5.so" python uwsgiconfig.py --plugin plugins/php using profile: buildconf/default.ini detected include path: ['/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include', '/usr/local/include', '/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include-fixed', '/usr/include/x86_64-linux-gnu', '/usr/include'] *** uWSGI building and linking plugin plugins/php *** [gcc -pthread] ./php_plugin.so /usr/bin/ld: cannot find -lphp5 collect2: ld returned 1 exit status *** unable to build php plugin *** Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222943#msg-222943 From roberto at unbit.it Fri Feb 24 14:48:20 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 24 Feb 2012 15:48:20 +0100 Subject: a new uWSGI PHP plugin is available In-Reply-To: References: <9b042751b545e7b34ca705f0fc534ac6.squirrel@manage.unbit.it> Message-ID: <75cf6ed7f4d3b68155a8d11f96d2a33a.squirrel@manage.unbit.it> > I've tried that, but same result: > > LDFLAGS="-Lpath" LD_RUN_PATH="/opt/php5/lib/libphp5.so" python > uwsgiconfig.py --plugin plugins/php it must be LDFLAGS="-L/opt/php5/lib/" LD_RUN_PATH="/opt/php5/lib/" > using profile: buildconf/default.ini > detected include path: ['/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include', > '/usr/local/include', > '/usr/lib/gcc/x86_64-linux-gnu/4.6.1/include-fixed', > '/usr/include/x86_64-linux-gnu', '/usr/include'] > *** uWSGI building and linking plugin plugins/php *** > [gcc -pthread] ./php_plugin.so > /usr/bin/ld: cannot find -lphp5 > collect2: ld returned 1 exit status > *** unable to build php plugin *** > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222924,222943#msg-222943 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Fri Feb 24 15:07:57 2012 From: nginx-forum at nginx.us (athalas) Date: Fri, 24 Feb 2012 10:07:57 -0500 (EST) Subject: a new uWSGI PHP plugin is available In-Reply-To: References: Message-ID: <16be7148d54f1f38b770774dca182883.NginxMailingListEnglish@forum.nginx.org> Thanks Robert - finally have it up and running. Just did some quick testing and seeing 15% improvement in page load times, we've previously been using php-fpm. Looks very promising so far! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222924,222946#msg-222946 From mp3geek at gmail.com Fri Feb 24 23:48:34 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Sat, 25 Feb 2012 12:48:34 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> Message-ID: Any clues on this? 
On Thu, Feb 23, 2012 at 10:36 AM, Ryan Brown wrote: > Okay, manage to get it to compile, > > make[1]: Entering directory `/root/trunk' > /opt/intel/bin/icc -c -pipe ?-O -W -Wall -Wpointer-arith > -Wno-unused-parameter -Wunused-function -Wunused-variable > -Wunused-value -Werror -g ?-I src/core -I src/event -I > src/event/modules -I src/os/unix -I objs \ > ? ? ? ? ? ? ? ?-o objs/src/core/ngx_string.o \ > ? ? ? ? ? ? ? ?src/core/ngx_string.c > icc: command line warning #10006: ignoring unknown option '-Wunused-value' > src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type > ? ? ?state = 0; > ? ? ? ? ? ?^ > > compilation aborted for src/core/ngx_string.c (code 2) > make[1]: *** [objs/src/core/ngx_string.o] Error 2 From mdounin at mdounin.ru Sat Feb 25 01:09:01 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 25 Feb 2012 05:09:01 +0400 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> Message-ID: <20120225010901.GU67687@mdounin.ru> Hello! On Sat, Feb 25, 2012 at 12:48:34PM +1300, Ryan Brown wrote: > Any clues on this? It looks like you somehow persuaded nginx that your compiler is gcc, and it uses command line arguments appropiate for gcc instead of ones for icc. See auto/cc/icc for a long list of warnings which should be ignored with icc. Maxim Dounin > > On Thu, Feb 23, 2012 at 10:36 AM, Ryan Brown wrote: > > Okay, manage to get it to compile, > > > > make[1]: Entering directory `/root/trunk' > > /opt/intel/bin/icc -c -pipe ?-O -W -Wall -Wpointer-arith > > -Wno-unused-parameter -Wunused-function -Wunused-variable > > -Wunused-value -Werror -g ?-I src/core -I src/event -I > > src/event/modules -I src/os/unix -I objs \ > > ? ? ? ? ? ? ? ?-o objs/src/core/ngx_string.o \ > > ? ? ? ? ? ? ? ?src/core/ngx_string.c > > icc: command line warning #10006: ignoring unknown option '-Wunused-value' > > src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type > > ? ? ?state = 0; > > ? ? ? ? ? ?^ > > > > compilation aborted for src/core/ngx_string.c (code 2) > > make[1]: *** [objs/src/core/ngx_string.o] Error 2 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mp3geek at gmail.com Sat Feb 25 01:22:36 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Sat, 25 Feb 2012 14:22:36 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: <20120225010901.GU67687@mdounin.ru> References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> Message-ID: Though it actually compiles to a point then errors... 
(ignoring the -Wunused-value warnings) [root at bob:~/trunk]# export | grep cc CC='icc -I/usr/include/i386-linux-gnu/' LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 cc=icc [root at bob:~/trunk]# icc --version icc (ICC) 12.1.3 20120212 Copyright (C) 1985-2012 Intel Corporation. All rights reserved. cc: command line warning #10006: ignoring unknown option '-Wunused-value' icc -I/usr/include/i386-linux-gnu/ -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_hash.o \ src/core/ngx_hash.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' icc -I/usr/include/i386-linux-gnu/ -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_buf.o \ src/core/ngx_buf.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' icc -I/usr/include/i386-linux-gnu/ -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_queue.o \ src/core/ngx_queue.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' icc -I/usr/include/i386-linux-gnu/ -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_output_chain.o \ src/core/ngx_output_chain.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' icc -I/usr/include/i386-linux-gnu/ -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/core/ngx_string.o \ src/core/ngx_string.c icc: command line warning #10006: ignoring unknown option '-Wunused-value' src/core/ngx_string.c(1519): error: identifier "bool" is undefined (bool) state = 0; ^ src/core/ngx_string.c(1519): error: expected a ";" (bool) state = 0; ^ compilation aborted for src/core/ngx_string.c (code 2) make[1]: *** [objs/src/core/ngx_string.o] Error 2 make[1]: Leaving directory `/root/trunk' make: *** [build] Error 2 from a ./configure [root at bob:~/trunk]# ./configure checking for OS + Linux 3.2.5-ck1 i686 checking for C compiler ... found + using GNU C compiler + gcc version: 4.6.0 compatibility) checking for gcc -pipe switch ... 
found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found But when it comes to compiling it uses icc (as above), the machine does have gcc-4.6 installed also I didn't see any error messages in auto/cc/icc, I did noticed it reference icc up to 11.x but not 12? 2012/2/25 Maxim Dounin : > Hello! > > On Sat, Feb 25, 2012 at 12:48:34PM +1300, Ryan Brown wrote: > >> Any clues on this? > > It looks like you somehow persuaded nginx that your compiler is > gcc, and it uses command line arguments appropiate for gcc instead > of ones for icc. ?See auto/cc/icc for a long list of warnings which > should be ignored with icc. > > Maxim Dounin > >> >> On Thu, Feb 23, 2012 at 10:36 AM, Ryan Brown wrote: >> > Okay, manage to get it to compile, >> > >> > make[1]: Entering directory `/root/trunk' >> > /opt/intel/bin/icc -c -pipe ?-O -W -Wall -Wpointer-arith >> > -Wno-unused-parameter -Wunused-function -Wunused-variable >> > -Wunused-value -Werror -g ?-I src/core -I src/event -I >> > src/event/modules -I src/os/unix -I objs \ >> > ? ? ? ? ? ? ? ?-o objs/src/core/ngx_string.o \ >> > ? ? ? ? ? ? ? ?src/core/ngx_string.c >> > icc: command line warning #10006: ignoring unknown option '-Wunused-value' >> > src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type >> > ? ? ?state = 0; >> > ? ? ? ? ? ?^ >> > >> > compilation aborted for src/core/ngx_string.c (code 2) >> > make[1]: *** [objs/src/core/ngx_string.o] Error 2 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sat Feb 25 02:06:28 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 25 Feb 2012 06:06:28 +0400 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> Message-ID: <20120225020628.GW67687@mdounin.ru> Hello! On Sat, Feb 25, 2012 at 02:22:36PM +1300, Ryan Brown wrote: > Though it actually compiles to a point then errors... 
(ignoring the > -Wunused-value warnings) > > [root at bob:~/trunk]# export | grep cc > CC='icc -I/usr/include/i386-linux-gnu/' > LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 > LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 > cc=icc > [root at bob:~/trunk]# icc --version > icc (ICC) 12.1.3 20120212 > Copyright (C) 1985-2012 Intel Corporation. All rights reserved. Could you please show "icc -v" and "icc -V" output? [...] > -I src/event/modules -I src/os/unix -I objs \ > -o objs/src/core/ngx_string.o \ > src/core/ngx_string.c > icc: command line warning #10006: ignoring unknown option '-Wunused-value' > src/core/ngx_string.c(1519): error: identifier "bool" is undefined > (bool) state = 0; > ^ > > src/core/ngx_string.c(1519): error: expected a ";" > (bool) state = 0; > ^ Note: this is *not* original nginx code. There is no "(bool)" cast in nginx here, just plain assignment. With original code you'll get "error #188: enumerated type mixed with another type" as in your previous message, and this warning is expected to be disabled with icc, see auto/cc/icc: ... # enumerated type mixed with another type CFLAGS="$CFLAGS -wd188" ... > from a ./configure > > [root at bob:~/trunk]# ./configure > checking for OS > + Linux 3.2.5-ck1 i686 > checking for C compiler ... found > + using GNU C compiler > + gcc version: 4.6.0 compatibility) It looks like newer icc pretend to be compatible with gcc somewhere in "icc -v" output, and nginx incorrectly mishandles it as gcc as a result. Something like this should help: diff --git a/auto/cc/name b/auto/cc/name --- a/auto/cc/name +++ b/auto/cc/name @@ -64,16 +64,16 @@ if [ "$CC" = bcc32 ]; then echo " + using Borland C++ compiler" else +if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then + NGX_CC_NAME=icc + echo " + using Intel C++ compiler" + +else if `$CC -v 2>&1 | grep 'gcc version' >/dev/null 2>&1`; then NGX_CC_NAME=gcc echo " + using GNU C compiler" else -if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then - NGX_CC_NAME=icc - echo " + using Intel C++ compiler" - -else if `$CC -V 2>&1 | grep 'Sun C' >/dev/null 2>&1`; then NGX_CC_NAME=sunc echo " + using Sun C compiler" Alternatively, you may set CFLAGS in the environment yourself, this will prevent nginx from setting them by it's own. Something like CFLAGS="-W -g" ./configure should work. > But when it comes to compiling it uses icc (as above), the machine > does have gcc-4.6 installed also > > I didn't see any error messages in auto/cc/icc, I did noticed it > reference icc up to 11.x but not 12? We don't generally use icc, though occasionally check compilation if found one available. There were no tests with 12.x yet, though it's likely will be ok once correctly detected. Maxim Dounin > > > 2012/2/25 Maxim Dounin : > > Hello! 
> > > > On Sat, Feb 25, 2012 at 12:48:34PM +1300, Ryan Brown wrote: > > > >> Any clues on this? > > > > It looks like you somehow persuaded nginx that your compiler is > > gcc, and it uses command line arguments appropiate for gcc instead > > of ones for icc. ?See auto/cc/icc for a long list of warnings which > > should be ignored with icc. > > > > Maxim Dounin > > > >> > >> On Thu, Feb 23, 2012 at 10:36 AM, Ryan Brown wrote: > >> > Okay, manage to get it to compile, > >> > > >> > make[1]: Entering directory `/root/trunk' > >> > /opt/intel/bin/icc -c -pipe ?-O -W -Wall -Wpointer-arith > >> > -Wno-unused-parameter -Wunused-function -Wunused-variable > >> > -Wunused-value -Werror -g ?-I src/core -I src/event -I > >> > src/event/modules -I src/os/unix -I objs \ > >> > ? ? ? ? ? ? ? ?-o objs/src/core/ngx_string.o \ > >> > ? ? ? ? ? ? ? ?src/core/ngx_string.c > >> > icc: command line warning #10006: ignoring unknown option '-Wunused-value' > >> > src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type > >> > ? ? ?state = 0; > >> > ? ? ? ? ? ?^ > >> > > >> > compilation aborted for src/core/ngx_string.c (code 2) > >> > make[1]: *** [objs/src/core/ngx_string.o] Error 2 > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mp3geek at gmail.com Sat Feb 25 03:17:18 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Sat, 25 Feb 2012 16:17:18 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: <20120225020628.GW67687@mdounin.ru> References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> <20120225020628.GW67687@mdounin.ru> Message-ID: Will try your suggestions.. [root at bob:auto/cc]# icc -v icc version 12.1.3 (gcc version 4.6.0 compatibility) [root at bob:auto/cc]# icc -V Intel(R) C Compiler XE for applications running on IA-32, Version 12.1.3.293 Build 20120212 Copyright (C) 1985-2012 Intel Corporation. All rights reserved. FOR NON-COMMERCIAL USE ONL 2012/2/25 Maxim Dounin : > Hello! > > On Sat, Feb 25, 2012 at 02:22:36PM +1300, Ryan Brown wrote: > >> Though it actually compiles to a point then errors... 
(ignoring the >> -Wunused-value warnings) >> >> [root at bob:~/trunk]# export | grep cc >> CC='icc ?-I/usr/include/i386-linux-gnu/' >> LD_LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21:/opt/intel/composer_xe_2011_sp1.9.293/debugger/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/lib/ia32 >> LIBRARY_PATH=/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/../compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32:/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/ia32//cc4.1.0_libc2.4_kernel2.6.16.21 >> cc=icc >> [root at bob:~/trunk]# icc --version >> icc (ICC) 12.1.3 20120212 >> Copyright (C) 1985-2012 Intel Corporation. ?All rights reserved. > > Could you please show "icc -v" and "icc -V" output? > > [...] > >> -I src/event/modules -I src/os/unix -I objs \ >> ? ? ? ? ? ? ? ? -o objs/src/core/ngx_string.o \ >> ? ? ? ? ? ? ? ? src/core/ngx_string.c >> icc: command line warning #10006: ignoring unknown option '-Wunused-value' >> src/core/ngx_string.c(1519): error: identifier "bool" is undefined >> ? ? ? (bool) state = 0; >> ? ? ? ?^ >> >> src/core/ngx_string.c(1519): error: expected a ";" >> ? ? ? (bool) state = 0; >> ? ? ? ? ? ? ?^ > > Note: this is *not* original nginx code. ?There is no "(bool)" > cast in nginx here, just plain assignment. > > With original code you'll get "error #188: enumerated type mixed > with another type" as in your previous message, and this warning > is expected to be disabled with icc, see auto/cc/icc: > > ... > # enumerated type mixed with another type > CFLAGS="$CFLAGS -wd188" > ... > >> from a ./configure >> >> [root at bob:~/trunk]# ./configure >> checking for OS >> ?+ Linux 3.2.5-ck1 i686 >> checking for C compiler ... found >> ?+ using GNU C compiler >> ?+ gcc version: 4.6.0 compatibility) > > It looks like newer icc pretend to be compatible with gcc > somewhere in "icc -v" output, and nginx incorrectly mishandles it > as gcc as a result. > > Something like this should help: > > diff --git a/auto/cc/name b/auto/cc/name > --- a/auto/cc/name > +++ b/auto/cc/name > @@ -64,16 +64,16 @@ if [ "$CC" = bcc32 ]; then > ? ? echo " + using Borland C++ compiler" > > ?else > +if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then > + ? ?NGX_CC_NAME=icc > + ? ?echo " + using Intel C++ compiler" > + > +else > ?if `$CC -v 2>&1 | grep 'gcc version' >/dev/null 2>&1`; then > ? ? NGX_CC_NAME=gcc > ? ? echo " + using GNU C compiler" > > ?else > -if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then > - ? ?NGX_CC_NAME=icc > - ? ?echo " + using Intel C++ compiler" > - > -else > ?if `$CC -V 2>&1 | grep 'Sun C' >/dev/null 2>&1`; then > ? ? NGX_CC_NAME=sunc > ? ? echo " + using Sun C compiler" > > > Alternatively, you may set CFLAGS in the environment yourself, > this will prevent nginx from setting them by it's own. ?Something > like > > CFLAGS="-W -g" ./configure > > should work. 
> >> But when it comes to compiling it uses icc (as above), the machine >> does have gcc-4.6 installed also >> >> I didn't see any error messages in auto/cc/icc, I did noticed it >> reference icc up to 11.x but not 12? > > We don't generally use icc, though occasionally check compilation > if found one available. ?There were no tests with 12.x yet, though > it's likely will be ok once correctly detected. > > Maxim Dounin > >> >> >> 2012/2/25 Maxim Dounin : >> > Hello! >> > >> > On Sat, Feb 25, 2012 at 12:48:34PM +1300, Ryan Brown wrote: >> > >> >> Any clues on this? >> > >> > It looks like you somehow persuaded nginx that your compiler is >> > gcc, and it uses command line arguments appropiate for gcc instead >> > of ones for icc. ?See auto/cc/icc for a long list of warnings which >> > should be ignored with icc. >> > >> > Maxim Dounin >> > >> >> >> >> On Thu, Feb 23, 2012 at 10:36 AM, Ryan Brown wrote: >> >> > Okay, manage to get it to compile, >> >> > >> >> > make[1]: Entering directory `/root/trunk' >> >> > /opt/intel/bin/icc -c -pipe ?-O -W -Wall -Wpointer-arith >> >> > -Wno-unused-parameter -Wunused-function -Wunused-variable >> >> > -Wunused-value -Werror -g ?-I src/core -I src/event -I >> >> > src/event/modules -I src/os/unix -I objs \ >> >> > ? ? ? ? ? ? ? ?-o objs/src/core/ngx_string.o \ >> >> > ? ? ? ? ? ? ? ?src/core/ngx_string.c >> >> > icc: command line warning #10006: ignoring unknown option '-Wunused-value' >> >> > src/core/ngx_string.c(1519): error #188: enumerated type mixed with another type >> >> > ? ? ?state = 0; >> >> > ? ? ? ? ? ?^ >> >> > >> >> > compilation aborted for src/core/ngx_string.c (code 2) >> >> > make[1]: *** [objs/src/core/ngx_string.o] Error 2 >> >> >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mp3geek at gmail.com Sat Feb 25 04:28:58 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Sat, 25 Feb 2012 17:28:58 +1300 Subject: Using nginx 1.1 with the intel compiler In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> <20120225020628.GW67687@mdounin.ru> Message-ID: Okay, it managed to compile with your suggested patch, its safe to push into trunk :) nginx-dev+openssl-dev compiled under icc and working >> diff --git a/auto/cc/name b/auto/cc/name >> --- a/auto/cc/name >> +++ b/auto/cc/name >> @@ -64,16 +64,16 @@ if [ "$CC" = bcc32 ]; then >> ? ? echo " + using Borland C++ compiler" >> >> ?else >> +if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then >> + ? ?NGX_CC_NAME=icc >> + ? ?echo " + using Intel C++ compiler" >> + >> +else >> ?if `$CC -v 2>&1 | grep 'gcc version' >/dev/null 2>&1`; then >> ? ? NGX_CC_NAME=gcc >> ? ? echo " + using GNU C compiler" >> >> ?else >> -if `$CC -V 2>&1 | grep '^Intel(R) C' >/dev/null 2>&1`; then >> - ? ?NGX_CC_NAME=icc >> - ? ?echo " + using Intel C++ compiler" >> - >> -else >> ?if `$CC -V 2>&1 | grep 'Sun C' >/dev/null 2>&1`; then >> ? ? NGX_CC_NAME=sunc >> ? ? 
echo " + using Sun C compiler" >> From maxim at nginx.com Sat Feb 25 13:01:40 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Sat, 25 Feb 2012 17:01:40 +0400 Subject: icc access (was Re: Using nginx 1.1 with the intel compiler) In-Reply-To: References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> <20120225020628.GW67687@mdounin.ru> Message-ID: <4F48DBB4.1090807@nginx.com> Generally speaking, we (nginx team) will be grateful for an access to a host with Intel compiler suit (Intel Parallel Studio XE or Composer XE) with an appropriate license there. Or perhaps someone has a right contact in Intel to talk about that. On 2/25/12 8:28 AM, Ryan Brown wrote: > Okay, it managed to compile with your suggested patch, its safe > to push into trunk :) > > nginx-dev+openssl-dev compiled under icc and working > > >>> diff --git a/auto/cc/name b/auto/cc/name --- a/auto/cc/name >>> +++ b/auto/cc/name @@ -64,16 +64,16 @@ if [ "$CC" = bcc32 ]; >>> then echo " + using Borland C++ compiler" >>> >>> else +if `$CC -V 2>&1 | grep '^Intel(R) C'>/dev/null 2>&1`; >>> then + NGX_CC_NAME=icc + echo " + using Intel C++ >>> compiler" + +else if `$CC -v 2>&1 | grep 'gcc >>> version'>/dev/null 2>&1`; then NGX_CC_NAME=gcc echo " + >>> using GNU C compiler" >>> >>> else -if `$CC -V 2>&1 | grep '^Intel(R) C'>/dev/null 2>&1`; >>> then - NGX_CC_NAME=icc - echo " + using Intel C++ >>> compiler" - -else if `$CC -V 2>&1 | grep 'Sun C'>/dev/null >>> 2>&1`; then NGX_CC_NAME=sunc echo " + using Sun C compiler" >>> > > _______________________________________________ nginx mailing > list nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/ From mp3geek at gmail.com Sat Feb 25 13:56:44 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Sun, 26 Feb 2012 02:56:44 +1300 Subject: icc access (was Re: Using nginx 1.1 with the intel compiler) In-Reply-To: <4F48DBB4.1090807@nginx.com> References: <20120222134556.GV67687@mdounin.ru> <20120225010901.GU67687@mdounin.ru> <20120225020628.GW67687@mdounin.ru> <4F48DBB4.1090807@nginx.com> Message-ID: You're free to download/install the icc, Installs /opt/intel by default. http://software.intel.com/en-us/articles/intel-software-evaluation-center/ But looking at the suggested patch, it looks safe? just changing the order? On Sun, Feb 26, 2012 at 2:01 AM, Maxim Konovalov wrote: > Generally speaking, we (nginx team) will be grateful for an access to a > host with Intel compiler suit (Intel Parallel Studio XE or Composer > XE) with an appropriate license there. > > Or perhaps someone has a right contact in Intel to talk about that. > > On 2/25/12 8:28 AM, Ryan Brown wrote: >> >> Okay, it managed to compile with your suggested patch, its safe >> to push into trunk :) >> >> nginx-dev+openssl-dev compiled under icc and working >> >> >>>> diff --git a/auto/cc/name b/auto/cc/name --- a/auto/cc/name >>>> +++ b/auto/cc/name @@ -64,16 +64,16 @@ if [ "$CC" = bcc32 ]; >>>> then echo " + using Borland C++ compiler" >>>> >>>> else +if `$CC -V 2>&1 | grep '^Intel(R) C'>/dev/null 2>&1`; >>>> then + ? ?NGX_CC_NAME=icc + ? ?echo " + using Intel C++ >>>> compiler" + +else if `$CC -v 2>&1 | grep 'gcc >>>> version'>/dev/null 2>&1`; then NGX_CC_NAME=gcc echo " + >>>> using GNU C compiler" >>>> >>>> else -if `$CC -V 2>&1 | grep '^Intel(R) C'>/dev/null 2>&1`; >>>> then - ? ?NGX_CC_NAME=icc - ? 
?echo " + using Intel C++ >>>> compiler" - -else if `$CC -V 2>&1 | grep 'Sun C'>/dev/null >>>> 2>&1`; then NGX_CC_NAME=sunc echo " + using Sun C compiler" >>>> >> >> _______________________________________________ nginx mailing >> list nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > Maxim Konovalov > +7 (910) 4293178 > http://nginx.com/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Feb 26 00:02:15 2012 From: nginx-forum at nginx.us (edo888) Date: Sat, 25 Feb 2012 19:02:15 -0500 (EST) Subject: How to merge subrequest header. In-Reply-To: References: Message-ID: <0b631a50230fc5a43ff193064ff6539a.NginxMailingListEnglish@forum.nginx.org> Hi, Can you please share some piece of code. I want to make a post subrequest to send the response body to fastcgi and filter it there. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,152939,222982#msg-222982 From nginx-forum at nginx.us Sun Feb 26 00:06:26 2012 From: nginx-forum at nginx.us (edo888) Date: Sat, 25 Feb 2012 19:06:26 -0500 (EST) Subject: How to make a post subrequest with response body from parent?. Message-ID: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> Hi, I want to make a post subrequest to fastcgi with parent's response body to filter it there and return to client. I cannot find any manual about making a post subrequests. Can you share some basic code with explanation? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222983,222983#msg-222983 From nginx-forum at nginx.us Sun Feb 26 03:59:50 2012 From: nginx-forum at nginx.us (altiamge) Date: Sat, 25 Feb 2012 22:59:50 -0500 (EST) Subject: Regular Expression global redirect Message-ID: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> I'm using nginx as a reverse proxy for about 2000 websites. I'm trying to find a good way to redirect all www traffic to nonwww addresses. I don't want to have a separate entry for every domain...just a global redirect in the server block preferably. I found lots of examples to do this one domain at a time, but does anyone have any suggestions on how to do it for the whole server? I was thinking of extracting the domain something like this then using an if statement, but I understand that if's are not recommended: server_name ~^(\.)?(?.+)$; thanks, altimage here's my server block: server { listen 80; server_name _; location / { proxy_pass http://websites; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222984,222984#msg-222984 From appa at perusio.net Sun Feb 26 04:41:08 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 26 Feb 2012 05:41:08 +0100 Subject: Regular Expression global redirect In-Reply-To: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87ehti5hqz.wl%appa@perusio.net> On 26 Fev 2012 04h59 CET, nginx-forum at nginx.us wrote: > I'm using nginx as a reverse proxy for about 2000 websites. I'm > trying to find a good way to redirect all www traffic to nonwww > addresses. I don't want to have a separate entry for every > domain...just a global redirect in the server block preferably. I > found lots of examples to do this one domain at a time, but does > anyone have any suggestions on how to do it for the whole server? 
> > I was thinking of extracting the domain something like this then > using an if statement, but I understand that if's are not > recommended: > > server_name ~^(\.)?(?.+)$; > > thanks, > altimage > > here's my server block: > > server { > listen 80; > server_name _; > > location / { > proxy_pass http://websites; > } > } Try: server { server_name ^~www\.(?.*)$; return 301 http://$domain; } server { server_name ^~(?[^\.]*)\.(?[^\.]*)$; location / { proxy_pass http://$domain_name.$tld; } } --- appa From nginx-forum at nginx.us Sun Feb 26 05:13:31 2012 From: nginx-forum at nginx.us (tinkutalking) Date: Sun, 26 Feb 2012 00:13:31 -0500 (EST) Subject: How to Cache dynamic content using Nginx when sessions are involved?How to Cache dynamic content using Nginx when sessions are involved? How to Cache dynamic content using Nginx when sessions are involved? How to Cache dynamic content using Nginx when ses Message-ID: This site explains how to create static files from dynamic content using Nginx. http://mark.ossdl.de/2009/07/nginx-to-create-static-files-from-dynamic-content/ My question is this: can I achieve the same if login sessions are involved. ie. when I want to serve content to only registered users and not otherwise. So how to overcome sessions when it comes to caching and finally to use the cache next time for another session? Detailed scenario: The goal of my website is to serve content to only registered users. There are plenty of users logged in, each having different session IDs. A php page queries the DB and finds "XYZ" that user "A" wants and generates HTML output. Now if user "B" (with a different session ID) after sometime wants the same "XYZ", how to make Nginx to deliver from cache without making the php page to query the db again. Has anybody done this before? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222987,222987#msg-222987 From edho at myconan.net Sun Feb 26 06:10:00 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 26 Feb 2012 13:10:00 +0700 Subject: Regular Expression global redirect In-Reply-To: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4F49CCB8.8040407@myconan.net> On 2012-02-26 10:59, altiamge wrote: > I'm using nginx as a reverse proxy for about 2000 websites. I'm trying > to find a good way to redirect all www traffic to nonwww addresses. I > don't want to have a separate entry for every domain...just a global > redirect in the server block preferably. I found lots of examples to do > this one domain at a time, but does anyone have any suggestions on how > to do it for the whole server? > This is what I'm using: server { listen 80; server_name ~^www\.(?.+)$; rewrite ^ $scheme://$domain$request_uri? permanent; } From edho at myconan.net Sun Feb 26 06:19:25 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 26 Feb 2012 13:19:25 +0700 Subject: Regular Expression global redirect In-Reply-To: <87ehti5hqz.wl%appa@perusio.net> References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> <87ehti5hqz.wl%appa@perusio.net> Message-ID: 2012/2/26 Ant?nio P. P. Almeida : > server { > ? ?server_name ^~www\.(?.*)$; > ? ?return 301 http://$domain; > } > Where can I read the documentation for this? 
It doesn't seem to be mentioned in nginx.org docs and nginx wiki From nginx-forum at nginx.us Sun Feb 26 07:27:12 2012 From: nginx-forum at nginx.us (altiamge) Date: Sun, 26 Feb 2012 02:27:12 -0500 (EST) Subject: Regular Expression global redirect In-Reply-To: <87ehti5hqz.wl%appa@perusio.net> References: <87ehti5hqz.wl%appa@perusio.net> Message-ID: <78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org> I'm not able to get either one of these to work. I just upgraded to nginx 1.0.12 just to make sure my version wasn't an issue. I also checked my PCRE version. # pcretest PCRE version 6.6 06-Feb-2006 Here are the errors I'm getting with each example: Example 1 ----------------------------------------------- server { listen 80; server_name ~^www\.(?.+)$; rewrite ^ $scheme://$domain$request_uri? permanent; } ## Error: [emerg] pcre_compile() failed: unrecognized character after (?< in "^www\.(?.+)$" at "domain>.+)$" Example 2 ------------------------------------------------ server { server_name ^~www\.(?.*)$; return 301 http://$domain; } server { server_name ^~(?[^\.]*)\.(?[^\.]*)$; location / { proxy_pass http://websites; } } ## Error: nginx: [emerg] unknown "domain" variable thanks, altimage Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222984,222990#msg-222990 From edho at myconan.net Sun Feb 26 07:29:02 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 26 Feb 2012 14:29:02 +0700 Subject: Regular Expression global redirect In-Reply-To: <78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org> References: <87ehti5hqz.wl%appa@perusio.net> <78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: 2012/2/26 altiamge : > I'm not able to get either one of these to work. I just upgraded to > nginx 1.0.12 just to make sure my version wasn't an issue. I also > checked my PCRE version. > > # pcretest > PCRE version 6.6 06-Feb-2006 > Your pcre is too old. I believe the workaround is by appending P before the capture name. server_name ^~www\.(?P.*)$; From varia at e-healthexpert.org Sun Feb 26 12:18:41 2012 From: varia at e-healthexpert.org (Mark Alan) Date: Sun, 26 Feb 2012 12:18:41 +0000 Subject: Regular Expression global redirect In-Reply-To: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120226121841.0db1fefe@e-healthexpert.org> On Sat, 25 Feb 2012 22:59:50 -0500 (EST), "altiamge" wrote: > I'm using nginx as a reverse proxy for about 2000 websites. I'm trying > to find a good way to redirect all www traffic to nonwww addresses. > here's my server block: > > server { > listen 80; > server_name _; > > location / { > proxy_pass http://websites; > } > } Would this help? For older PCRE's: # for http server { listen 80; server_name ~^www\.(?P.+)$; return 301 $scheme://$domain$request_uri; } #for https (change 'sslcert' for your own certificate name) server { listen 443 ssl; server_name ~^www\.(?P.+)$; ssl_certificate /etc/ssl/certs/sslcert.crt; ssl_certificate_key /etc/ssl/private/sslcert.key; return 301 $scheme://$domain$request_uri; } For newer PCRE's: Instead of ?P use ? # Note: in 'return XXX' 301 is like rewrite...permanent and 302 like rewrite...redirect M. From appa at perusio.net Sun Feb 26 14:23:11 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. 
Almeida) Date: Sun, 26 Feb 2012 15:23:11 +0100 Subject: How to Cache dynamic content using Nginx when sessions are involved?How to Cache dynamic content using Nginx when sessions are involved? How to Cache dynamic content using Nginx when sessions are involved? How to Cache dynamic content using Nginx when ses In-Reply-To: References: Message-ID: <87d39165dc.wl%appa@perusio.net> On 26 Fev 2012 06h13 CET, nginx-forum at nginx.us wrote: > This site explains how to create static files from dynamic content > using Nginx. > http://mark.ossdl.de/2009/07/nginx-to-create-static-files-from-dynamic-content/ > > My question is this: can I achieve the same if login sessions are > involved. ie. when I want to serve content to only registered users > and not otherwise. So how to overcome sessions when it comes to > caching and finally to use the cache next time for another session? > > Detailed scenario: > > The goal of my website is to serve content to only registered users. > > There are plenty of users logged in, each having different session > IDs. > > A php page queries the DB and finds "XYZ" that user "A" wants and > generates HTML output. > > Now if user "B" (with a different session ID) after sometime wants > the same "XYZ", how to make Nginx to deliver from cache without > making the php page to query the db again. You mean you want registered users to be served the same page regardless of their respective session ID? > Has anybody done this before? Thanks in advance. If that's the case then it's simple. From the given example: At the http level: map $uri $cache_uri { default $uri; / index.html; } At the server level: location ~ ^/200[0-8]/[01][0-9]/ { root /tmp/nginx/blog/fetch; expires 30d; error_page 403 404 = /fetch$uri; } location ^~ /fetch/ { internal; proxy_pass http://127.0.0.1:80; proxy_store /tmp/nginx/blog$cache_uri; proxy_store_access user:rw group:rw all:r; } --- appa From appa at perusio.net Sun Feb 26 14:39:38 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Sun, 26 Feb 2012 15:39:38 +0100 Subject: Regular Expression global redirect In-Reply-To: References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> <87ehti5hqz.wl%appa@perusio.net> Message-ID: <87bool64lx.wl%appa@perusio.net> On 26 Fev 2012 07h19 CET, edho at myconan.net wrote: > 2012/2/26 Ant?nio P. P. Almeida : >> server { >> ? ?server_name ^~www\.(?.*)$; >> ? ?return 301 http://$domain; >> } >> > > Where can I read the documentation for this? It doesn't seem to be > mentioned in nginx.org docs and nginx wiki AFAIK it's undocumented. You can use return for a lot of things. I hardly ever use rewrite anymore. There are situations where it still applies, but they're not the majority. IMHO using return is more Nginx like, while rewrite harks back to Apache's mod_rewrite and its "reverse" logic. Using return you can make a poorman's web service, for example: location /ws-test { return 200 "{uri: $uri, 'service name': 'this is a service'}\n"; } If you do a capture in the location you can use the captures in the URI you give return as the second argument. The default status is 302. AFAIK it doesn't support named locations redirects. Hence the usual idiom of returning an error status and then using error_page for the redirect with a named location. It was late and I forgot the $request_uri :( Also for old PCRE versions the ? has to be replaced by ?P. Both things that were already addressed in the thread. 
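To illustrate the earlier point about captures, here is a minimal sketch (the /old/ and /new/ prefixes and the $rest capture are made-up names, purely for illustration): a named capture taken in the location regex can be reused directly in the URI given to return:

    location ~ ^/old/(?P<rest>.*)$ {
        # $rest holds whatever matched after /old/ and is reused in the redirect target
        return 301 /new/$rest;
    }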
--- appa From edho at myconan.net Sun Feb 26 14:42:21 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 26 Feb 2012 21:42:21 +0700 Subject: Regular Expression global redirect In-Reply-To: <87bool64lx.wl%appa@perusio.net> References: <028bfd3cf29141809bf5b5b537dc0112.NginxMailingListEnglish@forum.nginx.org> <87ehti5hqz.wl%appa@perusio.net> <87bool64lx.wl%appa@perusio.net> Message-ID: 2012/2/26 Ant?nio P. P. Almeida : > > location /ws-test { > ? ?return 200 "{uri: $uri, 'service name': 'this is a service'}\n"; > } > Is this some kind of magic :O where's the documentation :( From gpakosz at yahoo.fr Sun Feb 26 18:45:54 2012 From: gpakosz at yahoo.fr (=?ISO-8859-1?Q?Gr=E9gory_Pakosz?=) Date: Sun, 26 Feb 2012 19:45:54 +0100 Subject: error_page directive, how does context affect error handling behavior? Message-ID: Hello, I'm using nginx 1.1.14 on Debian Squeeze. In /etc/nginx.conf I wrote: error_page 404 http://google.com; That error_page directive is written in the http context. So far so good, when I visit http://mydomain.com/foo.txt and foo.txt doesn't exist in root's folder, I'm getting redirected to http://google.com as per the 404 error handling. However, if in my server context, inside /etc/sites-available/default, I write: error_page 418 http://nginx.org; Then 404 error handling from http's context doesn't work anymore. When I visit http://mydomain.com/foo.txt I'm not redirected to http://google.com as before but I see nginx's default 404 error page "404 Not found". here is my nginx.conf: http://pastebin.com/5nffYFvK and my default site conf: http://pastebin.com/CYcEFnwx Is there something I overlooked about nginx's error_page directive? Thank you in advance for any help. Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpakosz at yahoo.fr Sun Feb 26 19:43:07 2012 From: gpakosz at yahoo.fr (=?ISO-8859-1?Q?Gr=E9gory_Pakosz?=) Date: Sun, 26 Feb 2012 20:43:07 +0100 Subject: custom 404 error_page seems to conflict with access_log off Message-ID: Hello, In my server block, I configured a custom 404 error page and I tried to disable access log for /favicon.ico error_page 404 /404.html; location = /favicon.ico { access_log off; } http://pastebin.com/39qXWuq2 It seems that both conflicts. When favicon.ico is present: curl -I http://mydomain.com/favicon.ico reports 200 status code and nothing gets logged into my access log When favicon.ico is missing: curl -I http://mydomain.com/favicon.ico reports 404 status code curl http://mydomain.com/favicon.ico displays my custom 404 html page and strangely the 404 error gets logged into my access log: "HEAD /favicon.ico HTTP/1.1" 404 0 "-" "curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3" When I comment out #error_page 404 /404.html; when favicon.ico is missing: curl -I http://mydomain.com/favicon.ico reports 404 status code curl http://mydomain.com/favicon.ico displays nginx's default 404 page nothing gets logged into my access log Can someone explain me this behavior? Thank you, G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Feb 26 20:57:25 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Feb 2012 00:57:25 +0400 Subject: custom 404 error_page seems to conflict with access_log off In-Reply-To: References: Message-ID: <20120226205725.GA67687@mdounin.ru> Hello! 
On Sun, Feb 26, 2012 at 08:43:07PM +0100, Gr?gory Pakosz wrote: > Hello, > > In my server block, I configured a custom 404 error page and I tried to > disable access log for /favicon.ico > > error_page 404 /404.html; > location = /favicon.ico { > access_log off; > } > > http://pastebin.com/39qXWuq2 > > It seems that both conflicts. > > When favicon.ico is present: > curl -I http://mydomain.com/favicon.ico reports 200 status code and nothing > gets logged into my access log > > When favicon.ico is missing: > curl -I http://mydomain.com/favicon.ico reports 404 status code > curl http://mydomain.com/favicon.ico displays my custom 404 html page > and strangely the 404 error gets logged into my access log: "HEAD > /favicon.ico HTTP/1.1" 404 0 "-" "curl/7.19.7 (universal-apple-darwin10.0) > libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3" > > When I comment out #error_page 404 /404.html; when favicon.ico is missing: > curl -I http://mydomain.com/favicon.ico reports 404 status code > curl http://mydomain.com/favicon.ico displays nginx's default 404 page > nothing gets logged into my access log > > Can someone explain me this behavior? Requests are logged in a context of a location where processing ends. That is, if you have 404 error_page configured requests to a missing favicon.ico file are internally redirected to /404.html, and handled in an appropriate location, not in location = /favicon.ico where you have access_log switched off. Maxim Dounin From nginxyz at mail.ru Sun Feb 26 22:15:19 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 27 Feb 2012 02:15:19 +0400 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: References: Message-ID: 26 ??????? 2012, 22:49 ?? Gr?gory Pakosz : > Hello, > > I'm using nginx 1.1.14 on Debian Squeeze. > > In /etc/nginx.conf I wrote: > error_page 404 http://google.com; > > That error_page directive is written in the http context. > > So far so good, when I visit http://mydomain.com/foo.txt and foo.txt > doesn't exist in root's folder, I'm getting redirected to http://google.com as > per the 404 error handling. > > However, if in my server context, inside /etc/sites-available/default, I > write: > error_page 418 http://nginx.org; > > Then 404 error handling from http's context doesn't work anymore. When I > visit http://mydomain.com/foo.txt I'm not redirected to http://google.com as > before but I see nginx's default 404 error page "404 Not found". > > here is my nginx.conf: http://pastebin.com/5nffYFvK > and my default site conf: http://pastebin.com/CYcEFnwx > > Is there something I overlooked about nginx's error_page directive? The error_page directives are inherited if and only if there is absolutely NO error_page directive on the current level. Moreover, whenever you use the error_page directive you are doing two things: 1) explicitly setting error pages for the specified error codes 2) implicitly resetting error pages for all the other error codes that are not explicitly set on the current level to their default values So your "error_page 418 http://nginx.org;" directive not only set the error page for error code 418, but also reset the error pages for error code 404 and all the other error codes to their default values. Max -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginxyz at mail.ru Sun Feb 26 22:17:08 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 27 Feb 2012 02:17:08 +0400 Subject: custom 404 error_page seems to conflict with access_log off In-Reply-To: <20120226205725.GA67687@mdounin.ru> References: <20120226205725.GA67687@mdounin.ru> Message-ID: 27 ??????? 2012, 00:57 ?? Maxim Dounin : > Hello! > > On Sun, Feb 26, 2012 at 08:43:07PM +0100, Gr?gory Pakosz wrote: > > > Hello, > > > > In my server block, I configured a custom 404 error page and I tried to > > disable access log for /favicon.ico > > > > error_page 404 /404.html; > > location = /favicon.ico { > > access_log off; > > } > > > > http://pastebin.com/39qXWuq2 > > > > It seems that both conflicts. > > > > When favicon.ico is present: > > curl -I http://mydomain.com/favicon.ico reports 200 status code and nothing > > gets logged into my access log > > > > When favicon.ico is missing: > > curl -I http://mydomain.com/favicon.ico reports 404 status code > > curl http://mydomain.com/favicon.ico displays my custom 404 html page > > and strangely the 404 error gets logged into my access log: "HEAD > > /favicon.ico HTTP/1.1" 404 0 "-" "curl/7.19.7 (universal-apple-darwin10.0) > > libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3" > > > > When I comment out #error_page 404 /404.html; when favicon.ico is missing: > > curl -I http://mydomain.com/favicon.ico reports 404 status code > > curl http://mydomain.com/favicon.ico displays nginx's default 404 page > > nothing gets logged into my access log > > > > Can someone explain me this behavior? > > Requests are logged in a context of a location where processing ends. > > That is, if you have 404 error_page configured requests to a > missing favicon.ico file are internally redirected to /404.html, > and handled in an appropriate location, not in location = > /favicon.ico where you have access_log switched off. You may want to use the following to minimize the overhead caused by useless requests for the missing favicon.ico file: location = /favicon.ico { access_log off; return 204; } Max From appa at perusio.net Sun Feb 26 23:07:51 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 27 Feb 2012 00:07:51 +0100 Subject: custom 404 error_page seems to conflict with access_log off In-Reply-To: References: Message-ID: <8762et5h2w.wl%appa@perusio.net> On 26 Fev 2012 20h43 CET, gpakosz at yahoo.fr wrote: This is what I usually employ for battling the dreaded missing favicon error. location = /favicon.ico { access_log off; try_files $uri @empty; } location @empty { empty_gif; # send a in-memory 1x1 transparent GIF } If you prefer to just send a 204, do: location = /favicon.ico { access_log off; try_files $uri =204; } HTH, --- appa From nginx-forum at nginx.us Sun Feb 26 23:39:16 2012 From: nginx-forum at nginx.us (altiamge) Date: Sun, 26 Feb 2012 18:39:16 -0500 (EST) Subject: Regular Expression global redirect In-Reply-To: <78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org> References: <87ehti5hqz.wl%appa@perusio.net> <78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org> I still cant seem to get this working. I upgraded my PCRE libraries and recompiled/reinstalled a fresh nginx 1.0.12 # pcrecheck PCRE version 8.21 2011-12-12 Here is my server sections. Notice I have 2 server sections...the 1st section catches the WWW site and redirects it to the 2nd, non-www...right? 
I'm still getting: nginx: [emerg] unknown "domain" variable

server {
    listen 80;
    server_name ^~www\.(?<domain>.*)$;
    return 301 http://$domain;
}

server {
    listen 80;
    server_name ^~(?<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;
    location / {
        proxy_pass http://websites;
    }
}

When I try it with the P, everything (www and nonwww) get a white 301 nginx page:
server { listen 80; server_name ^~www\.(?P<domain>.*)$; return 301 $scheme://$domain$request_uri;; }

server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://websites;
    }
}

I tried making server_name in the 2nd block:
server_name ^~(?P<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;

but I get this:
nginx: [emerg] invalid server name or wildcard "^~(?p<domain_name>[^\.]*)\.(?<tld>[^\.]*)$" on 0.0.0.0:80
(fyi, the error has a lowercase p, server_name has it capitalized)

Is there some other dependency I'm missing or am I just mangling the syntax?

thanks,
altimage

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222984,223018#msg-223018

From appa at perusio.net  Mon Feb 27 00:21:26 2012
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Mon, 27 Feb 2012 01:21:26 +0100
Subject: Regular Expression global redirect
In-Reply-To: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org>
References: <87ehti5hqz.wl%appa@perusio.net>
	<78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org>
	<6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <874nud5do9.wl%appa@perusio.net>

On 27 Fev 2012 00h39 CET, nginx-forum at nginx.us wrote:

> I still cant seem to get this working. I upgraded my PCRE libraries
> and recompiled/reinstalled a fresh nginx 1.0.12
>
> # pcrecheck
> PCRE version 8.21 2011-12-12
>
> Here is my server sections. Notice I have 2 server sections...the
> 1st section catches the WWW site and redirects it to the 2nd,
> non-www...right? I'm still getting: nginx: [emerg] unknown "domain"
> variable
>
> server {
>     listen 80;
>     server_name ^~www\.(?<domain>.*)$;
>     return 301 http://$domain;
> }
>
> server {
>     listen 80;
>     server_name ^~(?<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;
>     location / {
>         proxy_pass http://websites;
>     }
> }
>
> When I try it with the P, everything (www and nonwww) get a white
> 301 nginx page: server { listen 80; server_name
> ^~www\.(?P<domain>.*)$; return 301 $scheme://$domain$request_uri;; }
>
> server {
>     listen 80;
>     server_name _;
>     location / {
>         proxy_pass http://websites;
>     }
> }
>
> I tried making server_name in the 2nd block:
> server_name ^~(?P<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;
>
> but I get this:
> nginx: [emerg] invalid server name or wildcard
> "^~(?p<domain_name>[^\.]*)\.(?<tld>[^\.]*)$" on 0.0.0.0:80
> (fyi, the error has a lowercase p, server_name has it capitalized)
>
> Is there some other dependency I'm missing or am I just mangling the
> syntax?

Ok. It seems that your PCRE library has problems with the non P syntax
for named captures. So you cannot mix both.

server {
    listen 80;
    server_name ^~www\.(?P<domain>.*)$;
    return 301 $scheme://$domain$request_uri;
}

server {
    listen 80;
    server_name ^~(?P<domain_name>[^\.]*)\.(?P<tld>[^\.]*)$;
    location / {
        proxy_pass http://$domain_name.$tld;
    }
}

This should work [1].

--- appa

[1] http://nginx.org/en/docs/http/server_names.html

From nginx-forum at nginx.us  Mon Feb 27 00:26:24 2012
From: nginx-forum at nginx.us (altiamge)
Date: Sun, 26 Feb 2012 19:26:24 -0500 (EST)
Subject: Regular Expression global redirect
In-Reply-To: <874nud5do9.wl%appa@perusio.net>
References: <874nud5do9.wl%appa@perusio.net>
Message-ID: 

No Luck...I'm still getting this: nginx: [emerg] unknown "domain" variable.
thanks,
altimage

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222984,223020#msg-223020

From appa at perusio.net  Mon Feb 27 00:41:11 2012
From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida)
Date: Mon, 27 Feb 2012 01:41:11 +0100
Subject: Regular Expression global redirect
In-Reply-To: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org>
References: <87ehti5hqz.wl%appa@perusio.net>
	<78b5490502391577fd9de5095a4362d8.NginxMailingListEnglish@forum.nginx.org>
	<6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <871uph5crc.wl%appa@perusio.net>

On 27 Fev 2012 00h39 CET, nginx-forum at nginx.us wrote:

> I still cant seem to get this working. I upgraded my PCRE libraries
> and recompiled/reinstalled a fresh nginx 1.0.12
>
> # pcrecheck
> PCRE version 8.21 2011-12-12
>
> Here is my server sections. Notice I have 2 server sections...the
> 1st section catches the WWW site and redirects it to the 2nd,
> non-www...right? I'm still getting: nginx: [emerg] unknown "domain"
> variable
>
> server {
>     listen 80;
>     server_name ^~www\.(?<domain>.*)$;
>     return 301 http://$domain;
> }
>
> server {
>     listen 80;
>     server_name ^~(?<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;
>     location / {
>         proxy_pass http://websites;
>     }
> }
>
> When I try it with the P, everything (www and nonwww) get a white
> 301 nginx page: server { listen 80; server_name
> ^~www\.(?P<domain>.*)$; return 301 $scheme://$domain$request_uri;; }
>
> server {
>     listen 80;
>     server_name _;
>     location / {
>         proxy_pass http://websites;
>     }
> }
>
> I tried making server_name in the 2nd block:
> server_name ^~(?P<domain_name>[^\.]*)\.(?<tld>[^\.]*)$;
>
> but I get this:
> nginx: [emerg] invalid server name or wildcard
> "^~(?p<domain_name>[^\.]*)\.(?<tld>[^\.]*)$" on 0.0.0.0:80
> (fyi, the error has a lowercase p, server_name has it capitalized)
>
> Is there some other dependency I'm missing or am I just mangling the
> syntax?

Oops. I erroneously switched the '^' and '~'. It's ~^ not ^~. Solly :(

Ok. It seems that your PCRE library has problems with the non P syntax
for named captures. So you cannot mix both.

server {
    listen 80;
    server_name ~^www\.(?P<domain>.*)$;
    return 301 $scheme://$domain$request_uri;
}

server {
    listen 80;
    server_name ~^(?P<domain_name>[^\.]*)\.(?P<tld>[^\.]*)$;
    location / {
        proxy_pass http://$domain_name.$tld;
    }
}

This should work [1].

--- appa

[1] http://nginx.org/en/docs/http/server_names.html

From nginx-forum at nginx.us  Mon Feb 27 01:15:17 2012
From: nginx-forum at nginx.us (altiamge)
Date: Sun, 26 Feb 2012 20:15:17 -0500 (EST)
Subject: Regular Expression global redirect
In-Reply-To: 
References: <874nud5do9.wl%appa@perusio.net>
Message-ID: 

That did the trick! Thank you so much for all your help.

altimage

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222984,223022#msg-223022

From nginx-forum at nginx.us  Mon Feb 27 01:15:24 2012
From: nginx-forum at nginx.us (edo888)
Date: Sun, 26 Feb 2012 20:15:24 -0500 (EST)
Subject: How to make a post subrequest with response body from parent?.
In-Reply-To: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> References: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7835b70230e5ce63238dbe5de5392b9b.NginxMailingListEnglish@forum.nginx.org> Digging around I was able to get this code so far: in my module's body filter I do this: if(ngx_http_subrequest(r, &uri, &query_string, &sr, NULL, 0) != NGX_OK) return NGX_ERROR; // adjust subrequest ngx_str_t method_name = ngx_string("POST"); sr->method = NGX_HTTP_POST; sr->method_name = method_name; First of all it still uses GET with fastcgi_pass. If I do proxy_pass instead the method is POST. I think it is a bug in fastcgi module. The above code should be OK so far, since it works with proxy_pass. Now I don't know how to pass post data to subrequest. I have tried to create a buffer chain with sample data in it and using sr->request_body->bufs, however it doesn't work. Am I right that if I populate sr->request_body->bufs they will be written to subrequest request body? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222983,223023#msg-223023 From nginx-forum at nginx.us Mon Feb 27 01:27:05 2012 From: nginx-forum at nginx.us (edo888) Date: Sun, 26 Feb 2012 20:27:05 -0500 (EST) Subject: How to make a post subrequest with response body from parent?. In-Reply-To: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> References: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53cc3be86ea9751f8536d00eb53b9da8.NginxMailingListEnglish@forum.nginx.org> May be I need to use ngx_http_write_request_body(sr, sr->request_body->buffs)? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222983,223024#msg-223024 From luci at Conexim.com.au Mon Feb 27 06:18:00 2012 From: luci at Conexim.com.au (Lucian D. Kafka) Date: Mon, 27 Feb 2012 06:18:00 +0000 Subject: How to log virtual server name Message-ID: Hi All, We have a virtual hosting setup where multiple domains are delegated to the same server IP address, and Nginx acts as a caching proxy server in front of the web server. I am encountering 2 issues: 1. Cannot set more than 1 server name - Nginx is ignoring multiple server names defined on the same IP with a warning message 2. Cannot log the virtual server name (Apache %v equivalent) to the access_log. Any variable in the custom log format - ie. $server_name, $host, etc does not log the Host headers, but rather the server name string set with the server_name directive (if matched). This makes it impossible to have a combined log file for different sites set in a virtual hosting environment on the same IP address. Any ideas how to get this working? Thank you, Luci -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Feb 27 06:22:39 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 27 Feb 2012 10:22:39 +0400 Subject: How to log virtual server name In-Reply-To: References: Message-ID: <20120227062239.GA24213@nginx.com> On Mon, Feb 27, 2012 at 06:18:00AM +0000, Lucian D. Kafka wrote: > Hi All, > > We have a virtual hosting setup where multiple domains are delegated to the same server IP address, and Nginx acts as a caching proxy server in front of the web server. > > I am encountering 2 issues: > > 1. Cannot set more than 1 server name - Nginx is ignoring multiple server names defined on the same IP with a warning message This may help: http://nginx.org/en/docs/http/server_names.html > 2. 
Cannot log the virtual server name (Apache %v equivalent) to the access_log. Any variable in the custom log format - ie. $server_name, $host, etc does not log the Host headers, but rather the server name string set with the server_name directive (if matched). This makes it impossible to have a combined log file for different sites set in a virtual hosting environment on the same IP address. $http_host. -- Igor Sysoev From gelonida at gmail.com Sat Feb 25 20:59:04 2012 From: gelonida at gmail.com (Gelonida N) Date: Sat, 25 Feb 2012 21:59:04 +0100 Subject: easy way to log ssl client-certificate errors with nginx? Message-ID: YTitle says it all. I'd like to have logs about rejected client-certificates (expiration / wrong site-name, etc) What would be the right way to get such log files From piotr.sikora at frickle.com Mon Feb 27 06:25:47 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Mon, 27 Feb 2012 07:25:47 +0100 Subject: How to log virtual server name In-Reply-To: References: Message-ID: <52881E36774644C4A0650731A78A5290@Desktop> Hi, > 1. Cannot set more than 1 server name ? Nginx is ignoring multiple > server names defined on the same IP with a warning message http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name You want to specify multiple server names within one directive, do not use multiple directives. > 2. Cannot log the virtual server name (Apache %v equivalent) to the > access_log. Any variable in the custom log format ? ie. $server_name, > $host, etc does not log the Host headers, but rather the server name > string set with the server_name directive (if matched). This makes it > impossible to have a combined log file for different sites set in a > virtual hosting environment on the same IP address. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables "$host" works just fine, you must have tested it wrong. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From nginxyz at mail.ru Mon Feb 27 06:33:39 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Mon, 27 Feb 2012 10:33:39 +0400 Subject: Regular Expression global redirect In-Reply-To: <871uph5crc.wlappa@perusio.net> References: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org> <87ehti5hqz.wl%appa@perusio.net> <871uph5crc.wlappa@perusio.net> Message-ID: 27 ??????? 2012, 04:41 ?? Ant?nio P. P. Almeida : > On 27 Fev 2012 00h39 CET, nginx-forum at nginx.us wrote: > > > I still cant seem to get this working. I upgraded my PCRE libraries > > and recompiled/reinstalled a fresh nginx 1.0.12 > > > > # pcrecheck > > PCRE version 8.21 2011-12-12 > > > > Here is my server sections. Notice I have 2 server sections...the > > 1st section catches the WWW site and redirects it to the 2nd, > > non-www...right? 
I'm still getting: nginx: [emerg] unknown "domain" > > variable > > > > server { > > listen 80; > > server_name ^~www\.(?.*)$; > > return 301 http://$domain; > > } > > > > server { > > listen 80; > > server_name ^~(?[^\.]*)\.(?[^\.]*)$; > > location / { > > proxy_pass http://websites; > > } > > } > > > > When I try it with the P, everything (www and nonwww) get a white > > 301 nginx page: server { listen 80; server_name > > ^~www\.(?P.*)$; return 301 $scheme://$domain$request_uri;; } > > > > server { > > listen 80; > > server_name _; > > location / { > > proxy_pass http://websites; > > } > > } > > > > I tried making server_name in the 2nd block: > > server_name ^~(?P[^\.]*)\.(?[^\.]*)$; > > > > but I get this: > > nginx: [emerg] invalid server name or wildcard > > "^~(?p[^\.]*)\.(?[^\.]*)$" on 0.0.0.0:80 > > (fyi, the error has a lowercase p, server_name has it capitalized) > > > > Is there some other dependency I'm missing or am I just mangling the > > syntax? > > Oops. I erroneously switched the '^' and '~'. It's ~^ not ^~. Solly :( > > Ok. It seems that your PCRE library has problems with the non P syntax > for named captures. So you cannot mix both. > > server { > listen 80; > server_name ~^www\.(?P.*)$; > return 301 $scheme://$domain$request_uri; > } > > server { > listen 80; > server_name ~^(?P[^\.]*)\.(?P[^\.]*)$; > location / { > proxy_pass http://$domain_name.$tld; > } > } > > This should work [1]. Your solution, while syntactically correct, is wrong by design. What you created there is an open anonymizing proxy that will pass any request from anyone to any host:port combination that contains only the domain name and the TLD, if a functional resolver has been set up using the resolver directive. Take a guess what this would do: $ nc frontend 80 GET /a/clue HTTP/1.0 Host: fbi.gov:22 You should never pass unsanitized user input to pass_proxy, unless you want people to abuse your open anonymizing proxy for illegal activities that will get you in trouble. Good luck convincing the FBI that your incompetence was the real culprit. Moreover, the frontend server will pass all requests for "http://$domain_name.$tld" that would have normally been passed on to the backend server on to itself to create a nasty loop, unless you happen to have split horizon DNS set up with the resolver set to the internal DNS server that maps the value of "$domain_name.$tld" to an internal IP. But if you had that kind of setup, you'd use it to do the mapping in the first place instead of doing what you've been trying to do. This is what your solution does if a functional resolver has been set up: http://www.domain.tld -> status code 301 with "Location: http://domain.tld" http://own-domain.tld -> proxy_pass LOOP to http://own-domain.tld http://foreign-domain.tld:port -> OPEN ANONYMIZING PROXY to foreign-domain.tld:port If no resolver has been set up, proxy_pass will fail due to being unable to resolve the value of "$domain.$tld" for any request that contains only the domain name and the TLD. 
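As a minimal sketch of that failure mode (the resolver address below is only a placeholder): when proxy_pass contains variables, nginx has to resolve the resulting hostname at run time and therefore needs an explicit resolver, otherwise the request simply errors out:

    resolver 192.0.2.53;                       # placeholder address; needed for run-time name resolution
    location / {
        proxy_pass http://$domain_name.$tld;   # hostname is only known at request time
    }

Which, as explained above, is exactly the kind of setup you do not want to expose to arbitrary Host headers.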
Here's one of the correct ways to do what the OP wants to do: map $http_host $wwwless_http_host { hostnames; default $http_host; ~^www\.(?P.*)$ $domain; } server { listen 80 default_server; server_name _; location / { proxy_set_header Host $wwwless_http_host; proxy_pass http://backend; } } It would be a good idea to also allow only hosts and domains that you actually host, which could be done like this: map $http_host $own_http_host { hostnames; default 0; include nginx.own-domains.map; } server { listen 80 default_server; server_name _; if ($own_http_host = 0) { # Not one of our hosts / domains, so terminate the connection return 444; } location / { proxy_set_header Host $own_http_host; proxy_pass http://backend; } } The nginx.own-domains.map file would contain entries such as: .domain.org domain.org; # map *.domain.org to domain.org www.another.net another.net; # map only www.another.net to another.net This file could be generated automatically from DNS zone files, so it would be easy to maintain. Max From agentzh at gmail.com Mon Feb 27 09:24:04 2012 From: agentzh at gmail.com (agentzh) Date: Mon, 27 Feb 2012 17:24:04 +0800 Subject: How to merge subrequest header. In-Reply-To: <0b631a50230fc5a43ff193064ff6539a.NginxMailingListEnglish@forum.nginx.org> References: <0b631a50230fc5a43ff193064ff6539a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sun, Feb 26, 2012 at 8:02 AM, edo888 wrote: > Can you please share some piece of code. I want to make a post > subrequest to send the response body to fastcgi and filter it there. > See the detailed discussions here: https://github.com/agentzh/echo-nginx-module/issues/8 Best regards, -agentzh From appa at perusio.net Mon Feb 27 10:13:29 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Mon, 27 Feb 2012 11:13:29 +0100 Subject: Regular Expression global redirect In-Reply-To: References: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org> <87ehti5hqz.wl%appa@perusio.net> <871uph5crc.wlappa@perusio.net> Message-ID: <87wr784m9i.wl%appa@perusio.net> On 27 Fev 2012 07h33 CET, nginxyz at mail.ru wrote: > Your solution, while syntactically correct, is wrong by design. > What you created there is an open anonymizing proxy that will pass > any request from anyone to any host:port combination that contains > only the domain name and the TLD, if a functional resolver has been > set up using the resolver directive. Take a guess what this would > do: This deals with illegal Host headers: server { listen 80 default_server; server_name _; server_name_in_redirect off; return 444; } --- appa From vicosoft at gmail.com Mon Feb 27 10:18:48 2012 From: vicosoft at gmail.com (vicosoft at gmail.com) Date: Mon, 27 Feb 2012 11:18:48 +0100 Subject: Reverse proxy, subdomain Message-ID: Hi, Currently I have in operation a reverse proxy with nginx. All good. The problem is that now need to include in the configuration, the reverse proxy, a subdomain that points to another server. As I can do? You have some sample configuration for this? My apologies for bad English. Thanks! --- *#* Jose Antonio Vico Palomino E-Mail: vicosoft at gmail.com *Algunos Blogs en los que participo:* *Coraz?n de La Mancha. - www.vicosoft.org **iCloud Me. - icloudme.es * *Mobile Me. - www.mobileme.es* *ManchegoX. - www.manchegox.org* *Todos con Software Libre. 
- www.todosconsoftwarelibre.es* * * * * Referencias y medios sociales *Twitter: http://twitter.com/vicosoft* *LinkedIN : http://es.linkedin.com/in/javico* *Facebook : http://www.facebook.com/Quijote* ------------------------------ P Antes de imprimir este mensaje o sus ficheros adjuntos, por favor compruebe que es verdaderamente necesario. El Medio Ambiente es cosa de todos. Este mensaje se dirige exclusivamente a su destinatario y puede contener informaci?n privilegiada o confidencial. Si no es vd. el destinatario indicado, queda notificado de que la utilizaci?n, divulgaci?n y/o copia sin autorizaci?n est? prohibida en virtud de la legislaci?n vigente. Si ha recibido este mensaje por error, le rogamos que nos lo comunique inmediatamente por esta misma v?a y proceda a su destrucci?n. This message is intended exclusively for its addressee and may contain information that is CONFIDENTIAL and protected by professional privilege. If you are not the intended recipient you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited by law. If this message has been received in error, please immediately notify us via e-mail and delete it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vicosoft at gmail.com Mon Feb 27 10:44:26 2012 From: vicosoft at gmail.com (vicosoft at gmail.com) Date: Mon, 27 Feb 2012 11:44:26 +0100 Subject: Reverse proxy, subdomain In-Reply-To: References: Message-ID: This is my actualy nginx.conf http://pastebin.com/VzVXftck thanks --- *#* Jose Antonio Vico Palomino E-Mail: vicosoft at gmail.com *Algunos Blogs en los que participo:* *Coraz?n de La Mancha. - www.vicosoft.org **iCloud Me. - icloudme.es * *Mobile Me. - www.mobileme.es* *ManchegoX. - www.manchegox.org* *Todos con Software Libre. - www.todosconsoftwarelibre.es* * * * * Referencias y medios sociales *Twitter: http://twitter.com/vicosoft* *LinkedIN : http://es.linkedin.com/in/javico* *Facebook : http://www.facebook.com/Quijote* ------------------------------ P Antes de imprimir este mensaje o sus ficheros adjuntos, por favor compruebe que es verdaderamente necesario. El Medio Ambiente es cosa de todos. Este mensaje se dirige exclusivamente a su destinatario y puede contener informaci?n privilegiada o confidencial. Si no es vd. el destinatario indicado, queda notificado de que la utilizaci?n, divulgaci?n y/o copia sin autorizaci?n est? prohibida en virtud de la legislaci?n vigente. Si ha recibido este mensaje por error, le rogamos que nos lo comunique inmediatamente por esta misma v?a y proceda a su destrucci?n. This message is intended exclusively for its addressee and may contain information that is CONFIDENTIAL and protected by professional privilege. If you are not the intended recipient you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited by law. If this message has been received in error, please immediately notify us via e-mail and delete it. 2012/2/27 vicosoft at gmail.com > Hi, > > Currently I have in operation a reverse proxy with nginx. All good. > > The problem is that now need to include in the configuration, the reverse > proxy, a subdomain that points to another server. > > As I can do? You have some sample configuration for this? > > My apologies for bad English. > > Thanks! > --- > *#* Jose Antonio Vico Palomino > E-Mail: vicosoft at gmail.com > > *Algunos Blogs en los que participo:* > *Coraz?n de La Mancha. - www.vicosoft.org > **iCloud Me. 
- icloudme.es * > *Mobile Me. - www.mobileme.es* > *ManchegoX. - www.manchegox.org* > *Todos con Software Libre. - www.todosconsoftwarelibre.es* > * > * > * > * > Referencias y medios sociales > *Twitter: http://twitter.com/vicosoft* > *LinkedIN : http://es.linkedin.com/in/javico* > *Facebook : http://www.facebook.com/Quijote* > ------------------------------ > P Antes de imprimir este mensaje o sus ficheros adjuntos, por favor > compruebe que es verdaderamente necesario. El Medio Ambiente es cosa de > todos. > Este mensaje se dirige exclusivamente a su destinatario y puede contener > informaci?n privilegiada o confidencial. Si no es vd. el destinatario > indicado, queda notificado de que la utilizaci?n, divulgaci?n y/o copia sin > autorizaci?n est? prohibida en virtud de la legislaci?n vigente. Si ha > recibido este mensaje por error, le rogamos que nos lo comunique > inmediatamente por esta misma v?a y proceda a su destrucci?n. > This message is intended exclusively for its addressee and may contain > information that is CONFIDENTIAL and protected by professional privilege. > If you are not the intended recipient you are hereby notified that any > dissemination, copy or disclosure of this communication is strictly > prohibited by law. If this message has been received in error, please > immediately notify us via e-mail and delete it. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vicosoft at gmail.com Mon Feb 27 11:19:49 2012 From: vicosoft at gmail.com (vicosoft at gmail.com) Date: Mon, 27 Feb 2012 12:19:49 +0100 Subject: Reverse proxy, subdomain In-Reply-To: References: Message-ID: ok, I've solved using the IF statement. for example: if ($http_host = m.mua.es) { .... Thanks. --- *#* Jose Antonio Vico Palomino E-Mail: vicosoft at gmail.com *Algunos Blogs en los que participo:* *Coraz?n de La Mancha. - www.vicosoft.org **iCloud Me. - icloudme.es * *Mobile Me. - www.mobileme.es* *ManchegoX. - www.manchegox.org* *Todos con Software Libre. - www.todosconsoftwarelibre.es* * * * * Referencias y medios sociales *Twitter: http://twitter.com/vicosoft* *LinkedIN : http://es.linkedin.com/in/javico* *Facebook : http://www.facebook.com/Quijote* ------------------------------ P Antes de imprimir este mensaje o sus ficheros adjuntos, por favor compruebe que es verdaderamente necesario. El Medio Ambiente es cosa de todos. Este mensaje se dirige exclusivamente a su destinatario y puede contener informaci?n privilegiada o confidencial. Si no es vd. el destinatario indicado, queda notificado de que la utilizaci?n, divulgaci?n y/o copia sin autorizaci?n est? prohibida en virtud de la legislaci?n vigente. Si ha recibido este mensaje por error, le rogamos que nos lo comunique inmediatamente por esta misma v?a y proceda a su destrucci?n. This message is intended exclusively for its addressee and may contain information that is CONFIDENTIAL and protected by professional privilege. If you are not the intended recipient you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited by law. If this message has been received in error, please immediately notify us via e-mail and delete it. 2012/2/27 vicosoft at gmail.com > Hi, > > Currently I have in operation a reverse proxy with nginx. All good. > > The problem is that now need to include in the configuration, the reverse > proxy, a subdomain that points to another server. > > As I can do? You have some sample configuration for this? > > My apologies for bad English. 
> > Thanks! > --- > *#* Jose Antonio Vico Palomino > E-Mail: vicosoft at gmail.com > > *Algunos Blogs en los que participo:* > *Coraz?n de La Mancha. - www.vicosoft.org > **iCloud Me. - icloudme.es * > *Mobile Me. - www.mobileme.es* > *ManchegoX. - www.manchegox.org* > *Todos con Software Libre. - www.todosconsoftwarelibre.es* > * > * > * > * > Referencias y medios sociales > *Twitter: http://twitter.com/vicosoft* > *LinkedIN : http://es.linkedin.com/in/javico* > *Facebook : http://www.facebook.com/Quijote* > ------------------------------ > P Antes de imprimir este mensaje o sus ficheros adjuntos, por favor > compruebe que es verdaderamente necesario. El Medio Ambiente es cosa de > todos. > Este mensaje se dirige exclusivamente a su destinatario y puede contener > informaci?n privilegiada o confidencial. Si no es vd. el destinatario > indicado, queda notificado de que la utilizaci?n, divulgaci?n y/o copia sin > autorizaci?n est? prohibida en virtud de la legislaci?n vigente. Si ha > recibido este mensaje por error, le rogamos que nos lo comunique > inmediatamente por esta misma v?a y proceda a su destrucci?n. > This message is intended exclusively for its addressee and may contain > information that is CONFIDENTIAL and protected by professional privilege. > If you are not the intended recipient you are hereby notified that any > dissemination, copy or disclosure of this communication is strictly > prohibited by law. If this message has been received in error, please > immediately notify us via e-mail and delete it. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpakosz at yahoo.fr Mon Feb 27 14:19:38 2012 From: gpakosz at yahoo.fr (=?ISO-8859-1?Q?Gr=E9gory_Pakosz?=) Date: Mon, 27 Feb 2012 15:19:38 +0100 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: References: Message-ID: > > The error_page directives are inherited if and only if there is absolutely > NO error_page directive on the current level. Moreover, whenever you use > the error_page directive you are doing two things: 1) explicitly setting > error pages for the specified error codes 2) implicitly resetting error > pages for all the other error codes that are not explicitly set on the > current level to their default values So your "error_page 418 > http://nginx.org;" directive not only set the error page for error code > 418, but also reset the error pages for error code 404 and all the other > error codes to their default values. Max > > Hi Max, Thank you for your answer. May I suggest that explanation enters the wiki in the error_page directive section? Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From iqbal at aroussi.name Mon Feb 27 14:48:06 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 14:48:06 +0000 Subject: Compilation error on CentOS-5.7 Message-ID: Hi, Can you please help me overcome this problem. I'm trying to compile Nginx + nginx_udplog_module from source but I get this error: *cc1: warnings being treated as errors* I tried with *nginx-1.0.0* the version in our production servers and * nginx-1.0.12* the latest stable version. Best Regards *Compilation env:* CentOS release 5.7 (Final) *uname -a* Linux 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 x86_64 x86_64 GNU/Linux *gcc -v* Using built-in specs. 
Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --disable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux Thread model: posix gcc version 4.1.2 20080704 (Red Hat 4.1.2-51) *Configure options:* ./configure --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 *Output:* checking for OS + Linux 2.6.18-274.18.1.el5 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for accept4() ... not found checking for kqueue AIO support ... not found checking for Linux AIO support ... 
found configuring additional modules adding module in /root/nginx_udplog_module-1.0.0 + ngx_http_udplog_module was configured checking for PCRE library ... found checking for OpenSSL library ... found checking for zlib library ... found checking for libxslt ... found checking for libexslt ... found checking for GD library ... found checking for perl + perl version: v5.8.8 built for x86_64-linux-thread-multi + perl interpreter multiplicity found checking for GeoIP library ... found creating objs/Makefile checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for AF_INET6 ... found checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1 library is not used + using system zlib library nginx path prefix: "/usr/share/nginx" nginx binary file: "/usr/sbin/nginx" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/var/run/nginx.pid" nginx error log file: "/var/log/nginx/error.log" nginx http access log file: "/var/log/nginx/access.log" nginx http client request body temporary files: "/var/lib/nginx/tmp/client_body" nginx http proxy temporary files: "/var/lib/nginx/tmp/proxy" nginx http fastcgi temporary files: "/var/lib/nginx/tmp/fastcgi" nginx http uwsgi temporary files: "/var/lib/nginx/tmp/uwsgi" nginx http scgi temporary files: "/var/lib/nginx/tmp/scgi" *Make:* make -f objs/Makefile make[1]: Entering directory `/root/nginx-1.0.0' gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ -o objs/src/core/nginx.o \ src/core/nginx.c cc1: warnings being treated as errors src/core/nginx.c: In function ?ngx_set_user?: src/core/nginx.c:1105: warning: unused parameter ?cmd? src/core/nginx.c: In function ?ngx_set_env?: src/core/nginx.c:1168: warning: unused parameter ?cmd? src/core/nginx.c: In function ?ngx_set_priority?: src/core/nginx.c:1198: warning: unused parameter ?cmd? src/core/nginx.c: In function ?ngx_set_cpu_affinity?: src/core/nginx.c:1238: warning: unused parameter ?cmd? 
make[1]: *** [objs/src/core/nginx.o] Error 1 make[1]: Leaving directory `/root/nginx-1.0.0' make: *** [build] Error 2 *Sincerely yours* *--* *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.landas at chinanetcloud.com Mon Feb 27 14:50:42 2012 From: adam.landas at chinanetcloud.com (Adam Landas) Date: Mon, 27 Feb 2012 22:50:42 +0800 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: Message-ID: Hello, This is because nginx is compiled with the -Werror flag, remove this from your objs/Makefile file, and you should be fine. Regards, Adam Adam LANDAS | Operations | ChinaNetCloud | www.ChinaNetCloud.com Phone: +86 (21) 6422-1946 | adam.landas at chinanetcloud.com | Skype: adamlandas X2 Space 1-601, 1238 Xietu Lu, Shanghai 200032, China OnDemand 2011 "100 Top Private Company" - We are hiring! www.chinanetcloud.com/jobs On Mon, Feb 27, 2012 at 10:48 PM, Iqbal Aroussi wrote: > Hi, > > Can you please help me overcome this problem. I'm trying to compile Nginx > + nginx_udplog_module from source but I get this error: *cc1: warnings > being treated as errors* > I tried with *nginx-1.0.0* the version in our production servers and * > nginx-1.0.12* the latest stable version. > > Best Regards > > *Compilation env:* > CentOS release 5.7 (Final) > *uname -a* > Linux 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 > x86_64 x86_64 GNU/Linux > > *gcc -v* > Using built-in specs. > Target: x86_64-redhat-linux > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > --infodir=/usr/share/info --enable-shared --enable-threads=posix > --enable-checking=release --with-system-zlib --enable-__cxa_atexit > --disable-libunwind-exceptions --enable-libgcj-multifile > --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > --enable-java-awt=gtk --disable-dssi --disable-plugin > --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic > --host=x86_64-redhat-linux > Thread model: posix > gcc version 4.1.2 20080704 (Red Hat 4.1.2-51) > > *Configure options:* > ./configure --user=nginx --group=nginx --prefix=/usr/share/nginx > --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid > --lock-path=/var/lock/subsys/nginx --with-http_ssl_module > --with-http_realip_module --with-http_addition_module > --with-http_xslt_module --with-http_image_filter_module > --with-http_geoip_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_stub_status_module > --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module > --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 > > *Output:* > checking for OS > + Linux 2.6.18-274.18.1.el5 x86_64 > 
checking for C compiler ... found > + using GNU C compiler > + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) > checking for gcc -pipe switch ... found > checking for gcc builtin atomic operations ... found > checking for C99 variadic macros ... found > checking for gcc variadic macros ... found > checking for unistd.h ... found > checking for inttypes.h ... found > checking for limits.h ... found > checking for sys/filio.h ... not found > checking for sys/param.h ... found > checking for sys/mount.h ... found > checking for sys/statvfs.h ... found > checking for crypt.h ... found > checking for Linux specific features > checking for epoll ... found > checking for sendfile() ... found > checking for sendfile64() ... found > checking for sys/prctl.h ... found > checking for prctl(PR_SET_DUMPABLE) ... found > checking for sched_setaffinity() ... found > checking for crypt_r() ... found > checking for sys/vfs.h ... found > checking for poll() ... found > checking for /dev/poll ... not found > checking for kqueue ... not found > checking for crypt() ... not found > checking for crypt() in libcrypt ... found > checking for F_READAHEAD ... not found > checking for posix_fadvise() ... found > checking for O_DIRECT ... found > checking for F_NOCACHE ... not found > checking for directio() ... not found > checking for statfs() ... found > checking for statvfs() ... found > checking for dlopen() ... not found > checking for dlopen() in libdl ... found > checking for sched_yield() ... found > checking for SO_SETFIB ... not found > checking for accept4() ... not found > checking for kqueue AIO support ... not found > checking for Linux AIO support ... found > configuring additional modules > adding module in /root/nginx_udplog_module-1.0.0 > + ngx_http_udplog_module was configured > checking for PCRE library ... found > checking for OpenSSL library ... found > checking for zlib library ... found > checking for libxslt ... found > checking for libexslt ... found > checking for GD library ... found > checking for perl > + perl version: v5.8.8 built for x86_64-linux-thread-multi > + perl interpreter multiplicity found > checking for GeoIP library ... found > creating objs/Makefile > checking for int size ... 4 bytes > checking for long size ... 8 bytes > checking for long long size ... 8 bytes > checking for void * size ... 8 bytes > checking for uint64_t ... found > checking for sig_atomic_t ... found > checking for sig_atomic_t size ... 4 bytes > checking for socklen_t ... found > checking for in_addr_t ... found > checking for in_port_t ... found > checking for rlim_t ... found > checking for uintptr_t ... uintptr_t found > checking for system endianess ... little endianess > checking for size_t size ... 8 bytes > checking for off_t size ... 8 bytes > checking for time_t size ... 8 bytes > checking for AF_INET6 ... found > checking for setproctitle() ... not found > checking for pread() ... found > checking for pwrite() ... found > checking for sys_nerr ... found > checking for localtime_r() ... found > checking for posix_memalign() ... found > checking for memalign() ... found > checking for mmap(MAP_ANON|MAP_SHARED) ... found > checking for mmap("/dev/zero", MAP_SHARED) ... found > checking for System V shared memory ... found > checking for struct msghdr.msg_control ... found > checking for ioctl(FIONBIO) ... found > checking for struct tm.tm_gmtoff ... found > checking for struct dirent.d_namlen ... not found > checking for struct dirent.d_type ... 
found > > Configuration summary > + using system PCRE library > + using system OpenSSL library > + md5: using OpenSSL library > + sha1 library is not used > + using system zlib library > > nginx path prefix: "/usr/share/nginx" > nginx binary file: "/usr/sbin/nginx" > nginx configuration prefix: "/etc/nginx" > nginx configuration file: "/etc/nginx/nginx.conf" > nginx pid file: "/var/run/nginx.pid" > nginx error log file: "/var/log/nginx/error.log" > nginx http access log file: "/var/log/nginx/access.log" > nginx http client request body temporary files: > "/var/lib/nginx/tmp/client_body" > nginx http proxy temporary files: "/var/lib/nginx/tmp/proxy" > nginx http fastcgi temporary files: "/var/lib/nginx/tmp/fastcgi" > nginx http uwsgi temporary files: "/var/lib/nginx/tmp/uwsgi" > nginx http scgi temporary files: "/var/lib/nginx/tmp/scgi" > > *Make:* > make -f objs/Makefile > make[1]: Entering directory `/root/nginx-1.0.0' > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ > -o objs/src/core/nginx.o \ > src/core/nginx.c > cc1: warnings being treated as errors > src/core/nginx.c: In function ?ngx_set_user?: > src/core/nginx.c:1105: warning: unused parameter ?cmd? > src/core/nginx.c: In function ?ngx_set_env?: > src/core/nginx.c:1168: warning: unused parameter ?cmd? > src/core/nginx.c: In function ?ngx_set_priority?: > src/core/nginx.c:1198: warning: unused parameter ?cmd? > src/core/nginx.c: In function ?ngx_set_cpu_affinity?: > src/core/nginx.c:1238: warning: unused parameter ?cmd? > make[1]: *** [objs/src/core/nginx.o] Error 1 > make[1]: Leaving directory `/root/nginx-1.0.0' > make: *** [build] Error 2 > > > *Sincerely yours* > *--* > *Iqbal Aroussi* > *+212 665 025 032* > *iqbal at aroussi.name* > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 27 14:53:02 2012 From: nginx-forum at nginx.us (trojan2748) Date: Mon, 27 Feb 2012 09:53:02 -0500 (EST) Subject: Compilation error on CentOS-5.7 In-Reply-To: References: Message-ID: Hello, This is because nginx is compiled with the -Werror flag, remove this from your objs/Makefile file, and you should be fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223048,223050#msg-223050 From iqbal at aroussi.name Mon Feb 27 15:00:03 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 15:00:03 +0000 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: Message-ID: Hi Adam, Awesome, Thank you so much. worked great. Best Regards * -- * *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* On Mon, Feb 27, 2012 at 14:50, Adam Landas wrote: > Hello, > This is because nginx is compiled with the -Werror flag, remove this > from your objs/Makefile file, and you should be fine. > > > Regards, > > Adam > Adam LANDAS | Operations | ChinaNetCloud | www.ChinaNetCloud.com > Phone: +86 (21) 6422-1946 | adam.landas at chinanetcloud.com | Skype: > adamlandas > X2 Space 1-601, 1238 Xietu Lu, Shanghai 200032, China > OnDemand 2011 "100 Top Private Company" - We are hiring! 
> www.chinanetcloud.com/jobs > > > > On Mon, Feb 27, 2012 at 10:48 PM, Iqbal Aroussi wrote: > >> Hi, >> >> Can you please help me overcome this problem. I'm trying to compile Nginx >> + nginx_udplog_module from source but I get this error: *cc1: warnings >> being treated as errors* >> I tried with *nginx-1.0.0* the version in our production servers and * >> nginx-1.0.12* the latest stable version. >> >> Best Regards >> >> *Compilation env:* >> CentOS release 5.7 (Final) >> *uname -a* >> Linux 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 >> x86_64 x86_64 GNU/Linux >> >> *gcc -v* >> Using built-in specs. >> Target: x86_64-redhat-linux >> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man >> --infodir=/usr/share/info --enable-shared --enable-threads=posix >> --enable-checking=release --with-system-zlib --enable-__cxa_atexit >> --disable-libunwind-exceptions --enable-libgcj-multifile >> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada >> --enable-java-awt=gtk --disable-dssi --disable-plugin >> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic >> --host=x86_64-redhat-linux >> Thread model: posix >> gcc version 4.1.2 20080704 (Red Hat 4.1.2-51) >> >> *Configure options:* >> ./configure --user=nginx --group=nginx --prefix=/usr/share/nginx >> --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf >> --error-log-path=/var/log/nginx/error.log >> --http-log-path=/var/log/nginx/access.log >> --http-client-body-temp-path=/var/lib/nginx/tmp/client_body >> --http-proxy-temp-path=/var/lib/nginx/tmp/proxy >> --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi >> --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi >> --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid >> --lock-path=/var/lock/subsys/nginx --with-http_ssl_module >> --with-http_realip_module --with-http_addition_module >> --with-http_xslt_module --with-http_image_filter_module >> --with-http_geoip_module --with-http_sub_module --with-http_dav_module >> --with-http_flv_module --with-http_gzip_static_module >> --with-http_random_index_module --with-http_secure_link_module >> --with-http_degradation_module --with-http_stub_status_module >> --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module >> --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 >> -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 >> -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 >> -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 >> -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 >> >> *Output:* >> checking for OS >> + Linux 2.6.18-274.18.1.el5 x86_64 >> checking for C compiler ... found >> + using GNU C compiler >> + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) >> checking for gcc -pipe switch ... found >> checking for gcc builtin atomic operations ... found >> checking for C99 variadic macros ... found >> checking for gcc variadic macros ... found >> checking for unistd.h ... found >> checking for inttypes.h ... found >> checking for limits.h ... found >> checking for sys/filio.h ... not found >> checking for sys/param.h ... found >> checking for sys/mount.h ... found >> checking for sys/statvfs.h ... found >> checking for crypt.h ... found >> checking for Linux specific features >> checking for epoll ... found >> checking for sendfile() ... found >> checking for sendfile64() ... found >> checking for sys/prctl.h ... found >> checking for prctl(PR_SET_DUMPABLE) ... 
found >> checking for sched_setaffinity() ... found >> checking for crypt_r() ... found >> checking for sys/vfs.h ... found >> checking for poll() ... found >> checking for /dev/poll ... not found >> checking for kqueue ... not found >> checking for crypt() ... not found >> checking for crypt() in libcrypt ... found >> checking for F_READAHEAD ... not found >> checking for posix_fadvise() ... found >> checking for O_DIRECT ... found >> checking for F_NOCACHE ... not found >> checking for directio() ... not found >> checking for statfs() ... found >> checking for statvfs() ... found >> checking for dlopen() ... not found >> checking for dlopen() in libdl ... found >> checking for sched_yield() ... found >> checking for SO_SETFIB ... not found >> checking for accept4() ... not found >> checking for kqueue AIO support ... not found >> checking for Linux AIO support ... found >> configuring additional modules >> adding module in /root/nginx_udplog_module-1.0.0 >> + ngx_http_udplog_module was configured >> checking for PCRE library ... found >> checking for OpenSSL library ... found >> checking for zlib library ... found >> checking for libxslt ... found >> checking for libexslt ... found >> checking for GD library ... found >> checking for perl >> + perl version: v5.8.8 built for x86_64-linux-thread-multi >> + perl interpreter multiplicity found >> checking for GeoIP library ... found >> creating objs/Makefile >> checking for int size ... 4 bytes >> checking for long size ... 8 bytes >> checking for long long size ... 8 bytes >> checking for void * size ... 8 bytes >> checking for uint64_t ... found >> checking for sig_atomic_t ... found >> checking for sig_atomic_t size ... 4 bytes >> checking for socklen_t ... found >> checking for in_addr_t ... found >> checking for in_port_t ... found >> checking for rlim_t ... found >> checking for uintptr_t ... uintptr_t found >> checking for system endianess ... little endianess >> checking for size_t size ... 8 bytes >> checking for off_t size ... 8 bytes >> checking for time_t size ... 8 bytes >> checking for AF_INET6 ... found >> checking for setproctitle() ... not found >> checking for pread() ... found >> checking for pwrite() ... found >> checking for sys_nerr ... found >> checking for localtime_r() ... found >> checking for posix_memalign() ... found >> checking for memalign() ... found >> checking for mmap(MAP_ANON|MAP_SHARED) ... found >> checking for mmap("/dev/zero", MAP_SHARED) ... found >> checking for System V shared memory ... found >> checking for struct msghdr.msg_control ... found >> checking for ioctl(FIONBIO) ... found >> checking for struct tm.tm_gmtoff ... found >> checking for struct dirent.d_namlen ... not found >> checking for struct dirent.d_type ... 
found >> >> Configuration summary >> + using system PCRE library >> + using system OpenSSL library >> + md5: using OpenSSL library >> + sha1 library is not used >> + using system zlib library >> >> nginx path prefix: "/usr/share/nginx" >> nginx binary file: "/usr/sbin/nginx" >> nginx configuration prefix: "/etc/nginx" >> nginx configuration file: "/etc/nginx/nginx.conf" >> nginx pid file: "/var/run/nginx.pid" >> nginx error log file: "/var/log/nginx/error.log" >> nginx http access log file: "/var/log/nginx/access.log" >> nginx http client request body temporary files: >> "/var/lib/nginx/tmp/client_body" >> nginx http proxy temporary files: "/var/lib/nginx/tmp/proxy" >> nginx http fastcgi temporary files: "/var/lib/nginx/tmp/fastcgi" >> nginx http uwsgi temporary files: "/var/lib/nginx/tmp/uwsgi" >> nginx http scgi temporary files: "/var/lib/nginx/tmp/scgi" >> >> *Make:* >> make -f objs/Makefile >> make[1]: Entering directory `/root/nginx-1.0.0' >> gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter >> -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe >> -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector >> --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I >> src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ >> -o objs/src/core/nginx.o \ >> src/core/nginx.c >> cc1: warnings being treated as errors >> src/core/nginx.c: In function ?ngx_set_user?: >> src/core/nginx.c:1105: warning: unused parameter ?cmd? >> src/core/nginx.c: In function ?ngx_set_env?: >> src/core/nginx.c:1168: warning: unused parameter ?cmd? >> src/core/nginx.c: In function ?ngx_set_priority?: >> src/core/nginx.c:1198: warning: unused parameter ?cmd? >> src/core/nginx.c: In function ?ngx_set_cpu_affinity?: >> src/core/nginx.c:1238: warning: unused parameter ?cmd? >> make[1]: *** [objs/src/core/nginx.o] Error 1 >> make[1]: Leaving directory `/root/nginx-1.0.0' >> make: *** [build] Error 2 >> >> >> *Sincerely yours* >> *--* >> *Iqbal Aroussi* >> *+212 665 025 032* >> *iqbal at aroussi.name* >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 27 15:03:12 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Feb 2012 19:03:12 +0400 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: References: Message-ID: <20120227150312.GJ67687@mdounin.ru> Hello! On Mon, Feb 27, 2012 at 03:19:38PM +0100, Gr?gory Pakosz wrote: > > > > The error_page directives are inherited if and only if there is absolutely > > NO error_page directive on the current level. Moreover, whenever you use > > the error_page directive you are doing two things: 1) explicitly setting > > error pages for the specified error codes 2) implicitly resetting error > > pages for all the other error codes that are not explicitly set on the > > current level to their default values So your "error_page 418 > > http://nginx.org;" directive not only set the error page for error code > > 418, but also reset the error pages for error code 404 and all the other > > error codes to their default values. Max > > > > Hi Max, > > Thank you for your answer. 
May I suggest that explanation enters the wiki > in the error_page directive section? http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page : These directives are inherited from the previous level if and : only if there are no error_page directives on the current level. Maxim Dounin From mdounin at mdounin.ru Mon Feb 27 15:11:47 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Feb 2012 19:11:47 +0400 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: Message-ID: <20120227151147.GK67687@mdounin.ru> Hello! On Mon, Feb 27, 2012 at 02:48:06PM +0000, Iqbal Aroussi wrote: > Hi, > > Can you please help me overcome this problem. I'm trying to compile > Nginx + nginx_udplog_module > from source but I get this error: *cc1: warnings being treated as errors* > I tried with *nginx-1.0.0* the version in our production servers and * > nginx-1.0.12* the latest stable version. [...] > --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 You shoot yourself in the foot by using --with-cc-opt="... -Wall ...". This results in: > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ > -o objs/src/core/nginx.o \ > src/core/nginx.c I.e. warning options set by nginx (notably "-Wall ... -Wno-unused-parameter") are overriden by your "-Wall" which comes later, and this results in compilation failure due to "-Werror" also set by nginx. Not passing "-Wall" by hand will fix this while still ensure that warnings unexpected in the nginx source code will be fatal. Maxim Dounin From edho at myconan.net Mon Feb 27 15:16:25 2012 From: edho at myconan.net (Edho Arief) Date: Mon, 27 Feb 2012 22:16:25 +0700 Subject: Compilation error on CentOS-5.7 In-Reply-To: <20120227151147.GK67687@mdounin.ru> References: <20120227151147.GK67687@mdounin.ru> Message-ID: 2012/2/27 Maxim Dounin : > >> --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 >> -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 >> -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 >> -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 >> -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 > > You shoot yourself in the foot by using --with-cc-opt="... -Wall > ...". ?This results in: > For few seconds I thought this was error report from gentooland. From iqbal at aroussi.name Mon Feb 27 15:28:00 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 15:28:00 +0000 Subject: Compilation error on CentOS-5.7 In-Reply-To: <20120227151147.GK67687@mdounin.ru> References: <20120227151147.GK67687@mdounin.ru> Message-ID: Hi Maxim, Thanks for your reply. Actually as I was asked to ad support for the *nginx_udplog_module* module I copied the configuration options from *nginx -V* output. I really appreciate your advise and I'll give it a try right away. 
Best Regards nginx -V nginx: nginx version: nginx/1.0.0 nginx: built by gcc 4.1.2 20080704 (Red Hat 4.1.2-50) nginx: TLS SNI support disabled nginx: configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-file-aio --with-mail_ssl_module --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' * -- * *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* On Mon, Feb 27, 2012 at 15:11, Maxim Dounin wrote: > Hello! > > On Mon, Feb 27, 2012 at 02:48:06PM +0000, Iqbal Aroussi wrote: > > > Hi, > > > > Can you please help me overcome this problem. I'm trying to compile > > Nginx + nginx_udplog_module > > from source but I get this error: *cc1: warnings being treated as errors* > > I tried with *nginx-1.0.0* the version in our production servers and * > > nginx-1.0.12* the latest stable version. > > [...] > > > --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > > -mtune=generic' --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 > > -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 > > -mtune=generic' --add-module=/root/nginx_udplog_module-1.0.0 > > You shoot yourself in the foot by using --with-cc-opt="... -Wall > ...". This results in: > > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g > -pipe > > -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I > > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs \ > > -o objs/src/core/nginx.o \ > > src/core/nginx.c > > I.e. warning options set by nginx (notably "-Wall ... > -Wno-unused-parameter") are overriden by your "-Wall" which comes > later, and this results in compilation failure due to "-Werror" > also set by nginx. > > Not passing "-Wall" by hand will fix this while still ensure that > warnings unexpected in the nginx source code will be fatal. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iqbal at aroussi.name Mon Feb 27 15:41:09 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 15:41:09 +0000 Subject: Compilation error on CentOS-5.7 In-Reply-To: <20120227151147.GK67687@mdounin.ru> References: <20120227151147.GK67687@mdounin.ru> Message-ID: Hi Maxim, Thanks a lot for your expertise. I removed the *-Wall* option and everything worked perfectly, without the need to remove *-Werror* Best Regards. Really appreciate your support. *--* *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* On Mon, Feb 27, 2012 at 15:11, Maxim Dounin wrote: > I.e. warning options set by nginx (notably "-Wall ... > -Wno-unused-parameter") are overriden by your "-Wall" which comes > later, and this results in compilation failure due to "-Werror" > also set by nginx. > > Not passing "-Wall" by hand will fix this while still ensure that > warnings unexpected in the nginx source code will be fatal. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iqbal at aroussi.name Mon Feb 27 15:51:09 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 15:51:09 +0000 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: <20120227151147.GK67687@mdounin.ru> Message-ID: Hi, I was able to compile nginx alone without problems. however it doesn't compile when I add *nginx_udplog_module-1.0.0* support "* --add-module=/root/nginx_udplog_module-1.0.0*" any hints ? Best Regards gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/http/modules/perl -I src/mail \ -o objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o \ /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function ?ngx_udplog_init_endpoint?: /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:284: error: incompatible types in assignment /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function ?ngx_http_udplogger_send?: /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: invalid type argument of ?->? /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: incompatible type for argument 2 of ?ngx_log_error_core? make[1]: *** [objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o] Error 1 make[1]: Leaving directory `/root/nginx-1.0.12' make: *** [build] Error 2 * -- * *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* On Mon, Feb 27, 2012 at 15:41, Iqbal Aroussi wrote: > Hi Maxim, > > Thanks a lot for your expertise. I removed the *-Wall* option and > everything worked perfectly, without the need to remove *-Werror* > > Best Regards. > Really appreciate your support. > > *--* > *Iqbal Aroussi* > *+212 665 025 032* > *iqbal at aroussi.name* > > > > > > On Mon, Feb 27, 2012 at 15:11, Maxim Dounin wrote: > >> I.e. warning options set by nginx (notably "-Wall ... >> -Wno-unused-parameter") are overriden by your "-Wall" which comes >> later, and this results in compilation failure due to "-Werror" >> also set by nginx. >> >> Not passing "-Wall" by hand will fix this while still ensure that >> warnings unexpected in the nginx source code will be fatal. 
>> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpakosz at yahoo.fr Mon Feb 27 16:01:12 2012 From: gpakosz at yahoo.fr (=?ISO-8859-1?Q?Gr=E9gory_Pakosz?=) Date: Mon, 27 Feb 2012 17:01:12 +0100 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: <20120227150312.GJ67687@mdounin.ru> References: <20120227150312.GJ67687@mdounin.ru> Message-ID: > > > > > > Thank you for your answer. May I suggest that explanation enters the wiki > > in the error_page directive section? > > http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page > > : These directives are inherited from the previous level if and > : only if there are no error_page directives on the current level. > > Fair enough! I'll refer to the doc more often then :) Thank you for the help Gregory -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhuzhaoyuan at gmail.com Mon Feb 27 16:10:55 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Tue, 28 Feb 2012 00:10:55 +0800 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: <20120227151147.GK67687@mdounin.ru> Message-ID: Hi, On Mon, Feb 27, 2012 at 11:51 PM, Iqbal Aroussi wrote: > Hi, > > I was able to compile nginx alone without problems. however it doesn't > compile when I add *nginx_udplog_module-1.0.0* support "* > --add-module=/root/nginx_udplog_module-1.0.0*" > any hints ? > > Best Regards > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I > src/http -I src/http/modules -I src/http/modules/perl -I src/mail \ > -o objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o \ > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > ?ngx_udplog_init_endpoint?: > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:284: error: > incompatible types in assignment > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > ?ngx_http_udplogger_send?: > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > invalid type argument of ?->? > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > incompatible type for argument 2 of ?ngx_log_error_core? > make[1]: *** > [objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o] Error 1 > make[1]: Leaving directory `/root/nginx-1.0.12' > make: *** [build] Error 2 > If you need the syslog feature, maybe you can give Tengine ( http://tengine.taobao.org) a try? It is a nginx distribution with native syslog support: http://tengine.taobao.org/document/http_log.html Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 27 16:15:37 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Feb 2012 20:15:37 +0400 Subject: Compilation error on CentOS-5.7 In-Reply-To: References: <20120227151147.GK67687@mdounin.ru> Message-ID: <20120227161537.GM67687@mdounin.ru> Hello! On Mon, Feb 27, 2012 at 03:51:09PM +0000, Iqbal Aroussi wrote: > Hi, > > I was able to compile nginx alone without problems. 
however it doesn't > compile when I add *nginx_udplog_module-1.0.0* support "* > --add-module=/root/nginx_udplog_module-1.0.0*" > any hints ? > > Best Regards > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g -pipe > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event -I > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I > src/http -I src/http/modules -I src/http/modules/perl -I src/mail \ > -o objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o \ > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > ?ngx_udplog_init_endpoint?: > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:284: error: > incompatible types in assignment > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > ?ngx_http_udplogger_send?: > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > invalid type argument of ?->? > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > incompatible type for argument 2 of ?ngx_log_error_core? > make[1]: *** > [objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o] Error 1 > make[1]: Leaving directory `/root/nginx-1.0.12' > make: *** [build] Error 2 It looks like the udplog module version you are using is way too old and doesn't contain this 2 years old fix for and API change in nginx 0.8.32: https://github.com/vkholodkov/nginx-udplog-module/commit/e7b8145b96ae9f5be94f0ab0008a8c59cd13c7b8 You may want to get more recent one from Valery's github. Maxim Dounin From iqbal at aroussi.name Mon Feb 27 16:25:25 2012 From: iqbal at aroussi.name (Iqbal Aroussi) Date: Mon, 27 Feb 2012 16:25:25 +0000 Subject: Compilation error on CentOS-5.7 In-Reply-To: <20120227161537.GM67687@mdounin.ru> References: <20120227151147.GK67687@mdounin.ru> <20120227161537.GM67687@mdounin.ru> Message-ID: Hi Maxim, it worked great, you saved my day you're a great guy. Bestest Regards * -- * *Iqbal Aroussi* *+212 665 025 032* *iqbal at aroussi.name* On Mon, Feb 27, 2012 at 16:15, Maxim Dounin wrote: > Hello! > > On Mon, Feb 27, 2012 at 03:51:09PM +0000, Iqbal Aroussi wrote: > > > Hi, > > > > I was able to compile nginx alone without problems. however it doesn't > > compile when I add *nginx_udplog_module-1.0.0* support "* > > --add-module=/root/nginx_udplog_module-1.0.0*" > > any hints ? > > > > Best Regards > > > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter > > -Wunused-function -Wunused-variable -Wunused-value -Werror -g -O2 -g > -pipe > > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > > --param=ssp-buffer-size=4 -m64 -mtune=generic -I src/core -I src/event > -I > > src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I > > src/http -I src/http/modules -I src/http/modules/perl -I src/mail \ > > -o objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o \ > > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c > > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > > ?ngx_udplog_init_endpoint?: > > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:284: error: > > incompatible types in assignment > > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c: In function > > ?ngx_http_udplogger_send?: > > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > > invalid type argument of ?->? 
> > /root/nginx_udplog_module-1.0.0/ngx_http_udplog_module.c:338: error: > > incompatible type for argument 2 of ?ngx_log_error_core? > > make[1]: *** > > [objs/addon/nginx_udplog_module-1.0.0/ngx_http_udplog_module.o] Error 1 > > make[1]: Leaving directory `/root/nginx-1.0.12' > > make: *** [build] Error 2 > > It looks like the udplog module version you are using is way too > old and doesn't contain this 2 years old fix for and API change > in nginx 0.8.32: > > > https://github.com/vkholodkov/nginx-udplog-module/commit/e7b8145b96ae9f5be94f0ab0008a8c59cd13c7b8 > > You may want to get more recent one from Valery's github. > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thuermer at veeseo.com Mon Feb 27 17:45:16 2012 From: thuermer at veeseo.com (Thorben Thuermer) Date: Mon, 27 Feb 2012 18:45:16 +0100 Subject: undoing rewrite (for proxy_pass) Message-ID: <20120227184516.1d095b35d6fb1f6ec02baf61@veeseo.com> Hello, i recently tried to set up nginx as reverse-proxy in front of apache to offload serving of cached static versions of pages, where i needed to apply a rewrite to construct the local path of the cache files: * receive incoming requests * apply a rewrite-rule, use try_files to serve files if they exist * otherwise proxy_pass the original request to apache the problem i ran into was, that i needed to pass the original unprocessed request to apache, as it may contain script parameters that get broken by processing. (the best reference to this issue that i found is this: http://mailman.nginx.org/pipermail/nginx/2010-April/019905.html ) as i do not even need to forward a rewritten version, this solution should apply: "If it is necessary to transmit URI in the unprocessed form then directive proxy_pass should be used without URI part:" -- http://wiki.nginx.org/NginxHttpProxyModule#proxy_pass but this does not work anymore after a rewrite was applied, as the rewritten version will be transmitted. the natural method to undo a rewrite would appear to be: rewrite .* $request_uri break; but this replaces the request with a processed version! i found the code for constructing the request in: src/http/modules/ngx_http_proxy_module.c:ngx_http_proxy_create_request the if-block starting at line ~910 appears to pick the request-uri to be used in the request. there the problem appears to be that r->valid_unparsed_uri is no longer set after rewrites were applied, so it will never choose to use the unparsed uri! i simply ignored r->valid_unparsed_uri, and replaced the whole block with: unparsed_uri = 1; uri_len = r->unparsed_uri.len; and now my reverse-proxying works as required. (ofcourse i broke the other cases by hardcoding this, but that's not an issue for me.) i am wondering if there might be interest in including a proper solution for this situation in nginx. (or maybe there is one already and i missed it?) Greetings, Thorben Thuermer VSEO GmbH PS1: when working on this, i spent most time on finding out how to get nginx running in gdb - is there some introduction to development documentation where such information might be found/useful? 
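(a minimal setup that should be enough for that - assuming a from-source
build, configured --with-debug if debug logging is wanted - is to keep
nginx in the foreground as a single process, so the debugger stays attached
to the process that actually serves requests:
    daemon off;              # do not fork into the background
    master_process off;      # no separate worker processes
    error_log logs/error.log debug;  # "debug" level is only available with --with-debug
with that in a test nginx.conf, the binary can be started directly under the
debugger, e.g. "gdb --args objs/nginx -p . -c conf/nginx.conf" - the
directive names are real, the paths here are only an example.)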
PS2: before anybody asks, the configuration: server { listen 1.2.3.4:80; server_name ...; root /var/www/...; location / { rewrite "^(lots of stuff here)$" /cache/(lots of submatches) break; try_files $uri $uri/ @apache; } location @apache { rewrite ^(.*)$ $request_uri break; # useless proxy_pass http://localhost; } } From cliff at develix.com Mon Feb 27 17:52:10 2012 From: cliff at develix.com (Cliff Wells) Date: Mon, 27 Feb 2012 09:52:10 -0800 Subject: Reverse proxy, subdomain In-Reply-To: References: Message-ID: <1330365130.16449.48.camel@portable-evil> On Mon, 2012-02-27 at 12:19 +0100, vicosoft at gmail.com wrote: > ok, I've solved using the IF statement. if ($http_host = m.mua.es) { No, this is a bad way to solve it. Please see http://wiki.nginx.org/IfIsEvil Instead, set up a separate server block for that subdomain: server { server_name m.mua.es; ... } If you want to avoid repeating large blocks of configuration across multiple server blocks, use the include directive. Regards, Cliff From nginx-forum at nginx.us Mon Feb 27 19:29:20 2012 From: nginx-forum at nginx.us (edo888) Date: Mon, 27 Feb 2012 14:29:20 -0500 (EST) Subject: How to make a post subrequest with response body from parent?. In-Reply-To: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> References: <96bbcc7925332036102ec938be92b73c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0534fc2ad9942094fe99c85473aa23fe.NginxMailingListEnglish@forum.nginx.org> Hi, Just in case someone will find this in google. I have found a solution. You will need to populate sr->request_body->bufs and then make sure to set sr->headers_in.content_length_n... You can check the http echo module subrequest.c file and see how it sets the content length. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222983,223071#msg-223071 From nginx-forum at nginx.us Mon Feb 27 19:39:06 2012 From: nginx-forum at nginx.us (nginx_dig) Date: Mon, 27 Feb 2012 14:39:06 -0500 (EST) Subject: Problem with fpm php (wrong script location ?!) Message-ID: <2194454aa24054fa41134331feaf7838.NginxMailingListEnglish@forum.nginx.org> Hi, first of all: i am fairly new to nginx (used apache for several years now).... Works great, except fpm php... In the last 24h i have read many tutorials, forum posts and howtos... First there was no error message at all..just a blank page (many others were getting that too). Now after some changes i get an error message (horray! :D ), but i have no clue how to fix this. 2012/02/27 20:23:10 [error] 26226#0: *1 open() "/usr/local/etc/nginx/html/info.php" failed (2: No such file or directory), client That is not the normal document root. All normal (html ,text e.g.) files are beeing delivered just fine out of my document root. But why is he looking at another directory to geht this php script ?! I tried many variants of " fastcgi_param SCRIPT_FILENAME /websites/some.domain/http$fastcgi_script_name;" even with document_root$fastcgi_script_name ... Nothing worked. Any ideas ho to change the looking directory to the document root ? The php.info is exisitng in there. 
My relevant config data: in my vhost config file: ######## server { listen 256.256.256.256:80; #just an example ip server_name .some.domain; location / { root /websites/some.domain/http; index index.php index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } location ~ /\.ht { deny all; } location ~ \.php$ { root html; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /websites/some.domain/http$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; if (-f $request_filename) { # Only throw it at PHP-FPM if the file exists (prevents some PHP exploits) fastcgi_pass backend_php; # The upstream determined above } } } ###### in nginx.conf: user www www; worker_processes 1; # main server error log error_log /var/log/nginx/error.log ; pid /var/run/nginx.pid; events { worker_connections 1024; } # main server config http { server_names_hash_bucket_size 64; include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $request ' '"$status" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; gzip on; server { listen 256.256.256.256:80 default; server_name _; access_log /var/log/nginx/access.log main; # server_name_in_redirect off; location / { index index.html; root /usr/local/www/nginx; } } # virtual hosting include /usr/local/etc/nginx/vhosts/*; upstream backend_php { server 127.0.0.1:9000; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223072,223072#msg-223072 From nginx-forum at nginx.us Mon Feb 27 20:03:53 2012 From: nginx-forum at nginx.us (artemg) Date: Mon, 27 Feb 2012 15:03:53 -0500 (EST) Subject: What is the reason ngx_http_fastcgi_module doesn't support subrequest_in_memory flag? Message-ID: <09972af1baaaa799cda4cd876fced8d0.NginxMailingListEnglish@forum.nginx.org> What is the reason ngx_http_fastcgi_module doesn't support subrequest_in_memory flag? What is the best way to do the same this for fastcgi? Set body filter and then empty buffers in incoming chains? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223073,223073#msg-223073 From nginx-forum at nginx.us Mon Feb 27 20:05:15 2012 From: nginx-forum at nginx.us (nginx_dig) Date: Mon, 27 Feb 2012 15:05:15 -0500 (EST) Subject: Problem with fpm php (wrong script location ?!) In-Reply-To: <2194454aa24054fa41134331feaf7838.NginxMailingListEnglish@forum.nginx.org> References: <2194454aa24054fa41134331feaf7838.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9186ff219c11e82584267f5ffe51b846.NginxMailingListEnglish@forum.nginx.org> Just another addition: I tried to set the root parameter again to an absolut value... No i am getting an empty page again when calling the php script. (access log shows a normal 200) :/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223072,223074#msg-223074 From nginx-forum at nginx.us Mon Feb 27 20:26:04 2012 From: nginx-forum at nginx.us (locojohn) Date: Mon, 27 Feb 2012 15:26:04 -0500 (EST) Subject: Problem with fpm php (wrong script location ?!) 
In-Reply-To: <2194454aa24054fa41134331feaf7838.NginxMailingListEnglish@forum.nginx.org> References: <2194454aa24054fa41134331feaf7838.NginxMailingListEnglish@forum.nginx.org> Message-ID: > location / { > root /websites/some.domain/http; > index index.php index.html index.htm; > } Try to move this main root out of location / {} block to the server {} block: server { server_name ...; root ...; } Also, root should ALWAYS point to an absolute path. http://nginx.org/en/docs/http/ngx_http_core_module.html#root Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223072,223077#msg-223077 From mdounin at mdounin.ru Mon Feb 27 21:13:27 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2012 01:13:27 +0400 Subject: undoing rewrite (for proxy_pass) In-Reply-To: <20120227184516.1d095b35d6fb1f6ec02baf61@veeseo.com> References: <20120227184516.1d095b35d6fb1f6ec02baf61@veeseo.com> Message-ID: <20120227211327.GP67687@mdounin.ru> Hello! On Mon, Feb 27, 2012 at 06:45:16PM +0100, Thorben Thuermer wrote: > Hello, > > i recently tried to set up nginx as reverse-proxy in front of apache > to offload serving of cached static versions of pages, > where i needed to apply a rewrite to construct the local path of the > cache files: > * receive incoming requests > * apply a rewrite-rule, use try_files to serve files if they exist > * otherwise proxy_pass the original request to apache > > the problem i ran into was, that i needed to pass the original unprocessed > request to apache, as it may contain script parameters that get broken by > processing. > (the best reference to this issue that i found is this: > http://mailman.nginx.org/pipermail/nginx/2010-April/019905.html ) > > as i do not even need to forward a rewritten version, > this solution should apply: > "If it is necessary to transmit URI in the unprocessed form then directive > proxy_pass should be used without URI part:" > -- http://wiki.nginx.org/NginxHttpProxyModule#proxy_pass > > but this does not work anymore after a rewrite was applied, as the > rewritten version will be transmitted. > the natural method to undo a rewrite would appear to be: > rewrite .* $request_uri break; > but this replaces the request with a processed version! > > i found the code for constructing the request in: > src/http/modules/ngx_http_proxy_module.c:ngx_http_proxy_create_request > the if-block starting at line ~910 appears to pick the request-uri > to be used in the request. > there the problem appears to be that r->valid_unparsed_uri is no longer > set after rewrites were applied, so it will never choose to use the > unparsed uri! > > i simply ignored r->valid_unparsed_uri, and replaced the whole block with: > unparsed_uri = 1; > uri_len = r->unparsed_uri.len; > and now my reverse-proxying works as required. > (ofcourse i broke the other cases by hardcoding this, but that's not an > issue for me.) > > > i am wondering if there might be interest in including a proper solution > for this situation in nginx. > (or maybe there is one already and i missed it?) Something like this should do the trick: proxy_pass http://backend$request_uri; Though it has an additional side-effect of using resolver for a "backend" at run-time if it's not an ip address nor a name of an upstream{} block. Maxim Dounin From luci at Conexim.com.au Mon Feb 27 22:49:38 2012 From: luci at Conexim.com.au (Lucian D. 
Kafka) Date: Mon, 27 Feb 2012 22:49:38 +0000 Subject: How to log virtual server name In-Reply-To: <20120227062239.GA24213@nginx.com> References: <20120227062239.GA24213@nginx.com> Message-ID: Hi Igor, thank you very much for your reply. The _problem_ is that Nginx does not behave as the documentation describes. Entering multiple server names in one server_name directive and using $http_host do not work. When I enter more than 1 server name I get a warning message: Restarting nginx: nginx: [warn] conflicting server name "www.xxx.com" on x.x.x.x:80, ignored (there is only one mention of this server name anywhere - I have grepped all config files looking for a runaway). In terms of logging, $http_host (and all other variables I tried to get a host name out of) is _empty_ unless there is a server_name match on 'certain' one name in the server_name list... What could cause $http_host to loose its value even if the server_name does not populate correctly? I am using nginx version: nginx/1.0.11. I will try to see if I can pinpoint the issue further... Cheers, Luci -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Igor Sysoev Sent: Monday, 27 February 2012 5:23 PM To: nginx at nginx.org Subject: Re: How to log virtual server name On Mon, Feb 27, 2012 at 06:18:00AM +0000, Lucian D. Kafka wrote: > Hi All, > > We have a virtual hosting setup where multiple domains are delegated to the same server IP address, and Nginx acts as a caching proxy server in front of the web server. > > I am encountering 2 issues: > > 1. Cannot set more than 1 server name - Nginx is ignoring multiple server names defined on the same IP with a warning message This may help: http://nginx.org/en/docs/http/server_names.html > 2. Cannot log the virtual server name (Apache %v equivalent) to the access_log. Any variable in the custom log format - ie. $server_name, $host, etc does not log the Host headers, but rather the server name string set with the server_name directive (if matched). This makes it impossible to have a combined log file for different sites set in a virtual hosting environment on the same IP address. $http_host. -- Igor Sysoev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Mon Feb 27 23:35:54 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 27 Feb 2012 23:35:54 +0000 Subject: How to log virtual server name In-Reply-To: References: <20120227062239.GA24213@nginx.com> Message-ID: <20120227233554.GA4114@craic.sysops.org> On Mon, Feb 27, 2012 at 10:49:38PM +0000, Lucian D. Kafka wrote: Hi there, > The _problem_ is that Nginx does not behave as the documentation describes. Entering multiple server names in one server_name directive and using $http_host do not work. > The following "http" section of nginx.conf allows me to see content from two different directories, depending on the Host: header in the request; and includes the hostname used in the Host: header in the access_log file. It behaves for me as the documentation describes. What is the difference between this and the configuration file you are using? === http { log_format mine '$host $remote_addr - "$request" $status'; access_log logs/mine.log mine; server { server_name one two; listen 8000; root one; } server { server_name three; listen 8000 default_server; root three; } } === If I use $http_host instead of $host, I see whatever the client sent -- including the :port part. 
Testing using commands like curl -i http://localhost:8000/ curl -i -H 'Host: one' http://localhost:8000/ curl -i -H 'Host: two:66' http://localhost:8000/ curl -i -H 'Host: three' http://localhost:8000/ shows me the content and the log lines that I expect, as above. So: I'm unable to reproduce the problem you report, using a configuration that seems to match your text. Can you provide a (minimal?) config file that shows the problem for you? Cheers, f -- Francis Daly francis at daoine.org From luci at Conexim.com.au Tue Feb 28 02:27:58 2012 From: luci at Conexim.com.au (Lucian D. Kafka) Date: Tue, 28 Feb 2012 02:27:58 +0000 Subject: How to log virtual server name In-Reply-To: <20120227233554.GA4114@craic.sysops.org> References: <20120227062239.GA24213@nginx.com> <20120227233554.GA4114@craic.sysops.org> Message-ID: Thank you for that Francis. Using your testing battery with different curl headers I was able to narrow down the issue. Basically nginx logs correctly any virtual host name under the sun (specified or not under the server_name) - except one. This is exactly the same one it complains as having a conflict on startup. If I take out this one server name of the server_name directive (ie. this name is not mentioned in any nginx config files), the complaining about conflict stops, but logging for this one host does not work - ie $host (and all other variants) are empty. Cheers, Luci -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Francis Daly Sent: Tuesday, 28 February 2012 10:36 AM To: nginx at nginx.org Subject: Re: How to log virtual server name On Mon, Feb 27, 2012 at 10:49:38PM +0000, Lucian D. Kafka wrote: Hi there, > The _problem_ is that Nginx does not behave as the documentation describes. Entering multiple server names in one server_name directive and using $http_host do not work. > The following "http" section of nginx.conf allows me to see content from two different directories, depending on the Host: header in the request; and includes the hostname used in the Host: header in the access_log file. It behaves for me as the documentation describes. What is the difference between this and the configuration file you are using? === http { log_format mine '$host $remote_addr - "$request" $status'; access_log logs/mine.log mine; server { server_name one two; listen 8000; root one; } server { server_name three; listen 8000 default_server; root three; } } === If I use $http_host instead of $host, I see whatever the client sent -- including the :port part. Testing using commands like curl -i http://localhost:8000/ curl -i -H 'Host: one' http://localhost:8000/ curl -i -H 'Host: two:66' http://localhost:8000/ curl -i -H 'Host: three' http://localhost:8000/ shows me the content and the log lines that I expect, as above. So: I'm unable to reproduce the problem you report, using a configuration that seems to match your text. Can you provide a (minimal?) config file that shows the problem for you? Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginxyz at mail.ru Tue Feb 28 03:47:14 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Tue, 28 Feb 2012 07:47:14 +0400 Subject: Regular Expression global redirect In-Reply-To: <87wr784m9i.wlappa@perusio.net> References: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org> <87wr784m9i.wlappa@perusio.net> Message-ID: 27 ??????? 
2012, 14:13 ?? Ant?nio P. P. Almeida : > On 27 Fev 2012 07h33 CET, nginxyz at mail.ru wrote: > > 27 ??????? 2012, 04:41 ?? Ant?nio P. P. Almeida : > > > server { > > > listen 80; > > > server_name ~^www\.(?P.*)$; > > > return 301 $scheme://$domain$request_uri; > > > } > > > > > > server { > > > listen 80; > > > server_name ~^(?P[^\.]*)\.(?P[^\.]*)$; > > > location / { > > > proxy_pass http://$domain_name.$tld; > > > } > > > } > > > > > > This should work [1]. > > > > Your solution, while syntactically correct, is wrong by design. > > What you created there is an open anonymizing proxy that will pass > > any request from anyone to any host:port combination that contains > > only the domain name and the TLD, if a functional resolver has been > > set up using the resolver directive. Take a guess what this would > > do: > > This deals with illegal Host headers: > > server { > listen 80 default_server; > server_name _; > server_name_in_redirect off; > return 444; > } If by deals you mean gives a card to every player who wants one, then you are correct. :-P But it does nothing to close that open anonymizing proxy you created with the previous server block, anyone can still use your frontend server as an open anonymizing proxy to access any domain.tld:port they want, including fbi.gov:22. Besides, server_name_in_redirect is off by default. Moreover, it's completely useless in that server block because you're just dropping the connection anyway. This would have been just as useful: proxy_set_header Warning "CPU cycle wasting in progress..."; As for illegal Host headers, nginx takes care of those on its own and returns error code 400 without such blocks. The purpose of such blocks is to catch everything else that is not matched by defined server names. In your case, the other two server blocks already match any requests that have the Host header set to start with www or contain a domain.tld type of hostname, so your latest server block just catches everything else (requests with missing Host headers, IP addresses, nonwwwhostname.domain.tld hostnames etc.). To put it simply - your configuration is wrong and should not be used, unless you want to "deal with" the FBI in the near future. Max From nginxyz at mail.ru Tue Feb 28 03:53:23 2012 From: nginxyz at mail.ru (=?UTF-8?B?TWF4?=) Date: Tue, 28 Feb 2012 07:53:23 +0400 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: References: <20120227150312.GJ67687@mdounin.ru> Message-ID: 27 ??????? 2012, 20:07 ?? Gr?gory Pakosz : > > > > > > Thank you for your answer. May I suggest that explanation enters the wiki > > > in the error_page directive section? > > > > http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page > > > > : These directives are inherited from the previous level if and > > : only if there are no error_page directives on the current level. > > > > Fair enough! > > I'll refer to the doc more often then :) > > Thank you for the help You're welcome. Maxim is right, this is documented, but the implicit reset is not mentioned, so I've updated the wiki to make that clear: http://wiki.nginx.org/HttpCoreModule#error_page Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Feb 28 08:38:39 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 28 Feb 2012 03:38:39 -0500 Subject: BitTorrent tracker announce flooding NGinx logs Message-ID: Hello, I recently installed Transmission 2.03. 
Since then I get my NGinx error log flooded with messages like "[error] 21475#0: *6383 open() "/var/web/tracker.thepiratebay.org/announce" failed (2: No such file or directory), client: 127.0.0.1, server: ~mydomain\.com$, request: "GET /announce?info_hash=" It seems there is a misconfiguration somewhere. Could you help me on that point? Many thanks, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 28 09:02:45 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2012 13:02:45 +0400 Subject: error_page directive, how does context affect error handling behavior? In-Reply-To: References: <20120227150312.GJ67687@mdounin.ru> Message-ID: <20120228090245.GU67687@mdounin.ru> Hello! On Tue, Feb 28, 2012 at 07:53:23AM +0400, Max wrote: > 27 ??????? 2012, 20:07 ?? Gr?gory Pakosz : > > > > > > Thank you for your answer. May I suggest that explanation enters the wiki > > > in the error_page directive section? > > > > http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page > > > > : These directives are inherited from the previous level if and > > : only if there are no error_page directives on the current level. > > > > Fair enough! > > I'll refer to the doc more often then :) > > Thank you for the help You're welcome. Maxim is right, this is documented, but the implicit reset is not mentioned, so I've updated the wiki to make that clear: http://wiki.nginx.org/HttpCoreModule#error_page Max There is no "implicit reset", it's just a result of no inheritance. I don't think that such a detailed explanation of how inheritance work is a good idea in context of error_page directive documentation. If you think this often causes serious misunderstanding, you may want to place the detail description somewhere at http://wiki.nginx.org/Pitfalls, metioning various array-type directives which follow the same logic. Maxim Dounin p.s. You may want to avoid posting html to the list, especially keeping in mind that text parts of your html messages are hardly readable due to no line breaks. From agentzh at gmail.com Tue Feb 28 09:04:43 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 28 Feb 2012 17:04:43 +0800 Subject: Slides for my Nginx/OpenResty talk given at the Tech-Club Salon In-Reply-To: References: Message-ID: Hello, folks! I gave an Nginx/OpenResty talk at the Tech-Club Salon held in the beautiful city, Xiamen, in southern China. Here's the slides that I used for my presentation: ??? http://agentzh.org/misc/slides/ngx-openresty-ecosystem/ Just browse the slides in a JavaScript-enabled web browser and use the pageup/pagedown keys on your keyboard to switch pages. This is my first presentation that covers my recent work (mostly around the ngx_lua module) in the last 7 months. Enjoy! -agentzh From thuermer at veeseo.com Tue Feb 28 09:10:51 2012 From: thuermer at veeseo.com (Thorben Thuermer) Date: Tue, 28 Feb 2012 10:10:51 +0100 Subject: BitTorrent tracker announce flooding NGinx logs In-Reply-To: References: Message-ID: <20120228101051.831dec58.thuermer@veeseo.com> On Tue, 28 Feb 2012 03:38:39 -0500 "B.R." wrote: > I recently installed Transmission 2.03. > Since then I get my NGinx error log flooded with messages like "[error] > 21475#0: *6383 open() "/var/web/tracker.thepiratebay.org/announce" failed > (2: No such file or directory), client: 127.0.0.1, server: ~mydomain\.com$, > request: "GET /announce?info_hash=" > > It seems there is a misconfiguration somewhere. Could you help me on that > point? 
the misconfiguration is here: $ host tracker.thepiratebay.org tracker.thepiratebay.org has address 127.0.0.1 it's typical for disabled trackers to get pointed to localhost (instead of pulled from dns completely). nothing you can do about that really, at best some workaround to stop those requests from ending up in your log. (it's generally a good idea to set up a dummy default virtualhost that is not used in production just so all the spam ends up in it's log.) - T. From appa at perusio.net Tue Feb 28 12:10:19 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Tue, 28 Feb 2012 13:10:19 +0100 Subject: Regular Expression global redirect In-Reply-To: References: <6373aa5958785722e8c4ee5d421fe450.NginxMailingListEnglish@forum.nginx.org> <87wr784m9i.wlappa@perusio.net> Message-ID: <87399vma50.wl%appa@perusio.net> On 28 Fev 2012 04h47 CET, nginxyz at mail.ru wrote: > > 27 ??????? 2012, 14:13 ?? Ant?nio P. P. Almeida : >> On 27 Fev 2012 07h33 CET, nginxyz at mail.ru wrote: >>> 27 ??????? 2012, 04:41 ?? Ant?nio P. P. Almeida >> perusio.net>: >>>> server { >>>> listen 80; >>>> server_name ~^www\.(?P.*)$; >>>> return 301 $scheme://$domain$request_uri; >>>> } >>>> >>>> server { >>>> listen 80; >>>> server_name ~^(?P[^\.]*)\.(?P[^\.]*)$; >>>> location / { >>>> proxy_pass http://$domain_name.$tld; >>>> } >>>> } >>>> >>>> This should work [1]. >>> >>> Your solution, while syntactically correct, is wrong by design. >>> What you created there is an open anonymizing proxy that will pass >>> any request from anyone to any host:port combination that contains >>> only the domain name and the TLD, if a functional resolver has >>> been set up using the resolver directive. Take a guess what this >>> would do: >> >> This deals with illegal Host headers: >> >> server { >> listen 80 default_server; >> server_name _; >> server_name_in_redirect off; >> return 444; >> } > > If by deals you mean gives a card to every player who wants one, > then you are correct. :-P But it does nothing to close that open > anonymizing proxy you created with the previous server block, > anyone can still use your frontend server as an open anonymizing > proxy to access any domain.tld:port they want, including fbi.gov:22. > > Besides, server_name_in_redirect is off by default. Moreover, > it's completely useless in that server block because you're just > dropping the connection anyway. This would have been just > as useful: That was set to off by default in 0.8.48. > proxy_set_header Warning "CPU cycle wasting in progress..."; > > As for illegal Host headers, nginx takes care of those on its > own and returns error code 400 without such blocks. The > purpose of such blocks is to catch everything else that is not > matched by defined server names. In your case, the other two > server blocks already match any requests that have the Host > header set to start with www or contain a domain.tld type > of hostname, so your latest server block just catches everything > else (requests with missing Host headers, IP addresses, > nonwwwhostname.domain.tld hostnames etc.). Illegal in the sense of being relative to undefined/unauthorized hosts. That's what I meant. I use a similar vhost in all my setups. > To put it simply - your configuration is wrong and should not > be used, unless you want to "deal with" the FBI in the near > future. 1. The OP didn't request anything like you said. 2. If he requested such, that could have been dealt with using a simple map with hostnames and an if at the server level. 2. 
IIRC he hasn't said how his exact setup works. He could have in place network policies that disable the usage of the servers as open proxies. 3. You're just trolling. Like you trolled other people before me. People that have been working on Nginx for quite some time, and that have real accomplishements, besides trolling and posing as "experts". 4. I won't engage you ever again. My mistake. HAND, --- appa From quintinpar at gmail.com Tue Feb 28 12:35:51 2012 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 28 Feb 2012 18:05:51 +0530 Subject: =?UTF-8?Q?Re=3A_Sharing_rate_limiting_data_between_multiple_nginx_LB?= =?UTF-8?Q?=E2=80=99s?= In-Reply-To: References: Message-ID: Hi, Bumping up an old thread. Can someone please help me with this? -Quintin On Tue, Feb 14, 2012 at 7:38 AM, Quintin Par wrote: > Can someone help please... > -Quintin > > On Mon, Feb 13, 2012 at 1:11 PM, Quintin Par wrote: > >> I have multiple nginx machines running and proxy LB through a round robin >> DNS mechanism. >> >> I do rate limiting as follows >> >> limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; >> >> location / { >> >> limit_req zone=pw burst=5 nodelay; >> >> But this is per machine. Can this data be shared between the load >> balancers so that rate limiting is global and I can scale out. >> >> -Quintin >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Feb 28 12:58:31 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2012 16:58:31 +0400 Subject: =?UTF-8?Q?Re=3A_Sharing_rate_limiting_data_between_multiple_nginx_LB?= =?UTF-8?Q?=E2=80=99s?= In-Reply-To: References: Message-ID: <20120228125831.GX67687@mdounin.ru> Hello! On Tue, Feb 28, 2012 at 06:05:51PM +0530, Quintin Par wrote: > Hi, > > Bumping up an old thread. Can someone please help me with this? There is no good solution. Simpliest one is to just use per-frontend limits. Maxim Dounin > > -Quintin > > On Tue, Feb 14, 2012 at 7:38 AM, Quintin Par wrote: > > > Can someone help please... > > -Quintin > > > > On Mon, Feb 13, 2012 at 1:11 PM, Quintin Par wrote: > > > >> I have multiple nginx machines running and proxy LB through a round robin > >> DNS mechanism. > >> > >> I do rate limiting as follows > >> > >> limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; > >> > >> location / { > >> > >> limit_req zone=pw burst=5 nodelay; > >> > >> But this is per machine. Can this data be shared between the load > >> balancers so that rate limiting is global and I can scale out. > >> > >> -Quintin > >> > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Feb 28 13:09:05 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2012 17:09:05 +0400 Subject: limit_req does not work In-Reply-To: <2445d1adf885d741a5e6f2924f386abd.NginxMailingListEnglish@forum.nginx.org> References: <2445d1adf885d741a5e6f2924f386abd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120228130905.GY67687@mdounin.ru> Hello! On Wed, Feb 22, 2012 at 03:16:15PM -0500, double wrote: > Nginx 1.1.15 does not block request if I use "try_files". This is my > markup (simplified): > > nginx.conf: > http { > limit_req_zone $binary_remote_addr zone=zone1:32m rate=2r/s; > limit_req_zone $binary_remote_addr zone=zone2:32m rate=12r/m; > server { > location @backend { > limit_req zone=zone1 burst=10; > limit_req zone=zone2 burst=100 nodelay; > [...] 
> fastcgi_pass > unix:/var/run/fastcgi/dispatch.sock; > [...] > } > location / { > try_files $uri @backend; > expires max; > } > } > } You may want to test again, the above works without any problems here. Maxim Dounin From piotr.sikora at frickle.com Tue Feb 28 13:42:27 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 28 Feb 2012 14:42:27 +0100 Subject: [ANNOUNCE] ngx_slowfs_cache-1.7 Message-ID: <1891647F35E543EC997C88B55C97AD6E@Desktop> Version 1.7 is now available at: http://labs.frickle.com/nginx_ngx_slowfs_cache/ GitHub repository is available at: http://github.com/FRiCKLE/ngx_slowfs_cache/ Changes: 2012-02-28 VERSION 1.7 * Fix on-disk cache size calculation. Since the initial release, recorded on-disk cache size was decreased twice for purged content (once during cache purge and once during subsequent cache update). This resulted in recorded on-disk cache size being much smaller than in reality, which could lead to on-disk cache outgrowing defined "max_size" parameter. Patch from Pyry Hakulinen (via ngx_cache_purge, months ago). * Append path of the file being cached to the slowfs process title. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From zhuzhaoyuan at gmail.com Tue Feb 28 16:12:25 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Wed, 29 Feb 2012 00:12:25 +0800 Subject: [ANNOUNCE] Tengine-1.2.3 Message-ID: Hi guys, We're glad to announce that Tengine-1.2.3 has been released. You can download it here: http://tengine.taobao.org/download/tengine-1.2.3.tar.gz Starting from this release, Tengine is based on Nginx-1.0.12. Other changes include: * Feature: adds the 'request_time_cache' directive, to get more precise $request_time/$request_time_msec/$request_time_usec. * Feature: adds the 'slice' module to get a part of a static file. * Change: deletes unused browser detection. * Bugfix: fixes a bug in upstream when reading header. * Bugfix: fixes a bug in 'expires_by_types'. OT: Apache 2.4 is getting very hot these days. Some Apache guy even claimed it's faster than Nginx. So I ran a simple benchmark. It turned out Nginx was still the winner. More detailed information: http://blog.zhuzhaoyuan.com/2012/02/apache-24-faster-than-nginx/ Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Feb 28 16:26:39 2012 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 28 Feb 2012 11:26:39 -0500 Subject: BitTorrent tracker announce flooding NGinx logs In-Reply-To: <20120228101051.831dec58.thuermer@veeseo.com> References: <20120228101051.831dec58.thuermer@veeseo.com> Message-ID: OK I can also use my hosts file to redirect the old tracker to a new tracker :) Thanks for the help, I am relieved: I thought I didn't do things properly C ya --- *B. R.* On Tue, Feb 28, 2012 at 04:10, Thorben Thuermer wrote: > On Tue, 28 Feb 2012 03:38:39 -0500 "B.R." wrote: > > I recently installed Transmission 2.03. > > Since then I get my NGinx error log flooded with messages like "[error] > > 21475#0: *6383 open() "/var/web/tracker.thepiratebay.org/announce" > failed > > (2: No such file or directory), client: 127.0.0.1, server: > ~mydomain\.com$, > > request: "GET /announce?info_hash=" > > > > It seems there is a misconfiguration somewhere. Could you help me on that > > point? 
> > the misconfiguration is here: > $ host tracker.thepiratebay.org > tracker.thepiratebay.org has address 127.0.0.1 > > it's typical for disabled trackers to get pointed to localhost > (instead of pulled from dns completely). > nothing you can do about that really, at best some workaround to stop those > requests from ending up in your log. > (it's generally a good idea to set up a dummy default virtualhost that is > not > used in production just so all the spam ends up in it's log.) > > - T. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr.sikora at frickle.com Tue Feb 28 17:23:48 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 28 Feb 2012 18:23:48 +0100 Subject: [ANNOUNCE] ngx_slowfs_cache-1.8 Message-ID: Version 1.8 is now available at: http://labs.frickle.com/nginx_ngx_slowfs_cache/ GitHub repository is available at: http://github.com/FRiCKLE/ngx_slowfs_cache/ Changes: 2012-02-28 VERSION 1.8 * Fix setting of slowfs process title. In case when local path was over 277 characters long, slowfs process would crash, which would result in file not being copied to the cache. Bug had appeared in version 1.7. Best regards, Piotr Sikora < piotr.sikora at frickle.com > From belloni at imavis.com Tue Feb 28 17:27:58 2012 From: belloni at imavis.com (Cristiano Belloni) Date: Tue, 28 Feb 2012 18:27:58 +0100 Subject: writing an alias to a proxy with no buffering Message-ID: <4F4D0E9E.80009@imavis.com> Hi to everybody, beginner here. I have a location in my nginx configuration file that connects to a unix domain proxy to stream a real time video. What I would like is to make other locations point to the stream. It seems that the right option for this is the alias directive. So, I have a configuration like this: server { listen 80; server_name localhost; location /stream { proxy_buffering off; proxy_pass http://unix:/tmp/demo_socket:/; } location /mjpeg { alias /stream; } } but, while the /stream location does the right thing, the /mjpeg location seems to give back a 404: ~$ wget --server-response -qO- "http://192.168.1.88/mjpeg" HTTP/1.1 404 Not Found Server: nginx/1.0.12 Date: Tue, 28 Feb 2012 17:07:30 GMT Content-Type: text/html Content-Length: 169 Connection: keep-alive Do alias directives not work with the proxy module? If they don't, how could I accomplish this? (without redirecting, a lot of streaming clients don't support redirect.) Thank you, Cristiano. -- Belloni Cristiano Imavis Srl. www.imavis.com belloni at imavis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 28 18:29:01 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 28 Feb 2012 18:29:01 +0000 Subject: writing an alias to a proxy with no buffering In-Reply-To: <4F4D0E9E.80009@imavis.com> References: <4F4D0E9E.80009@imavis.com> Message-ID: <20120228182901.GB4114@craic.sysops.org> On Tue, Feb 28, 2012 at 06:27:58PM +0100, Cristiano Belloni wrote: Hi there, > I have a location in my nginx configuration file that connects to a unix > domain proxy to stream a real time video. What I would like is to make > other locations point to the stream. It seems that the right option for > this is the alias directive. "alias" is for static files rather than for proxying. 
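A typical use of alias is to map a location prefix onto a directory of files on disk -- a minimal sketch, with a hypothetical path:

location /images/ {
    alias /data/static/images/;
}

It never forwards the request to another location or to an upstream, so it cannot be used to point /mjpeg at the /stream proxy.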
> location /stream { > proxy_buffering off; > proxy_pass http://unix:/tmp/demo_socket:/; > } > > location /mjpeg { > alias /stream; > } Does location /mjpeg { proxy_buffering off; proxy_pass http://unix:/tmp/demo_socket:/; } do what you want? f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Feb 28 18:51:56 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 28 Feb 2012 18:51:56 +0000 Subject: How to log virtual server name In-Reply-To: References: <20120227062239.GA24213@nginx.com> <20120227233554.GA4114@craic.sysops.org> Message-ID: <20120228185156.GC4114@craic.sysops.org> On Tue, Feb 28, 2012 at 02:27:58AM +0000, Lucian D. Kafka wrote: Hi there, > Basically nginx logs correctly any virtual host name under the sun (specified or not under the server_name) - except one. This is exactly the same one it complains as having a conflict on startup. > "conflicting server name" should only appear when nginx has found a (case insensitive) duplicate server_name. Can you provide a (minimal?) config file that shows the problem for you? Cheers, f -- Francis Daly francis at daoine.org From mp3geek at gmail.com Tue Feb 28 21:38:45 2012 From: mp3geek at gmail.com (Ryan Brown) Date: Wed, 29 Feb 2012 10:38:45 +1300 Subject: [ANNOUNCE] Tengine-1.2.3 In-Reply-To: References: Message-ID: > OT: > Apache 2.4 is getting very hot these days. Some Apache guy even claimed it's > faster than Nginx. So I ran a simple benchmark. It turned out Nginx was > still the winner. More detailed information: > http://blog.zhuzhaoyuan.com/2012/02/apache-24-faster-than-nginx/ Does the benchmark change if you use the 1.1.x series? Also I was looking at setting up a Ramdisk (serving static files) does using a Ramdisk increase throughput? From brian at akins.org Tue Feb 28 21:56:00 2012 From: brian at akins.org (Brian Akins) Date: Tue, 28 Feb 2012 16:56:00 -0500 Subject: [ANNOUNCE] Tengine-1.2.3 In-Reply-To: References: Message-ID: On Tue, Feb 28, 2012 at 11:12 AM, Joshua Zhu wrote: > Apache 2.4 is getting very hot these days. Some Apache guy even claimed it's > faster than Nginx. Most of the benchmarks I've seen are suspect. Even the ones that claim nginx is faster. In our use case, it's not even close with nginx vs 2.2. I'll retest with 2.4, obviously. From brad at shub-internet.org Wed Feb 29 02:07:19 2012 From: brad at shub-internet.org (Brad Knowles) Date: Tue, 28 Feb 2012 20:07:19 -0600 Subject: http -> https redirection, with a twist? Message-ID: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> Folks, I've been trying to figure out how to set this up, I've gone through as much of the web site and wiki as I can, and I've searched on the net as much as I can. I'm still stumped. We have several different servers that we want to redirect from port 80, and most of them will land on the same machine but on port 443. However, one of those needs to land on a different port -- 8443. If this were a simple redirect without a twist, I'd probably go with something like the example shown at , although because I'm doing a one-to-one mapping of multiple FQDNs, and not just mapping a bunch of FQDNs to a single name, I'd be inclined to use an example more like this: server { server_name a.domain.com c.domain.com d.domain.com; # you can serve any number of redirects from here... listen 80; rewrite ^ https://$host$1$uri$is_arg$args permanent; } But with the single server definition listening to port 80 on all interfaces, I don't see how to make that one FQDN get redirected to port 8443 instead of port 443. 
Am I missing something obvious here? -- Brad Knowles LinkedIn Profile: From appa at perusio.net Wed Feb 29 02:20:13 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 29 Feb 2012 03:20:13 +0100 Subject: http -> https redirection, with a twist? In-Reply-To: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> References: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> Message-ID: <87y5rml6si.wl%appa@perusio.net> On 29 Fev 2012 03h07 CET, brad at shub-internet.org wrote: > Folks, > > I've been trying to figure out how to set this up, I've gone through > as much of the web site and wiki as I can, and I've searched on the > net as much as I can. I'm still stumped. > > We have several different servers that we want to redirect from port > 80, and most of them will land on the same machine but on port 443. > However, one of those needs to land on a different port -- 8443. > > If this were a simple redirect without a twist, I'd probably go with > something like the example shown at > , > although because I'm doing a one-to-one mapping of multiple FQDNs, > and not just mapping a bunch of FQDNs to a single name, I'd be > inclined to use an example more like this: > > server { server_name a.domain.com c.domain.com d.domain.com; # you > can serve any number of redirects from here... listen 80; rewrite ^ > https://$host$1$uri$is_arg$args permanent; } > > But with the single server definition listening to port 80 on all > interfaces, I don't see how to make that one FQDN get redirected to > port 8443 instead of port 443. > > Am I missing something obvious here? If I understood correctly. Try: server { server_name a.domain.com c.domain.com d.domain.com; listen 80; return 301 https://$host:8443$request_uri; } You can use a wildcard to match all subdomains. Perhaps it suits you: server { server_name *.domain.com; # this is more generic [1] listen 80; return 301 https://$host:8443$request_uri; } --- appa [1] http://nginx.org/en/docs/http/server_names.html#wildcard_names From brad at shub-internet.org Wed Feb 29 03:26:01 2012 From: brad at shub-internet.org (Brad Knowles) Date: Tue, 28 Feb 2012 21:26:01 -0600 Subject: http -> https redirection, with a twist? In-Reply-To: <87y5rml6si.wl%appa@perusio.net> References: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> <87y5rml6si.wl%appa@perusio.net> Message-ID: <4DE01239-6917-4089-9852-EF80EF527684@shub-internet.org> On Feb 28, 2012, at 8:20 PM, Ant?nio P. P. Almeida wrote: > If I understood correctly. Try: > > server { > server_name a.domain.com c.domain.com d.domain.com; > listen 80; > return 301 https://$host:8443$request_uri; > } That works for the one site that needs to be redirected to port 8443, but doesn't work for any of the other sites that should instead be redirected to port 443. I need both sets of redirects -- most to port 443, but one to port 8443 instead. > You can use a wildcard to match all subdomains. Perhaps it suits you: > > server { > server_name *.domain.com; # this is more generic [1] > listen 80; > return 301 https://$host:8443$request_uri; > } I would like to avoid wildcards because they're not going to happen in the real world (our list of sites that we serve is static), and I want to prevent redirects from happening for anything but the real sites that we do actually serve. 
The only queries that would be coming into us that would match the wildcard and would NOT match the static list of sites would be people who are fishing around for security vulnerabilities or other types of less intelligent robots. I don't want them causing any further load on our systems than we will already have. -- Brad Knowles LinkedIn Profile: From quintinpar at gmail.com Wed Feb 29 03:32:43 2012 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 29 Feb 2012 09:02:43 +0530 Subject: =?UTF-8?Q?Re=3A_Sharing_rate_limiting_data_between_multiple_nginx_LB?= =?UTF-8?Q?=E2=80=99s?= In-Reply-To: <20120228125831.GX67687@mdounin.ru> References: <20120228125831.GX67687@mdounin.ru> Message-ID: Thanks Maxim. On Tue, Feb 28, 2012 at 6:28 PM, Maxim Dounin wrote: > Hello! > > On Tue, Feb 28, 2012 at 06:05:51PM +0530, Quintin Par wrote: > > > Hi, > > > > Bumping up an old thread. Can someone please help me with this? > > There is no good solution. Simpliest one is to just use > per-frontend limits. > > Maxim Dounin > > > > > -Quintin > > > > On Tue, Feb 14, 2012 at 7:38 AM, Quintin Par > wrote: > > > > > Can someone help please... > > > -Quintin > > > > > > On Mon, Feb 13, 2012 at 1:11 PM, Quintin Par > wrote: > > > > > >> I have multiple nginx machines running and proxy LB through a round > robin > > >> DNS mechanism. > > >> > > >> I do rate limiting as follows > > >> > > >> limit_req_zone $binary_remote_addr zone=pw:30m rate=20r/m; > > >> > > >> location / { > > >> > > >> limit_req zone=pw burst=5 nodelay; > > >> > > >> But this is per machine. Can this data be shared between the load > > >> balancers so that rate limiting is global and I can scale out. > > >> > > >> -Quintin > > >> > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From appa at perusio.net Wed Feb 29 03:42:37 2012 From: appa at perusio.net (=?UTF-8?B?QW50w7NuaW8=?= P. P. Almeida) Date: Wed, 29 Feb 2012 04:42:37 +0100 Subject: http -> https redirection, with a twist? In-Reply-To: <4DE01239-6917-4089-9852-EF80EF527684@shub-internet.org> References: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> <87y5rml6si.wl%appa@perusio.net> <4DE01239-6917-4089-9852-EF80EF527684@shub-internet.org> Message-ID: <87vcmql2z5.wl%appa@perusio.net> On 29 Fev 2012 04h26 CET, brad at shub-internet.org wrote: > On Feb 28, 2012, at 8:20 PM, Ant?nio P. P. Almeida wrote: > >> If I understood correctly. Try: >> >> server { >> server_name a.domain.com c.domain.com d.domain.com; >> listen 80; >> return 301 https://$host:8443$request_uri; >> } > > That works for the one site that needs to be redirected to port > 8443, but doesn't work for any of the other sites that should > instead be redirected to port 443. > > I need both sets of redirects -- most to port 443, but one to port > 8443 instead. Then just define two server blocks. One redirects to 443 and the other to 8443. server { server_name a.domain.com c.domain.com d.domain.com; # redirect to 8443 listen 80; return 301 https://$host:8443$request_uri; } server { server_name e.domain.com f.domain.com g.domain.com; # redirect to 443 listen 80; return 301 https://$host$request_uri; } split the server blocks according to the redirect you want. 
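(If requests for hostnames outside those lists should not be redirected at all, a catch-all default server can simply drop them -- a sketch along the lines of the block discussed earlier in the list:

server {
    listen 80 default_server;
    server_name _;
    return 444;
}

)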
Alternatively you could use map and a single server block. At the http level. map $host $redirect_port { hostnames; default 443; f.domain.com 8443; # this is the domain that redirects to 8443 } server { server_name a.domain.com c.domain.com d.domain.com f.domain.com; # list all domains listen 80; return 301 https://$host:$redirect_port$request_uri; } --- appa From brad at shub-internet.org Wed Feb 29 03:45:31 2012 From: brad at shub-internet.org (Brad Knowles) Date: Tue, 28 Feb 2012 21:45:31 -0600 Subject: http -> https redirection, with a twist? In-Reply-To: <87vcmql2z5.wl%appa@perusio.net> References: <1DEB84AA-A41A-4294-A871-B6265CDFCDDF@shub-internet.org> <87y5rml6si.wl%appa@perusio.net> <4DE01239-6917-4089-9852-EF80EF527684@shub-internet.org> <87vcmql2z5.wl%appa@perusio.net> Message-ID: <1ED82E9C-27C3-42A2-AA13-5735A735189C@shub-internet.org> On Feb 28, 2012, at 9:42 PM, Ant?nio P. P. Almeida wrote: > Then just define two server blocks. One redirects to 443 and the other > to 8443. I didn't realize that you could have multiple server blocks that were listening to the same port. That was the piece I was missing! > Alternatively you could use map and a single server block. At the http > level. > > map $host $redirect_port { > hostnames; > default 443; > f.domain.com 8443; # this is the domain that redirects to 8443 > } > > server { > server_name a.domain.com c.domain.com d.domain.com f.domain.com; # list all domains > listen 80; > return 301 https://$host:$redirect_port$request_uri; > } Ahh, that's very cool, too. Now I have two solutions for just the one problem. Thanks! -- Brad Knowles LinkedIn Profile: From simone.fumagalli at contactlab.com Wed Feb 29 09:38:26 2012 From: simone.fumagalli at contactlab.com (Simone Fumagalli) Date: Wed, 29 Feb 2012 10:38:26 +0100 Subject: Possible causes for request time taking twice as long as upstream response time In-Reply-To: <4D4B9B4F.1060604@cerego.com> References: <4D4B9B4F.1060604@cerego.com> Message-ID: <4F4DF212.6020406@contactlab.com> On 02/04/2011 07:23 AM, Zev Blut wrote: > While looking at these logs I can see that sometimes these numbers have a fairly large delta. > For example one request's request_time is 1.553 while the upstream_response_time is 0.864. Hello, did you find any more info on this ? I was also doing tuning on my NGINX setup and I also wondering about this delta. Regards. -- Simone From belloni at imavis.com Wed Feb 29 11:13:14 2012 From: belloni at imavis.com (Cristiano Belloni) Date: Wed, 29 Feb 2012 12:13:14 +0100 Subject: writing an alias to a proxy with no buffering In-Reply-To: <20120228182901.GB4114@craic.sysops.org> References: <4F4D0E9E.80009@imavis.com> <20120228182901.GB4114@craic.sysops.org> Message-ID: <4F4E084A.5070005@imavis.com> On 02/28/2012 07:29 PM, Francis Daly wrote: > location /mjpeg { > proxy_buffering off; > proxy_passhttp://unix:/tmp/demo_socket:/; > } It works, thank you. Can I ask how to translate location /stream0 to /stream?s=0 -- Belloni Cristiano Imavis Srl. www.imavis.com belloni at imavis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 29 11:28:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Feb 2012 15:28:24 +0400 Subject: Possible causes for request time taking twice as long as upstream response time In-Reply-To: <4F4DF212.6020406@contactlab.com> References: <4D4B9B4F.1060604@cerego.com> <4F4DF212.6020406@contactlab.com> Message-ID: <20120229112824.GA67687@mdounin.ru> Hello! 
On Wed, Feb 29, 2012 at 10:38:26AM +0100, Simone Fumagalli wrote: > On 02/04/2011 07:23 AM, Zev Blut wrote: > > While looking at these logs I can see that sometimes these numbers have a fairly large delta. > > For example one request's request_time is 1.553 while the upstream_response_time is 0.864. > > Hello, did you find any more info on this ? I was also doing > tuning on my NGINX setup and I also wondering about this delta. $request_time is a total time of request processing, including reading request from a client and sending response to the client. $upstream_response_time is a time of obtaining response from an upstream server. The $request_time variable is expected to be always larger than $upstream_response_time one as in addition to $upstream_response_time it also includes at least a) reading request from a client and b) sending (buffered part of) the response to the client. Both (a) and (b) may take a while. Maxim Dounin From r at roze.lv Wed Feb 29 11:55:42 2012 From: r at roze.lv (Reinis Rozitis) Date: Wed, 29 Feb 2012 13:55:42 +0200 Subject: dav module temp files Message-ID: <3C1C33B5FA39499B8C2AFF8FE27BF799@DD21> Hello, is there a technical reason why the WebDAV module (for PUTs) always makes a temp file rather than use/honour memory buffer (client_body_buffer_size)? wbr rr From nginx-forum at nginx.us Wed Feb 29 14:17:55 2012 From: nginx-forum at nginx.us (DenisTRUFFAUT) Date: Wed, 29 Feb 2012 09:17:55 -0500 (EST) Subject: icc access (was Re: Using nginx 1.1 with the intel compiler) In-Reply-To: References: Message-ID: <6b2b20ee8ab99ec6d335409ba46d1a9d.NginxMailingListEnglish@forum.nginx.org> Yep, icc is free to download. http://software.intel.com/en-us/articles/non-commercial-software-download/ # Intel C++ Compiler (ICC) # ------------------------------------------------------------------------------ # http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/cpp/mac/man/icc.txt # http://web.eecs.utk.edu/~lucio/pet/compilerguides/intel-compiler-guide.htm # ICC - From +10% (PHP) to +150% (MySQL) # - http://www.mysqlperformanceblog.com/files/presentations/LinuxWorld2005-Intel.pdf # - http://blog.mudy.info/2009/02/speedup-mysql-and-webserver-with-intel-compiler-and-tcmalloc/ # - http://mysqlha.blogspot.com/2009/01/double-sysbench-throughput-with_18.html # C : icc -fast # C++ : icpc -fast # Fortran : ifort -fast # Documentation : /home/intel/l_ccompxe_intel64_2011.8.273/Documentation/en_US/documentation_c.htm # Videos : www.intel.com/software/products # Support account : https://registrationcenter.intel.com/RegCenter/registerexpress.aspx?clientsn=YOURKEY #sudo cp -fr /www/l_ccompxe_intel64_2011.8.273.tgz /usr/local/src/l_ccompxe_intel64_2011.8.273.tgz # cd /usr/local/src # sudo rm -fr l_ccompxe_intel64_2011.8.273 # sudo wget -O l_ccompxe_intel64_2011.8.273.tgz "http://www.denistruffaut.com/downloads/l_ccompxe_intel64_2011.8.273.tgz" # sudo tar -xvzf l_ccompxe_intel64_2011.8.273.tgz # sudo rm -fr l_ccompxe_intel64_2011.8.273.tgz # cd l_ccompxe_intel64_2011.8.273 # sudo ./install.sh && cd /usr/local/src # sudo rm -fr l_ccompxe_intel64_2011.8.273 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222843,223135#msg-223135 From mdounin at mdounin.ru Wed Feb 29 14:25:18 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Feb 2012 18:25:18 +0400 Subject: icc access (was Re: Using nginx 1.1 with the intel compiler) In-Reply-To: <6b2b20ee8ab99ec6d335409ba46d1a9d.NginxMailingListEnglish@forum.nginx.org> References: 
<6b2b20ee8ab99ec6d335409ba46d1a9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20120229142517.GF67687@mdounin.ru> Hello! On Wed, Feb 29, 2012 at 09:17:55AM -0500, DenisTRUFFAUT wrote: > Yep, icc is free to download. > > http://software.intel.com/en-us/articles/non-commercial-software-download/ Intel's license FAQ clearly states that non-commercial version can't be used even for free open-source products if one provide paid technical support: http://software.intel.com/en-us/articles/non-commercial-software-faq/#13 That is, we can't use it. Maxim Dounin From mdounin at mdounin.ru Wed Feb 29 14:27:02 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Feb 2012 18:27:02 +0400 Subject: dav module temp files In-Reply-To: <3C1C33B5FA39499B8C2AFF8FE27BF799@DD21> References: <3C1C33B5FA39499B8C2AFF8FE27BF799@DD21> Message-ID: <20120229142702.GG67687@mdounin.ru> Hello! On Wed, Feb 29, 2012 at 01:55:42PM +0200, Reinis Rozitis wrote: > Hello, > is there a technical reason why the WebDAV module (for PUTs) always > makes a temp file rather than use/honour memory buffer > (client_body_buffer_size)? It will use memory buffer for intermediate data, but as it needs file on disk anyway - data are always written to a file. Maxim Dounin From nginx-forum at nginx.us Wed Feb 29 14:38:14 2012 From: nginx-forum at nginx.us (jamessinton) Date: Wed, 29 Feb 2012 09:38:14 -0500 (EST) Subject: Strip Character from Server Variable Message-ID: Hi all, I am trying to do something that sounds very simple and yet has me perplexed. I need to strip the leading / from the $uri variable in my nginx.conf. I need to then pass that modified variable to ProxyPass. I can't for the life of me figure out how to this in the nginx.conf syntax. Thanks in advance. James Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223166,223166#msg-223166 From ne at vbart.ru Wed Feb 29 14:52:40 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 29 Feb 2012 18:52:40 +0400 Subject: Strip Character from Server Variable In-Reply-To: References: Message-ID: <201202291852.40699.ne@vbart.ru> On Wednesday 29 February 2012 18:38:14 jamessinton wrote: > Hi all, > > I am trying to do something that sounds very simple and yet has me > perplexed. I need to strip the leading / from the $uri variable in my > nginx.conf. I need to then pass that modified variable to ProxyPass. > > I can't for the life of me figure out how to this in the nginx.conf > syntax. > > Thanks in advance. > map $uri $uri_stripped { default $uri; ~^/(?P.*)$ $s; } http://wiki.nginx.org/HttpMapModule or if ($uri ~ "^/(.*)$") { set $uri_stripped $1; } http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Feb 29 14:55:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Feb 2012 18:55:00 +0400 Subject: nginx-1.1.16 Message-ID: <20120229145459.GH67687@mdounin.ru> Changes with nginx 1.1.16 29 Feb 2012 *) Change: the simultaneous subrequest limit has been raised to 200. *) Feature: the "from" parameter of the "disable_symlinks" directive. *) Feature: the "return" and "error_page" directives can be used to return 307 redirections. *) Bugfix: a segmentation fault might occur in a worker process if the "resolver" directive was used and there was no "error_log" directive specified at global level. Thanks to Roman Arutyunyan. 
*) Bugfix: a segmentation fault might occur in a worker process if the "proxy_http_version 1.1" or "fastcgi_keep_conn on" directives were used. *) Bugfix: memory leaks. Thanks to Lanshun Zhou. *) Bugfix: in the "disable_symlinks" directive. *) Bugfix: on ZFS filesystem disk cache size might be calculated incorrectly; the bug had appeared in 1.0.1. *) Bugfix: nginx could not be built by the icc 12.1 compiler. *) Bugfix: nginx could not be built by gcc on Solaris; the bug had appeared in 1.1.15. Maxim Dounin From ne at vbart.ru Wed Feb 29 14:57:35 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Wed, 29 Feb 2012 18:57:35 +0400 Subject: Strip Character from Server Variable In-Reply-To: <201202291852.40699.ne@vbart.ru> References: <201202291852.40699.ne@vbart.ru> Message-ID: <201202291857.35924.ne@vbart.ru> On Wednesday 29 February 2012 18:52:40 Valentin V. Bartenev wrote: > On Wednesday 29 February 2012 18:38:14 jamessinton wrote: > > Hi all, > > > > I am trying to do something that sounds very simple and yet has me > > perplexed. I need to strip the leading / from the $uri variable in my > > nginx.conf. I need to then pass that modified variable to ProxyPass. > > > > I can't for the life of me figure out how to this in the nginx.conf > > syntax. > > > > Thanks in advance. > > map $uri $uri_stripped { > default $uri; > ~^/(?P.*)$ $s; > } > > http://wiki.nginx.org/HttpMapModule > > or > > if ($uri ~ "^/(.*)$") { > set $uri_stripped $1; > } > > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if > http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set > or, maybe you just want this: location ~ ^/(.*)$ { ... fastcgi_param SOME $1; # just for example } wbr, Valentin V. Bartenev From jerome.m at gmail.com Wed Feb 29 15:34:15 2012 From: jerome.m at gmail.com (J M) Date: Wed, 29 Feb 2012 10:34:15 -0500 Subject: Stub_status explenation needed In-Reply-To: References: Message-ID: follow up question.. if you a polling it every min and getting 300 "Requests" is it per "Active" connection? tia, On Wed, Feb 22, 2012 at 3:55 AM, piotr.pawlowski wrote: > And everything is clear now, thank you Maxim ! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,222771,222834#msg-222834 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 29 15:44:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Feb 2012 19:44:48 +0400 Subject: Stub_status explenation needed In-Reply-To: References: Message-ID: <20120229154448.GO67687@mdounin.ru> Hello! On Wed, Feb 29, 2012 at 10:34:15AM -0500, J M wrote: > follow up question.. > > if you a polling it every min and getting 300 "Requests" is it per "Active" > connection? ENOPARSE, sorry. "Active" is about number of currently established connections, and "Requests" is about requests seen in the past. These numbers are mostly unrelated. 
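For example, with hypothetical numbers and one-minute polling: if the requests counter -- the third figure on the "server accepts handled requests" line -- reads 10000 on one poll and 10300 on the next, then roughly 300 requests were handled during that minute, however many connections happened to be "Active" at either moment.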
Maxim Dounin From francis at daoine.org Wed Feb 29 19:26:30 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 29 Feb 2012 19:26:30 +0000 Subject: writing an alias to a proxy with no buffering In-Reply-To: <4F4E084A.5070005@imavis.com> References: <4F4D0E9E.80009@imavis.com> <20120228182901.GB4114@craic.sysops.org> <4F4E084A.5070005@imavis.com> Message-ID: <20120229192630.GD4114@craic.sysops.org> On Wed, Feb 29, 2012 at 12:13:14PM +0100, Cristiano Belloni wrote: > On 02/28/2012 07:29 PM, Francis Daly wrote: Hi there, > It works, thank you. Can I ask how to translate location /stream0 to > /stream?s=0 If you mean "the client asks nginx for /stream0 and nginx asks the proxy_pass host for /stream?s=0", then location = /stream0 { proxy_buffering off; proxy_pass http://unix:/tmp/demo_socket:/stream?s=0; } should work. (But you may see odd things if the client asks for /stream0?query=string.) I suspect that you will be better served by advertising the urls that the proxy_pass host recognises, if that is possible. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 29 19:30:47 2012 From: nginx-forum at nginx.us (crazedfred) Date: Wed, 29 Feb 2012 14:30:47 -0500 (EST) Subject: Would like to implement WebSocket support In-Reply-To: References: Message-ID: <5f765764bdc8fc1d7f3906e4433195bb.NginxMailingListEnglish@forum.nginx.org> Alexandr Gomoliako Wrote: ------------------------------------------------------- > On 2/2/12, Andr? Caron > wrote: > > > NGINX modules[2]. After initial reading, I > understand that I need to write > > an Upstream (proxy) handler. Is this correct? > > Not really. > > > The HTTP proxy module has a scary note that > says: > > > >> Note that when using the HTTP Proxy Module (or > even when using FastCGI), > >> the entire client request will be buffered in > nginx before being passed on to > >> the backend proxied servers. > > > > Is this a limitation cause by NGINX's > architecture, or is this by design > > (e.g. for validation of body against headers, > etc.)? > > It just means that you can't use existing upstream > modules and > upstream interface. > > > The bigger problem, however, is that there is no > standard interface to > > application servers for this new WebSocket > protocol. There is some > > discussion[3] on an Apache enhancement request > that basically proposes a > > modification of CGI. Since CGI has already been > demonstrated to be a > > performance problem, I'm looking for an > alternate solution, maybe something > > closer to SCGI? Anyone have suggestions? > > I think what you need here is a simple protocol > upgrade functionality > that switches to tcp proxying for particular > connection once it > encounters upgrade in connection header. And > everything else is up to > application server. > So you don't really need to parse websocket > protocol in nginx unless > it is your application server. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Any progress on this? I would be very interested in a plugin that brings websocket support. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221884,223186#msg-223186 From wmark+nginx at hurrikane.de Wed Feb 29 19:43:24 2012 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Wed, 29 Feb 2012 20:43:24 +0100 Subject: LZ4 + nginx In-Reply-To: References: <201202201326.42065.ne@vbart.ru> Message-ID: You could use BTRFS with its "compression=lzo" mount-option, available since Linux 2.6.38. [1] I don't know if LZ4 and Snappy will be integrated into Linux 3.4, but you can try them out by pulling changes from [4] http://repo.or.cz/w/linux-2.6/btrfs-unstable.git (dev/compression-squad branch) Compared to LZO, LZ4 doesn't show any measurable performance increase for read-only access. -- Mark [1] https://btrfs.wiki.kernel.org/ [2] http://lwn.net/Articles/411577/ [3] http://www.phoronix.com/scan.php?page=article&item=btrfs_lzo_2638&num=2 [4] http://www.mail-archive.com/linux-btrfs at vger.kernel.org/msg14884.html On 21 February 2012 12:32, Ryan Brown wrote: > sorry, I assumed the decryption would've been done by nginx rather > than the browser > > On Mon, Feb 20, 2012 at 10:26 PM, Valentin V. Bartenev wrote: >> On Monday 20 February 2012 04:34:24 Ryan Brown wrote: >>> Just a feature request, >>> >>> Would be nice to have nginx support for LZ4 (like gzip static >>> support), to have an alternative compression method built in.. >>> >>> http://code.google.com/p/lz4/ >>> >> >> Are there any browser that supports it? >> >> wbr, Valentin V. Bartenev >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From adrian at navarro.at Wed Feb 29 19:59:03 2012 From: adrian at navarro.at (=?UTF-8?Q?Adri=C3=A1n_Navarro?=) Date: Wed, 29 Feb 2012 20:59:03 +0100 Subject: Uploads with nginx 1.0.12 Message-ID: Hello, I am using file uploads with nginx 1.0.12, php5-fpm and php 5.3.10. Currently, big file uploads (~1300 MB) do take about 30 seconds to process after the file has been uploaded, and the CPU load spikes. Is there a way to prevent that? I am using a very simple script (just a var_dump($_FILES), nothing more, just debugging) and the config looks like the following: location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_pass_request_body off; client_body_in_file_only clean; fastcgi_param REQUEST_BODY_FILE $request_body_file; fastcgi_send_timeout 120; fastcgi_read_timeout 120; fastcgi_index index.php; include fastcgi_params; }
> I am using a very simple script (just a var_dump($_FILES), nothing > more, Assuming the spike is caused by PHP, you might take a look at the nginx_upload module, which will handle the entire upload process for you, and just hand your PHP script a path to the uploaded file (along with a few other parameters). Here's some references: http://blog.martinfjordvald.com/2010/08/file-uploading-with-php-and-nginx/ http://brainspl.at/articles/2008/07/20/nginx-upload-module https://github.com/vkholodkov/nginx-upload-module/tree/2.2 Regards, Cliff From adrian at navarro.at Wed Feb 29 20:48:58 2012 From: adrian at navarro.at (=?UTF-8?Q?Adri=C3=A1n_Navarro?=) Date: Wed, 29 Feb 2012 21:48:58 +0100 Subject: Uploads with nginx 1.0.12 In-Reply-To: <1330547751.2149.15.camel@portable-evil> References: <1330547751.2149.15.camel@portable-evil> Message-ID: Indeed, the high load is caused by php-fpm. I was wondering if there was any alternative config to keep fpm from spiking (like, any built-in helper to pass the final location without any plugin?). I will look into the compiled module. I'm not very comfortable with compiling it on the cluster of servers which were previously using dotdeb, but it's better than nothing (uh, tips to keep the debian specific structure when compiling?). Thanks for pointing me out! -Adrian ---------- Forwarded message ---------- From: Cliff Wells Date: Wed, Feb 29, 2012 at 9:35 PM Subject: Re: Uploads with nginx 1.0.12 To: nginx at nginx.org On Wed, 2012-02-29 at 20:59 +0100, Adri?n Navarro wrote: > Hello, > > > I am using file uploads with nginx 1.0.12, php5-fpm and php 5.3.10. > > > Currently, big file uploads (~1300 MB) do take about 30 seconds to > process after the file is being uploaded, and the CPU load spikes. Is > there a way to prevent that? I assume that since the spike occurs *after* the upload, the culprit is PHP, not Nginx? > I am using a very simple script (just a var_dump($_FILES), nothing > more, Assuming the spike is caused by PHP, you might take a look at the nginx_upload module, which will handle the entire upload process for you, and just hand your PHP script a path to the uploaded file (along with a few other parameters). Here's some references: http://blog.martinfjordvald.com/2010/08/file-uploading-with-php-and-nginx/ http://brainspl.at/articles/2008/07/20/nginx-upload-module https://github.com/vkholodkov/nginx-upload-module/tree/2.2 Regards, Cliff _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Adri?n Navarro / (+34) 608 831 094 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Wed Feb 29 21:02:05 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 29 Feb 2012 16:02:05 -0500 Subject: nginx-1.1.16 In-Reply-To: <20120229145459.GH67687@mdounin.ru> References: <20120229145459.GH67687@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.1.16 For Windows http://goo.gl/jKhfv (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Thank you, Kevin -- Kevin Worthington kworthington *@~ #gmail} [dot) {com] http://www.kevinworthington.com/ http://twitter.com/kworthington On Wed, Feb 29, 2012 at 9:55 AM, Maxim Dounin wrote: > Changes with nginx 1.1.16 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?29 Feb 2012 > > ? 
?*) Change: the simultaneous subrequest limit has been raised to 200. > > ? ?*) Feature: the "from" parameter of the "disable_symlinks" directive. > > ? ?*) Feature: the "return" and "error_page" directives can be used to > ? ? ? return 307 redirections. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "resolver" directive was used and there was no "error_log" directive > ? ? ? specified at global level. > ? ? ? Thanks to Roman Arutyunyan. > > ? ?*) Bugfix: a segmentation fault might occur in a worker process if the > ? ? ? "proxy_http_version 1.1" or "fastcgi_keep_conn on" directives were > ? ? ? used. > > ? ?*) Bugfix: memory leaks. > ? ? ? Thanks to Lanshun Zhou. > > ? ?*) Bugfix: in the "disable_symlinks" directive. > > ? ?*) Bugfix: on ZFS filesystem disk cache size might be calculated > ? ? ? incorrectly; the bug had appeared in 1.0.1. > > ? ?*) Bugfix: nginx could not be built by the icc 12.1 compiler. > > ? ?*) Bugfix: nginx could not be built by gcc on Solaris; the bug had > ? ? ? appeared in 1.1.15. > > > Maxim Dounin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From adrian at navarro.at Wed Feb 29 21:03:00 2012 From: adrian at navarro.at (=?UTF-8?Q?Adri=C3=A1n_Navarro?=) Date: Wed, 29 Feb 2012 22:03:00 +0100 Subject: Get connected IPs and request addresses Message-ID: Hello, We are currently serving files using X-Accel-Redirect. Previously, we used lighttpd and we parsed the server-status page to retrieve IPs along with the request and GET query: 192.168.10.11 0/0 157998/150305212 write 7 host /get.php?reference=xxxx (We need to store the reference and the IP and track them as long as it's still writing to the connection) Is there a similar way to do such thing in nginx? Modules and other solutions with the same result are welcome. Thank you for your time! -Adri?n From cliff at develix.com Wed Feb 29 21:31:04 2012 From: cliff at develix.com (Cliff Wells) Date: Wed, 29 Feb 2012 13:31:04 -0800 Subject: Uploads with nginx 1.0.12 In-Reply-To: References: <1330547751.2149.15.camel@portable-evil> Message-ID: <1330551064.2149.31.camel@portable-evil> On Wed, 2012-02-29 at 21:48 +0100, Adri?n Navarro wrote: > Indeed, the high load is caused by php-fpm. I was wondering if there > was any alternative config to keep fpm from spiking (like, any > built-in helper to pass the final location without any plugin?). > > > I will look into the compiled module. I'm not very comfortable with > compiling it on the cluster of servers which were previously using > dotdeb, but it's better than nothing (uh, tips to keep the debian > specific structure when compiling?). If you are comfortable building .deb packages, you could just rebuild the .deb with the additional module built in. Alternatively, you could use the output of "nginx -V" to get a list of compile-time options and use those when building from source. In either case, I would make an archive of your /etc/nginx directory just to be safe. I'd also strongly suggest doing a test build outside of your cluster. In general, building Nginx from source is fairly simple, but always better to be cautious. Regards, Cliff From ne at vbart.ru Wed Feb 29 21:42:13 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Thu, 1 Mar 2012 01:42:13 +0400 Subject: Uploads with nginx 1.0.12 In-Reply-To: <1330547751.2149.15.camel@portable-evil> References: <1330547751.2149.15.camel@portable-evil> Message-ID: <201203010142.13723.ne@vbart.ru> On Thursday 01 March 2012 00:35:51 Cliff Wells wrote: > On Wed, 2012-02-29 at 20:59 +0100, Adrián Navarro wrote: > > Hello, > > > > > > I am using file uploads with nginx 1.0.12, php5-fpm and php 5.3.10. > > > > > > Currently, big file uploads (~1300 MB) do take about 30 seconds to > > process after the file is being uploaded, and the CPU load spikes. Is > > there a way to prevent that? > > I assume that since the spike occurs *after* the upload, the culprit is > PHP, not Nginx? > > > I am using a very simple script (just a var_dump($_FILES), nothing > > more, > > Assuming the spike is caused by PHP, you might take a look at the > nginx_upload module, which will handle the entire upload process for > you, and just hand your PHP script a path to the uploaded file (along > with a few other parameters). > Actually, fastcgi_pass_request_body off; client_body_in_file_only clean; fastcgi_param REQUEST_BODY_FILE $request_body_file; should do approximately the same. wbr, Valentin V. Bartenev From ruslan at rockiesoft.com Wed Feb 29 22:39:10 2012 From: ruslan at rockiesoft.com (Ruslan Dautkhanov) Date: Wed, 29 Feb 2012 15:39:10 -0700 Subject: nginx load balancing Tomcat servers Message-ID: Hello, What is the right way to set up true load balancing across a few Tomcat servers? By "true" I mean taking into account at least the CPU load of those nodes, not just round-robin rotation. I've looked at 3rd-party modules, but it doesn't look like there is one. Thank you, Ruslan -------------- next part -------------- An HTML attachment was scrubbed... URL: