From igor at sysoev.ru Thu Nov 1 05:17:25 2012
From: igor at sysoev.ru (Igor Sysoev)
Date: Thu, 1 Nov 2012 09:17:25 +0400
Subject: Incorrect SSL cert chain build order used/required by nginx 1.3.8 ?
In-Reply-To: <1351727246.12496.140661147970765.741EC5D0@webmail.messagingengine.com>
References: <1351727246.12496.140661147970765.741EC5D0@webmail.messagingengine.com>
Message-ID: 

On Nov 1, 2012, at 3:47, chiterri at operamail.com wrote:

> I'm running nginx/1.3.8 on linux/64.
>
> I'm installing a commercial cert in nginx (Comodo Essential SSL).
>
> When I build the SSL chain in order per instructions from Comodo (Root
> -> Intermediate(s)),
>
> https://comodosslstore.com/blog/how-do-i-make-my-own-bundle-file-from-crt-files.html
>
> I do
>
> cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt
> cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt
> cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt
> cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt
> cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt
>
> If I use this CHAIN'd cert in my nginx conf,
>
> ssl on;
> ssl_verify_client off;
> ssl_certificate "/path/to/my.domain.com.CHAIN.crt";
> ssl_certificate_key "/path/to/my.domain.com.key";
>
> and start nginx, it fails,
>
> ==> error.log <==
> 2012/10/31 16:36:44 [emerg] 8666#0:
> SSL_CTX_use_PrivateKey_file("/path/to/my.domain.com.key") failed
> (SSL: error:0B080074:x509 certificate
> routines:X509_check_private_key:key values mismatch)
>
> If I simply switch the cert CHAIN build order, so the personal site crt
> is *first*,
>
> + cat STAR_domain.com.crt > my.domain.com.CHAIN.crt
> - cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt
> + cat AddTrustExternalCARoot.crt >> my.domain.com.CHAIN.crt
> cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt
> cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt
> cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt
> - cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt
>
> then start nginx, it starts correctly, with no
> error. The site's
> accessible from most locations.
>
> But a check with
>
> https://www.ssllabs.com/ssltest/index.html
>
> returns/reports
>
> "Chain issues Incorrect order"
>
> I'd like to get nginx to accept/use the correct/instructed CHAIN order
> so that it starts up correctly AND is reported 'correct order' by
> testing sites.
>
> Is this a config issue on my end -- either nginx or the cert build?
> Or a bug?

http://nginx.org/en/docs/http/configuring_https_servers.html#chains

cat STAR_domain.com.crt EssentialSSLCA_2.crt ComodoUTNSGCCA.crt UTNAddTrustSGCCA.crt AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt

--
Igor Sysoev
http://nginx.com/support.html

From yaoweibin at gmail.com Thu Nov 1 05:44:29 2012
From: yaoweibin at gmail.com (Weibin Yao)
Date: Thu, 1 Nov 2012 13:44:29 +0800
Subject: consistent hashing using split_clients
In-Reply-To: <92a9002a80000f041bbe90480856003a.NginxMailingListEnglish@forum.nginx.org>
References: <20121031145047.GC40452@mdounin.ru> <92a9002a80000f041bbe90480856003a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

You can have a look at Tengine. We are developing the consistent hash
module: https://github.com/taobao/tengine/pull/68

This module is currently used in about 200 production servers. If you like
it, we can give you the pure Nginx consistent hash module.

2012/11/1 rmalayter 

> Maxim Dounin Wrote:
> >
> > Percentage values are stored in fixed point with 2 digits after
> > the point. Configuration parsing will complain if you'll try to
> > specify more digits after the point.
> >
> > > How many "buckets" does the hash table for split_clients
> > > have (it doesn't seem to be configurable)?
> >
> > The split_clients algorithm doesn't use buckets, as it's not a
> > hash table. Instead, it calculates hash function of the
> > original value, and selects resulting value based on a hash
> > function result. See http://nginx.org/r/split_clients for
> > details.
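To make the directive Maxim describes concrete, here is a minimal sketch of a split_clients block used to pick an upstream group. All names, addresses, and percentages below are hypothetical, not from the thread:

```nginx
# http-level context. The hash of $request_uri picks a branch;
# percentages are fixed-point with at most two digits after the point.
upstream backend_a { server 10.0.0.1:80; }
upstream backend_b { server 10.0.0.2:80; }
upstream backend_c { server 10.0.0.3:80; }

split_clients "${request_uri}" $backend_pool {
    33.33%  backend_a;
    33.33%  backend_b;
    *       backend_c;   # everything not covered above
}

server {
    listen 80;
    location / {
        proxy_pass http://$backend_pool;
    }
}
```

As the rest of this thread discusses, changing the set of branches redistributes the percentage ranges, which is what makes this approach unsuitable as a consistent hash.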
> >
>
> So clearly I am down the wrong path here, and split_clients just cannot do
> what I need. I will have to rethink things.
>
> The 3rd-party ngx_http_consistent_hash module appears to be un-maintained,
> un-commented. It also uses binary search to find an upstream instead of a
> hash table, making it O(log(n)) for each request. My C skills haven't been
> used in anger since about 1997, so updating or maintaining it myself would
> probably be a fruitless exercise.
>
> Perhaps I will have to fall back to using perl to get a hash bucket for the
> time being. I assume 4096 upstreams is not a problem for nginx given that
> it is used widely by CDNs.
>
> A long time ago Igor mentioned he was working on a variable-based upstream
> hashing module using MurmurHash3:
> http://forum.nginx.org/read.php?29,212712,212739#msg-212739
>
> I suppose other work took priority. Maybe Igor has some code stashed
> somewhere that just needs testing and polishing.
>
> If not, it seems that the current "ip_hash" scheme used in nginx could be
> easily adapted to fast consistent hashing by simply:
> - using MurmurHash3 or similar instead of the current simple
>   multiply+modulo scheme
> - allowing arbitrary nginx variables as hash input instead of just the IP
>   address during upstream selection
> - at initialization, utilizing a hash table of 4096 or whatever
>   configurable number of buckets
> - filling the hash table by sorting the server array on
>   murmurhash3(bucket_number + server_name + server_weight_counter) and
>   taking the first server
>
> Is there a mechanism for sponsoring development along these lines and
> getting it into the official nginx distribution? Consistent hashing is the
> one commonly-used proxy server function that nginx seems to be missing.
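The bucket-table scheme sketched in the list above can be illustrated in a few lines of Python. This is only a sketch of the idea (highest hash wins per bucket), using MD5 from the standard library as a stand-in for MurmurHash3 and ignoring server weights; none of it comes from an actual nginx module:

```python
import hashlib

def bucket_hash(bucket, server):
    # Stand-in for MurmurHash3: any well-mixed hash works for this sketch.
    data = ("%d:%s" % (bucket, server)).encode()
    return int.from_bytes(hashlib.md5(data).digest()[:8], "big")

def build_table(servers, buckets=4096):
    # For each bucket, "sort" the servers on hash(bucket_number + server_name)
    # and take the first one -- i.e. the server with the highest hash wins.
    return [max(servers, key=lambda s: bucket_hash(b, s)) for b in range(buckets)]

def pick(table, key):
    # At request time: hash the chosen input variable and index the table.
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
    return table[h % len(table)]
```

The consistency property falls out directly: removing one server only remaps the buckets that server owned, because the per-bucket maximum over the remaining servers is unchanged everywhere else.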
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,232428,232434#msg-232434
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

--
Weibin Yao
Developer @ Server Platform Team of Taobao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ar at xlrs.de Thu Nov 1 07:29:27 2012
From: ar at xlrs.de (Axel)
Date: Thu, 01 Nov 2012 08:29:27 +0100
Subject: Incorrect SSL cert chain build order used/required by nginx 1.3.8 ?
In-Reply-To: <1351727246.12496.140661147970765.741EC5D0@webmail.messagingengine.com>
References: <1351727246.12496.140661147970765.741EC5D0@webmail.messagingengine.com>
Message-ID: <509224D7.5050902@xlrs.de>

Hi,

I use portecle ( http://portecle.sourceforge.net/ ) to examine ssl
certificates.

Rgds, Axel

On 01.11.2012 00:47, chiterri at operamail.com wrote:
> I'm running nginx/1.3.8 on linux/64.
>
> I'm installing a commercial cert in nginx (Comodo Essential SSL).
>
> When I build the SSL chain in order per instructions from Comodo (Root
> -> Intermediate(s)),
>
> https://comodosslstore.com/blog/how-do-i-make-my-own-bundle-file-from-crt-files.html
>
> I do
>
> cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt
> cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt
> cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt
> cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt
> cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt
>
> If I use this CHAIN'd cert in my nginx conf,
>
> ssl on;
> ssl_verify_client off;
> ssl_certificate "/path/to/my.domain.com.CHAIN.crt";
> ssl_certificate_key "/path/to/my.domain.com.key";
>
> and start nginx, it fails,
>
> ==> error.log <==
> 2012/10/31 16:36:44 [emerg] 8666#0:
> SSL_CTX_use_PrivateKey_file("/path/to/my.domain.com.key") failed
> (SSL: error:0B080074:x509 certificate
> routines:X509_check_private_key:key values mismatch)
>
> If I simply switch the cert CHAIN build order, so the personal site crt
> is *first*,
>
> + cat STAR_domain.com.crt > my.domain.com.CHAIN.crt
> - cat AddTrustExternalCARoot.crt > my.domain.com.CHAIN.crt
> + cat AddTrustExternalCARoot.crt >> my.domain.com.CHAIN.crt
> cat UTNAddTrustSGCCA.crt >> my.domain.com.CHAIN.crt
> cat ComodoUTNSGCCA.crt >> my.domain.com.CHAIN.crt
> cat EssentialSSLCA_2.crt >> my.domain.com.CHAIN.crt
> - cat STAR_domain.com.crt >> my.domain.com.CHAIN.crt
>
> then start nginx, it starts correctly, with no error. The site's
> accessible from most locations.
>
> But a check with
>
> https://www.ssllabs.com/ssltest/index.html
>
> returns/reports
>
> "Chain issues Incorrect order"
>
> I'd like to get nginx to accept/use the correct/instructed CHAIN order
> so that it starts up correctly AND is reported 'correct order' by
> testing sites.
>
> Is this a config issue on my end -- either nginx or the cert build?
> Or a bug?
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Thu Nov 1 09:58:21 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Nov 2012 13:58:21 +0400 Subject: consistent hashing using split_clients In-Reply-To: <92a9002a80000f041bbe90480856003a.NginxMailingListEnglish@forum.nginx.org> References: <20121031145047.GC40452@mdounin.ru> <92a9002a80000f041bbe90480856003a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121101095820.GE40452@mdounin.ru> Hello! On Wed, Oct 31, 2012 at 12:46:09PM -0400, rmalayter wrote: > Maxim Dounin Wrote: > > > > Percentage values are stored in fixed point with 2 digits after > > the point. Configuration parsing will complain if you'll try to > > specify more digits after the point. > > > > > How many "buckets" does the hash table for split_clients > > > have (it doesn't seem to be configurable)? > > > > The split_clients algorithm doesn't use buckets, as it's not a > > hash table. Instead, it calculates hash function of the > > original value, and selects resulting value based on a hash > > function result. See http://nginx.org/r/split_clients for > > details. > > > > So clearly I am down the wrong path here, and split_clients just cannot do > what I need. I will have to rethink things. > > The 3rd-party ngx_http_consistent_hash module appears to be un-maintained, > un-commented. It also uses binary search to find an upstream instead of a > hash table, making it O(log(n)) for each request. My C skills haven't been > used in anger since about 1997, so updating or maintaining it myself would > probably not be a fruitless exercise. You may also try memcached_hash module by Tomash Brechko, as available at http://openhack.ru/nginx-patched/wiki/MemcachedHash. It features Ketama consistent hashing compatible with Cache::Memcached::Fast (memcached client module from the same author). 
Unfortunately, it's more or less unmaintained too, but I think I have patches to bring it up to nginx 0.8.50 at least, and it should be trivial to merge it with more recent versions. > Perhaps I will have to fall back to using perl to get a hash bucket for the > time being. I assume 4096 upstreams is not a problem for nginx given that it > is used widely by CDNs. As long as you use split_clients to actually select a hash bucket, I see no real difference with using embedded perl. > hashing module using MurmurHash3: > http://forum.nginx.org/read.php?29,212712,212739#msg-212739 > > I suppose other work took priority. Maybe Igor has some code stashed > somewhere that just needs testing and polishing. > > If not, it seems that the current "ip_hash" scheme used in nginx could be > easily adapted to fast consistent hashing by simply > -using MurmurHash3 or similar instead of the current simple > multiply+modulo scheme > -allowing arbitrary nginx variables as hash input instead of just the IP > address during upstream selection > -at initialization utilizing a hash table of 4096 or whatever configurable > number of buckets > -fill the hash table by sorting the server array on > murmurhash3(bucket_number + server_name + server_weight_counter) and taking > the first server > > Is there a mechanism for sponsoring development along these lines and > getting it into the official nginx distribution? Consistent hashing is the > one commonly-used proxy server function that nginx seems to be missing. Hash module Igor mentioned is still in the TODO, but no ETA as it's not something frequently asked about. If you want to sponsor the development, please write to the email address listed at http://nginx.com/support.html. -- Maxim Dounin http://nginx.com/support.html From lists at ruby-forum.com Thu Nov 1 10:45:24 2012 From: lists at ruby-forum.com (Yanfeng L.) 
Date: Thu, 01 Nov 2012 11:45:24 +0100
Subject: Configuring nginx as mail proxy
In-Reply-To: <31855C66-EE6A-4AF0-B20F-55630B5FFFFB@kenzanmedia.com>
References: <4257CF30-F6E4-4A5E-A471-8752FC4BF439@kenzanmedia.com> <20121024145336.GD40452@mdounin.ru> <002AC47E-8A06-4502-8206-81719078DCBC@kenzanmedia.com> <20121024162652.GG40452@mdounin.ru> <20121024225447.GL40452@mdounin.ru> <31855C66-EE6A-4AF0-B20F-55630B5FFFFB@kenzanmedia.com>
Message-ID: 

Laurent Bonetto wrote in post #1081041:
> Maxim,
>
> Thanks so much. This was the key:
>> Note that this must be an IP address, not a hostname.
> My mail server was passing me a hostname, which nginx passed to the
> authentication service. I had assumed it was fine to return a hostname.
> Returning the IP instead did the trick.
>
> I now have the proxy working inbound and outbound.
>
> Also much appreciated were your clarifications regarding the low open
> file resource and server_name.
>
> You were a big help today.
>
> Laurent

Thanks for sharing the info. Can the SMTP/IMAP service behind the mail
proxy be an SSL-based one like smtp.gmail.com/imap.gmail.com (of course
using an IP address instead of a DNS name for Auth-Server here)?

--
Posted via http://www.ruby-forum.com/.
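For reference, the auth_http exchange Laurent describes looks roughly like this: nginx queries the authentication service over HTTP, and the service replies with headers naming the backend. The Auth-Server value is where a literal IP address (not a hostname) is required, as noted above; the address and port below are made-up examples:

```
HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: 198.51.100.10
Auth-Port: 143
```

Returning a hostname such as mail.example.com in Auth-Server is what caused the failure discussed in the quoted thread.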
From nginx-forum at nginx.us Fri Nov 2 11:52:53 2012
From: nginx-forum at nginx.us (antoine2223)
Date: Fri, 02 Nov 2012 07:52:53 -0400
Subject: failed (104: Connection reset by peer)
Message-ID: <425689be283ce9d13896a0704a6cbb32.NginxMailingListEnglish@forum.nginx.org>

Hello sir, I am a beginner and I have the following errors in my error log:

2012/11/02 12:27:26 [error] 3909#0: *1 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client:
192.168.250.55, server: pharse.mediactive.fr, request: "GET / HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000", host: "pharse.mediactive.fr"
2012/11/02 12:27:26 [error] 3909#0: *1 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client:
192.168.250.55, server: pharse.mediactive.fr, request: "GET / HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000", host: "pharse.mediactive.fr"
2012/11/02 12:27:26 [error] 3909#0: *1 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client:
192.168.250.55, server: pharse.mediactive.fr, request: "GET / HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000", host: "pharse.mediactive.fr"
2012/11/02 12:27:44 [error] 3909#0: *7 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client:
192.168.251.75, server: pharse.mediactive.fr, request: "GET / HTTP/1.1",
upstream: "fastcgi://127.0.0.1:9000", host: "pharse.mediactive.fr"

Can you please help me rectify the error?
My nginx conf:

server {
    listen 80;
    server_name pharse.mediactive.fr;
    root /var/www/Phraseanet/www;
    index index.php;
    include rewrite_rules.inc;
    access_log /var/log/nginx/pharse/access.log;
    error_log /var/log/nginx/pharse/error.log;
    rewrite_log on;

    # PHP scripts -> PHP-FPM server listening on 127.0.0.1:9000
    location ~ \.php(/|$) {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    # configuration for sub-definitions
    location /files {
        # 'X-Accel-Redirect' mount point
        internal;
        alias /var/www/Phraseanet/datas; # access path for 'X-Accel-Redirect'
    }

    # configuration for quarantined files
    location /lazaret {
        internal;
        alias /var/www/Phraseanet/tmp/lazaret;
    }

    # configuration for downloads
    location /download {
        internal;
        alias /var/www/Phraseanet/tmp/download;
    }
}

My php-fpm config looks like this:

; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0

; The maximum number of processes FPM will fork. This has been designed to control
; the global number of processes when using dynamic PM within a lot of pools.
; Use it with caution.
; Note: A value of 0 indicates no limit
; Default Value: 0
; process.max = 128

; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
;daemonize = yes

; Set open file descriptor rlimit for the master process.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit for the master process.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Specify the event mechanism FPM will use. The following is available:
; - select (any POSIX os)
; - poll (any POSIX os)
; - epoll (linux >= 2.5.44)
; - kqueue (FreeBSD >= 4.1, OpenBSD >= 2.9, NetBSD >= 2.0)
; - /dev/poll (Solaris >= 7)
; - port (Solaris >= 10)
; Default Value: not set (auto detection)
; events.mechanism = epoll

;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;

; Multiple pools of child processes may be started with different listening
; ports and different management options. The name of the pool will be
; used in logs and stats. There is no limitation on the number of pools which
; FPM can handle. Your system will tell you anyway :)

; To configure the pools it is recommended to have one .conf file per
; pool in the following directory:
include=/etc/php5/fpm/pool.d/*.conf
request_terminate_timeout=30s

Please do help me, I am blocked.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232452,232452#msg-232452

From nginx-forum at nginx.us Fri Nov 2 12:47:17 2012
From: nginx-forum at nginx.us (antoine2223)
Date: Fri, 02 Nov 2012 08:47:17 -0400
Subject: Nginx 502 bad gateway every few seconds
In-Reply-To: 
References: 
Message-ID: <80721e56be48c9bc110e3a5479665d6d.NginxMailingListEnglish@forum.nginx.org>

Can someone help, please? I am really blocked.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,228869,232453#msg-232453

From nginx-forum at nginx.us Fri Nov 2 21:50:08 2012
From: nginx-forum at nginx.us (gerard breiner)
Date: Fri, 02 Nov 2012 17:50:08 -0400
Subject: Site URL not completed. Bad redirection ?
In-Reply-To: <20121031234856.GD17159@craic.sysops.org> References: <20121031234856.GD17159@craic.sysops.org> Message-ID: <035860bcf91c89ab6299511d9f6e16db.NginxMailingListEnglish@forum.nginx.org> Hello Francis, Francis Daly Wrote: ------------------------------------------------------- > On Wed, Oct 31, 2012 at 07:02:55AM -0400, gerard breiner wrote: > > Hi there, > > > curl -k -i https://127.0.0.1 as > > curl -k -i https://sogo.mydomain.fr give: > > ------------------------------ > > HTTP/1.1 302 Found > > Server: nginx/0.7.67 > > Date: Wed, 31 Oct 2012 10:37:27 GMT > > Content-Type: text/plain; charset=utf-8 > > Connection: keep-alive > > content-length: 0 > > location: /SOGo/ > > -------------------------------- > > So it redirects to /SOGo/. What happens when you do that manually? > > curl -k -i https://127.0.0.1/SOGo/ curl -k -i https://127.0.0.1/SOGo/ give : ----------------------------------------------------- HTTP/1.1 200 OK Server: nginx/1.2.4 Date: Fri, 02 Nov 2012 21:09:16 GMT Content-Type: text/html; charset=utf-8 Content-Length: 2613 Connection: keep-alive WEBMAIL I.A.S.
We've detected that your browser version is currently not supported on this
site. Our recommendation is to use Firefox. Click on the link below to
download the most current version of this browser.

Download Firefox

Alternatively, you can also use the following compatible browsers:

Download Chrome
Download Safari
--------------------------------------------------------------------------------------

>
> Probably it will redirect again, or else return some html. What you
> probably want to do is to manually step through the full login sequence
> until you see the specific problem. Then you can concentrate on that
> one request.
>
> (Also: that doesn't look like nginx 1.2.4. Are you sure that your test
> system is exactly what you expect it to be?)

I had downgraded nginx because I wanted to see if I met the same issue with
both versions. I've upgraded it to 1.2.4....

>
> > From sogo.log
> > Oct 31 11:44:05 sogod [29392]: SOGoRootPage successful login for user
> > 'gbreiner' - expire = -1 grace = -1
>
> This is from a later time. So some other requests were involved here.
>
> > [31/Oct/2012:11:44:05 GMT] "POST /SOGoSOGoSOGo/connect HTTP/1.0" 200 27/62
> > 0.016 - - 4K
> >
> > I think the "POST /SOGoSOGoSOGo/" is wrong ...
>
> Can you see where that request came from? Probably it was the "action"
> of a html form within the response of a previous request. Maybe that
> will help show why SOGo is repeated here.

Before login there is GET /SOGoSOGo
After login there is POST /SOGoSOGoSOGo/

>
> (That said, the HTTP 200 response suggests that the web server was happy
> with the request.)
>
> > (it is not the navigator because under apache2 it works very fine).
>
> Searching the web for "sogo and nginx" returns articles from people who
> claim to have it working.
>
> I suggest you step back and do exactly one thing at a time. With your
> original "location ^~ /SOGo" block, did it all work apart from the
> initial redirect? If not, fix that first.
>
> The SOGo installation guide mentions an apache config file, and says
> "The default configuration will use mod_proxy and mod_headers to relay
> requests to the sogod parent process. This is suitable for small to
> medium deployments.".

That makes me puzzled...
For instance, templates will not be loaded without this:

location /SOGo.woa/WebServerResources/ {
    alias /usr/lib/GNUstep/SOGo/WebServerResources/;
}

> That suggests that your proxy_pass, proxy_redirect, and proxy_set_header
> directives may be enough.
>
> Good luck with it,
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I'm going to look on the internet for a SOGo configuration file for nginx.
My configuration came from the internet, and as it didn't work I tried this
and that, and spent too much time working on this issue. If I manage to get
it working, I'll come back here to share. Thank you indeed, Francis, for
your time and the quality of your answers.

Best regards.

Gerard

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232325,232465#msg-232465

From nginx-forum at nginx.us Sat Nov 3 00:11:19 2012
From: nginx-forum at nginx.us (useopenid)
Date: Fri, 02 Nov 2012 20:11:19 -0400
Subject: Configuring nginx as mail proxy
In-Reply-To: 
References: 
Message-ID: 

I am looking at proxying to google as well, and thus need SSL on the
backside (and would like it on general principles for other cases as well);
however, it does not appear that nginx supports this. I would expect this
to be the default if an incoming connection is using ssl, or, at the very
least, specified in the protocol parameter in the server {} block (e.g.
pop3s, imaps, smtps or smtp-starttls). Neither appears to be the case, nor
do I see any options that suggest the ability to enable it in some other
way...
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232147,232466#msg-232466

From nginx-forum at nginx.us Sat Nov 3 04:16:46 2012
From: nginx-forum at nginx.us (cachito)
Date: Sat, 03 Nov 2012 00:16:46 -0400
Subject: Trying to configure an origin pull CDN-like reverse proxy
Message-ID: <9910a59486fa2d7b1873dd9704b4a7d6.NginxMailingListEnglish@forum.nginx.org>

Hello, I'm hosting a group of Wordpress blogs with about 200k visits and
millions of hits per day. MySql + PHP live in a server (beefy VPS) and I
placed a reverse proxy in front of it to cache most of the requests. Now I
want to offload all the static files to a third server, taking advantage of
a feature of common Wordpress cache plugins that rewrites static file URLs
for origin-pull CDN services. This way, an original URL
http://blog.com/wp-content/uploads/photo.jpg is rewritten as
http://cdn.url.com/wp-content/uploads/photo.jpg and this server requests
the file from the original server, caches it and then serves it directly,
for the duration of the 1st server's Expires header/directive.

I thought it would be easy to use the proxy_* features, but I'm hitting a
wall and I can't find an applicable tutorial/article anywhere. Would
somebody have any advice on how to do this? This is the basic behavior I'm
after:

- Client requests static file cdn.blog.com/dir/photo.jpg
- cdn.blog.com looks for the file in its cache
- If the cache has it, check the original or revalidate according to the
  original headers (this is internal, I know).
- If the cache doesn't have it, request it from www.blog.com/dir/photo.jpg,
  cache it and serve it.
- Preferably, allow for this to be done for many sites/domains, acting as a
  CDN server for many sites.
This is my conf: The cache zones in otherwise default nginx.conf and before including conf.d/*.conf (I'm on CentOS 6.3 with nginx 1.0.15 from EPEL) proxy_cache_path /var/www/cache/images levels=1:2 keys_zone=images:200m max_size=10g inactive=3d; proxy_cache_path /var/www/cache/scripts levels=1:2 keys_zone=scripts:50m max_size=10g inactive=3d; proxy_cache_path /var/www/cache/pages levels=1:2 keys_zone=pages:200m max_size=10g inactive=3d; And this is the individual server config on conf.d/server1.conf upstream backend_cdn.blog.com { ip_hash; server 333.333.333.333; } server { listen 80; server_name cdn.blog.com; access_log off; # Set proxy headers for the passthrough proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Let the Set-Cookie and Cache-Control headers through. proxy_pass_header Set-Cookie; proxy_pass_header Cache-Control; proxy_pass_header Expires; # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_504 http_404; # Set the proxy cache key set $cache_key $scheme$host$uri$is_args$args; location / { proxy_pass http://backend_$host; proxy_cache pages; proxy_cache_key $cache_key; proxy_cache_valid 15m; # 200, 301 and 302 will be cached. # 2 rules to dedicate the no caching rule for logged in users. # proxy_cache_bypass $wordpress_auth; # Do not cache the response. # proxy_no_cache $wordpress_auth; # Do not serve response from cache. 
add_header X-Cache $upstream_cache_status; } location ~* \.(png|jpg|jpeg|gif|ico|swf|flv|mov|mpg|mp3)$ { expires max; log_not_found off; proxy_pass http://backend_$host; proxy_cache images; proxy_cache_key $cache_key; } location ~* \.(css|js|html|htm)$ { expires 7d; log_not_found off; proxy_pass http://backend_$host; proxy_cache scripts; proxy_cache_key $cache_key; } } With this configuration, whenever I call a static file such as http://cdn.blog.com/wp-includes/js/prototype.js I end up being redirected to http://www.blog.com/wp-includes/js/prototype.js. I've tried many things, like setting the Host header to various values or adding $uri to the end of the proxy_pass directives, to no avail. One thing to notice is that the 333.333.333.333 server only responds to www.blog.com, not cdn.blog.com. Do I need a root directive in server1.conf? I'm running in circles, any help will be much appreciated. Thanks in advance, Cachito Espinoza Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232468,232468#msg-232468 From howachen at gmail.com Sat Nov 3 05:02:03 2012 From: howachen at gmail.com (howard chen) Date: Sat, 3 Nov 2012 13:02:03 +0800 Subject: Why tcp_nodelay default to on? Message-ID: From the doc: http://wiki.nginx.org/ReadMoreAboutTcpNodelay TCP_NODELAY is for a specific purpose; to disable the Nagle buffering algorithm. It should only be set for applications that send frequent small bursts of information without getting an immediate response, where timely delivery of data is required (the canonical example is mouse movements). So my understanding for most web app, it should be disabled so we can use the "Nagle buffering algorithm", only disable when you have special need, like logging mouse movements as in the example? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor at sysoev.ru Sat Nov 3 05:09:48 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Sat, 3 Nov 2012 09:09:48 +0400 Subject: Why tcp_nodelay default to on? In-Reply-To: References: Message-ID: <1E26A24F-1F2E-4A2C-A9B7-34FE74E692DB@sysoev.ru> On Nov 3, 2012, at 9:02 , howard chen wrote: > From the doc: http://wiki.nginx.org/ReadMoreAboutTcpNodelay > > TCP_NODELAY is for a specific purpose; to disable the Nagle buffering algorithm. It should only be set for applications that send frequent small bursts of information without getting an immediate response, where timely delivery of data is required (the canonical example is mouse movements). > > > > So my understanding for most web app, it should be disabled so we can use the "Nagle buffering algorithm", only disable when you have special need, like logging mouse movements as in the example? > http://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay The option is enabled only when a connection is transitioned into the keep-alive state. Otherwise there is 100ms delay when nginx sends response tail in the last incomplete TCP packet. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From howachen at gmail.com Sat Nov 3 06:08:48 2012 From: howachen at gmail.com (howard chen) Date: Sat, 3 Nov 2012 14:08:48 +0800 Subject: How to view Nginx wiki history? Message-ID: For example, this page: http://wiki.nginx.org/WordPress In the Google cache: http://webcache.googleusercontent.com/search?q=cache:_fqHWtIaxmsJ:wiki.nginx.org/WordPress+&cd=1&hl=zh-CN&ct=clnk It contains the following lines: >> # include the "?$args" part so non-default permalinks doesn't break when using query string try_files $uri $uri/ /index.php?$args; >> try_files $uri $uri/ /index.php?$args; But it was removed in the latest version of wiki: http://wiki.nginx.org/WordPress Sometimes we want to know why they are changed, and when they have removed. 
Anyone mind to explain? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Nov 3 09:28:43 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Nov 2012 09:28:43 +0000 Subject: failed (104: Connection reset by peer) In-Reply-To: <425689be283ce9d13896a0704a6cbb32.NginxMailingListEnglish@forum.nginx.org> References: <425689be283ce9d13896a0704a6cbb32.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121103092843.GG17159@craic.sysops.org> On Fri, Nov 02, 2012 at 07:52:53AM -0400, antoine2223 wrote: Hi there, > hello sir i am beginer, i haev following errors in my logs > > in my error log , > 2012/11/02 12:27:26 [error] 3909#0: *1 recv() failed (104: Connection > reset by peer) while reading response header from upstream, client: > 192.168.250.55, server: pharse.mediactive.fr, request: "GET / HTTP/1.1", > upstream: "fastcgi://127.0.0.1:9000", host: "pharse.mediactive.fr" This says that nginx was talking to the fastcgi server, and the fastcgi server dropped the connection. > can you please help me out to rectify the error ? Look in your fastcgi server logs to see what the problem is. > # PHP scripts -> PHP-FPM server listening on 127.0.0.1:9000 That's the server that should be looked at. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Nov 3 09:44:36 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Nov 2012 09:44:36 +0000 Subject: Trying to configure an origin pull CDN-like reverse proxy In-Reply-To: <9910a59486fa2d7b1873dd9704b4a7d6.NginxMailingListEnglish@forum.nginx.org> References: <9910a59486fa2d7b1873dd9704b4a7d6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121103094436.GH17159@craic.sysops.org> On Sat, Nov 03, 2012 at 12:16:46AM -0400, cachito wrote: Hi there, All untested by me, but... > I thought it would be easy to use the proxy_* features, but I'm hitting a > wall and I can't find an applicable tutorial/article anywhere. 
Would > somebody have any advice on how to do this? This is the basic behavior I'm > after: > - Client requests static file cdn.blog.com/dir/photo.jpg > - cdn.blog.com looks for the file in its cache > - If the cache has it, check original or revalidate according with original > headers (this is internal, I know). > - If the cache doesn't have it, request it from www.blog.com/dir/photo.jpg, > cache it and serve it. > - Preferably, allow for this to be done for many sites/domains, acting as a > CDN server for many sites. So far, it looks like a straightforward caching reverse proxy setup. I'm not quite sure what the last point means -- but one server{} block per site should work. > upstream backend_cdn.blog.com { > ip_hash; > server 333.333.333.333; > } > > server { > listen 80; > server_name cdn.blog.com; > access_log off; > # Set proxy headers for the passthrough > proxy_set_header Host $host; $host here is probably "cdn.blog.com". What happens if you change this to "proxy_set_header Host www.blog.com;" ? > location ~* \.(css|js|html|htm)$ { > expires 7d; > log_not_found off; > proxy_pass http://backend_$host; > proxy_cache scripts; > proxy_cache_key $cache_key; > } > With this configuration, whenever I call a static file such as > http://cdn.blog.com/wp-includes/js/prototype.js I end up being redirected to > http://www.blog.com/wp-includes/js/prototype.js. I've tried many things, > like setting the Host header to various values or adding $uri to the end of > the proxy_pass directives, to no avail. One thing to notice is that the > 333.333.333.333 server only responds to www.blog.com, not cdn.blog.com. What is the output of curl -i -0 -H 'Host: cdn.blog.com' http://333.333.333.333/wp-includes/js/prototype.js ? That is approximately what nginx will do. (You can add the extra proxy_set_header headers there, if you think it will make a difference.) 
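Putting the advice above together, a hedged sketch of the caching reverse-proxy setup under discussion might look like the following (untested; the cache zone name "scripts" and the upstream name come from the poster's own config, the fixed Host value is the suggestion made above, and the cache path and key are illustrative assumptions):

```nginx
# Sketch: send the origin's own Host header instead of $host, so the
# 333.333.333.333 backend (which only answers to www.blog.com) serves
# the file instead of redirecting; cache the result in the "scripts" zone.
proxy_cache_path /var/cache/nginx/scripts keys_zone=scripts:10m;

server {
    listen       80;
    server_name  cdn.blog.com;

    location ~* \.(css|js|html|htm)$ {
        proxy_pass        http://backend_cdn.blog.com;
        proxy_set_header  Host www.blog.com;   # fixed origin name, not $host
        proxy_cache       scripts;
        proxy_cache_key   $scheme$proxy_host$request_uri;  # illustrative key
        expires           7d;
    }
}
```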
My guess is that the 333.333.333.333 server returns the http redirect, and nginx is correct in passing that on to the client. The nginx log files should show more details. > Do I need a root directive in server1.conf? If you read from the filesystem, or otherwise access $document_root, then the root directive is used. I don't see that needed for this request. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Nov 3 10:21:55 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Nov 2012 10:21:55 +0000 Subject: Site URL not completed. Bad redirection ? In-Reply-To: <035860bcf91c89ab6299511d9f6e16db.NginxMailingListEnglish@forum.nginx.org> References: <20121031234856.GD17159@craic.sysops.org> <035860bcf91c89ab6299511d9f6e16db.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121103102155.GI17159@craic.sysops.org> On Fri, Nov 02, 2012 at 05:50:08PM -0400, gerard breiner wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Wed, Oct 31, 2012 at 07:02:55AM -0400, gerard breiner wrote: Hi there, > > So it redirects to /SOGo/. What happens when you do that manually? > > > > curl -k -i https://127.0.0.1/SOGo/ > > curl -k -i https://127.0.0.1/SOGo/ give : > > ----------------------------------------------------- > HTTP/1.1 200 OK > Server: nginx/1.2.4 > Date: Fri, 02 Nov 2012 21:09:16 GMT > Content-Type: text/html; charset=utf-8 > Content-Length: 2613 > Connection: keep-alive > href="https://sogo.mydomain/SOGo/WebServerResources/favicon.ico" /> So: sogo has been configured with its name, and doesn't just use the Host: header it receives. > href="/SOGo.woa/WebServerResources/generic.css?lm=1351533524" /> And sometimes it uses /SOGo/, and sometimes /SOGO.woa/. From this limited sample, ".woa" includes ?lm=NNN on the end of the url. >

We've detected that your browser version is currently not supported on > this site. Our recommendation is to use Firefox. Click on the link below to > download the most current version of this browser.

And it does browser sniffing. You can try curl -k -i -A 'Mozilla/5' https://127.0.0.1/SOGo/ to see if you can get at the actual content that your browser gets; or when you just use your browser, you can try to "view page source". The aim is to find why and where the problem arises. But: it turns out that that probably isn't necessary here: >

Download > Firefox src="/SOGoSOGo.woa/WebServerResources/browser_firefox.gif?lm=1351283693" > alt="*" />

/SOGoSOGo.woa/ appears in there. So it looks like even this page shows a problem. What do you see here when you go direct to sogo? curl -k -i https://127.0.0.1:20000/SOGo/ Does it have /SOGoSOGo in it? What do you get for the following: curl -k -I https://127.0.0.1/SOGoSOGo.woa/WebServerResources/browser_firefox.gif?lm=1351283693 curl -k -I https://127.0.0.1/SOGo.woa/WebServerResources/browser_firefox.gif?lm=1351283693 curl -k -I https://127.0.0.1/SOGo/WebServerResources/browser_firefox.gif?lm=1351283693 (-I makes a HEAD request, so you should just see the http response headers.) > > > [31/Oct/2012:11:44:05 GMT] "POST /SOGoSOGoSOGo/connect HTTP/1.0" 200 > > 27/62 > > > 0.016 - - 4K > > > > > > I think the "POST /SOGoSOGoSOGo/" is wrong ... > > > > Can you see where that request came from? Probably it was the "action" > > of a html form within the response of a previous request. Maybe that > > will help show why SOGo is repeated here. > > Before login there is GET /SOGoSOGo > After login there is POST /SOGoSOGoSOGo/ It looks like some part of the system is breaking this. I can't tell whether it is sogo itself, or nginx in front of it. > > The SOGo installation guide mentions an apache config file, and says > > "The default configuration will use mod_proxy and mod_headers to relay > > requests to the sogod parent process. This is suitable for small to > > medium deployments.". > > > Make me puzzle ... For instance , templates will not be loaded without this > : > location /SOGo.woa/WebServerResources/ { > alias /usr/lib/GNUstep/SOGo/WebServerResources/; > } That means that these requests are served by nginx from your filesystem. Are templates involved in the login problem, or the /SOGoSOGo problem you report? > I 'm going to get a look at internet for getting a sogo configuration file > for nginx. > Yet my configuration come from internet and as it didn't work I tried this > and that and this and passed too much time working on this issue. 
If I > managed to get it work I'll come back here for sharing. If you want to keep trying, I would suggest keeping your configuration as simple as possible, and only trying one thing at a time. Find out what exact requests must be made of the sogo server directly, and then configure nginx to produce them. I suspect that you will want something like location ^~ /SOGo/ { proxy_pass http://127.0.0.1:20000; } If you have just that, and you do curl -i -k https://127.0.0.1/SOGo/ do you get a useful response, and does it refer to /SOGoSOGo ? Based on that response, you can build the rest of the config. > Thank you indeed Francis for your time and the quality of yours answers. You're welcome. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Nov 3 10:37:38 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 3 Nov 2012 10:37:38 +0000 Subject: Site URL not completed. Bad redirection ? In-Reply-To: <20121103102155.GI17159@craic.sysops.org> References: <20121031234856.GD17159@craic.sysops.org> <035860bcf91c89ab6299511d9f6e16db.NginxMailingListEnglish@forum.nginx.org> <20121103102155.GI17159@craic.sysops.org> Message-ID: <20121103103738.GJ17159@craic.sysops.org> On Sat, Nov 03, 2012 at 10:21:55AM +0000, Francis Daly wrote: Small correction here: > > > href="https://sogo.mydomain/SOGo/WebServerResources/favicon.ico" /> > > So: sogo has been configured with its name, and doesn't just use the Host: > header it receives. Actually, that is almost certainly due to the proxy_set_header x-webobjects-server-url $scheme://$host; directive, so it isn't a specific configuration within sogo. Which suggests that you'll want this, plus some of the other "proxy_set_header x-webobjects-*" directives... > location ^~ /SOGo/ { > proxy_pass http://127.0.0.1:20000; > } ...in there too. 
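Combining the two pieces suggested in this thread, a minimal sketch of the proxy block might look like this (untested; only the header explicitly named in the thread is shown, and sogo may expect further x-webobjects-* headers):

```nginx
# Sketch: one prefix location proxying to sogo, plus the
# x-webobjects-server-url header so sogo builds its links from
# the front-end name rather than its own.
location ^~ /SOGo/ {
    proxy_pass http://127.0.0.1:20000;
    proxy_set_header x-webobjects-server-url $scheme://$host;
    # ...plus whichever other x-webobjects-* headers sogo expects
}
```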
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Nov 3 13:13:30 2012 From: nginx-forum at nginx.us (jonefee) Date: Sat, 03 Nov 2012 09:13:30 -0400 Subject: No upstream_response_time in access log file Message-ID: I use nginx as a reverse proxy server, and php-fpm as the FastCGI upstream. I found that some entries in the nginx access log file don't have a "$upstream_response_time" value but a "-" character instead, yet they do have a "$request_time" value. Why? For example: 71.213.141.240 - 31/Oct/2012:13:09:34 +0800 POST /php/xyz/iface/?key=be7fc0cdf0afbfedff1e09ec6443823a&device_id=351870052329449&network=1&ua=LT18i&os=2.3.4&version=3.7&category_id=2&f_ps=10&s=5&ps=30&pn=1&pcat=2&up_tm=1351655451272 HTTP/1.1 499 0($body_bytes_sent) - Dalvik/1.4.0 (Linux; U; Android 2.3.4; LT18i Build/4.0.2.A.0.62) 21($content_length) 2.448($request_time) -($upstream_response_time) - - - The $request_time is 2.448, but there is no $upstream_response_time, and the HTTP response code is 499. Doesn't that mean that nginx did not finish the "connection", so php-fpm never even had a chance to "see" the request? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232482,232482#msg-232482 From matmiller33 at ymail.com Sat Nov 3 18:18:44 2012 From: matmiller33 at ymail.com (Mat Miller) Date: Sat, 3 Nov 2012 18:18:44 +0000 (GMT) Subject: nginx as backend Message-ID: <1351966724.46130.YahooMailNeo@web171803.mail.ir2.yahoo.com> Hi, I am writing this to hopefully get some help from you guys after struggling with this issue for a few days. I will try to describe the issue as briefly as possible. Ok, my website runs just fine on 443 and 80 with the latest version of nginx. (thx Igor and Nginx team!) The problem started when I tried to put varnish (port 80) in front of Nginx (port 81). 1- For some reason I have to start Nginx before starting varnish, otherwise I get bind() to 0.0.0.0:80 failed (98: Address already in use)... 
(yes yes I've already checked the config out and I've changed the port to 81) 2- When i start start Nginx first and varnish after, both softwares start to run, my site works just fine on 443 (https) but using http, nginx shows /usr/local/nginx/html/index.htm instead of /var/www/domain.com/index.php lsof -i tcp:80 nginx 1071 root 10u IPv4 4296 0t0 TCP *:www (LISTEN) nginx 1073 www-data 10u IPv4 4296 0t0 TCP *:www (LISTEN) nginx 1074 www-data 10u IPv4 4296 0t0 TCP *:www (LISTEN) nginx 1075 www-data 10u IPv4 4296 0t0 TCP *:www (LISTEN) varnishd 1093 nobody 6u IPv6 4347 0t0 TCP *:www (LISTEN) lsof -i tcp:81 nginx 1071 root 8u IPv4 4294 0t0 TCP *:81 (LISTEN) nginx 1073 www-data 8u IPv4 4294 0t0 TCP *:81 (LISTEN) nginx 1074 www-data 8u IPv4 4294 0t0 TCP *:81 (LISTEN) nginx 1075 www-data 8u IPv4 4294 0t0 TCP *:81 (LISTEN) here is my config files user www-data; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; include /etc/nginx/mime.types; default_type application/octet-stream; sendfile on; include /etc/nginx/conf.d/*.conf; port_in_redirect off; } <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< default server { listen 81 default; ## listen for ipv4 set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; server_name _; root /var/www/domain.com; location / { root /var/www/defaultdir; index index.php index.html index.htm; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< domain.com server { server_name .domain.com; root /var/www/domain.com; listen 81; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; listen 443 ssl; {SSL stuffs goes here} location / { index index.php index.html index.htm; access_log off; error_log off; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; 
fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_param HTTPS on; } } I would appreciate any help I can get with this issue. Best regards, Mat? -------------- next part -------------- An HTML attachment was scrubbed... URL: From brad at ftwentertainment.com Sat Nov 3 21:45:18 2012 From: brad at ftwentertainment.com (Brad Riemann) Date: Sat, 3 Nov 2012 21:45:18 +0000 Subject: Configuration issues with 100-150MB files downloads causing high CPU usage.. Message-ID: Hello, I'm hoping someone would be able to assist in my dilemma, I've searched for a few weeks on a fix with no success.. I run Nginx in proxy pass-through mode to apache.. (I hope I got the right terminology..), but basically, we've been seeing issues with our Media server, where once we start getting enough requests to make the bandwidth get above 100mbit, we start seeing drops on our cacti graphs.. when i can get online as the time, I notice that Nginx is utilizing a ton of CPU causing the server to slow down considerably.. I'm worried that my configuration has some issues and I would really appreciate the help from the community. Details on the server, brand new dell r515 (6 cores, 16GB RAM, 2 15K SAS drives for the os in a RAID 0, 12 2TB HDDs in a RAID 6 for the media). Average file size is 120MB, they don't deviate more than +/-5% of that size.. so it's quite consistent.. Here is the config we use.. We have a custom built CDN that uses nginx, and we can get the same files served over 120mbit with no issues.. so I'm sure it's something I tweaked that is wrong.. 
user nobody; worker_processes 10; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 20480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile off; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 512; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 64k; output_buffers 4 64k; postpone_output 1460; proxy_temp_path /nginx_temp/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } Thank you again for any help, I'm including graphs from the last 24 hours, you can see the system have hiccups when the amount of downloads gets to a certain point.. https://the-irc.com/images/zeus-odity.PNG https://the-irc.com/images/zeus-odity2.PNG Brad Riemann Systems Engineer FTW Entertainment LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Nov 4 02:25:50 2012 From: nginx-forum at nginx.us (raidenii) Date: Sat, 03 Nov 2012 22:25:50 -0400 Subject: Reverse proxy requires http header from upstream? 
Message-ID: Hi all, I'm trying to use nginx as a reverse proxy for several programs that come with a web UI, e.g. Transmission, MLDonkey, etc. All are working well except for aMule: I can connect directly to its port, but when I go through nginx it always reports a timeout in the error log. The major difference between aMule and the rest is that the amule web server does not give an http header to the client: when I try to use curl to grab the header it gives an empty response. (Although it will return the correct webpage when curl grabs the webpage) And I suppose that is the reason nginx reports a connection timeout. So I wonder if there is any way to let nginx "ignore" the http header from the upstream and just pass along whatever data the upstream gives? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232492,232492#msg-232492 From contact at jpluscplusm.com Sun Nov 4 12:28:12 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 4 Nov 2012 12:28:12 +0000 Subject: Reverse proxy requires http header from upstream? In-Reply-To: References: Message-ID: On 4 November 2012 02:25, raidenii wrote: > The major difference between aMule and the rest is that the amule web server > does not give an http header to the client: when I try to use curl to > grab the header it gives an empty response. (Although it will return the > correct webpage when curl grabs the webpage) And I suppose that is the > reason nginx reports a connection timeout. > > So I wonder if there is any way to let nginx "ignore" the http header from > the upstream and just pass along whatever data the upstream gives? Thanks in > advance. When you say "http header", which one do you mean? If you mean *all* headers are missing, and the server simply responds with data, then the server isn't actually talking HTTP at all, and nginx is not the right tool for the job. 
If that's the case, you should consider load-balancing/reverse-proxying it either with something that does talk the protocol that's being used, or something generic that can proxy at the TCP layer, like HAProxy. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Sun Nov 4 13:57:38 2012 From: nginx-forum at nginx.us (raidenii) Date: Sun, 04 Nov 2012 08:57:38 -0500 Subject: Reverse proxy requires http header from upstream? In-Reply-To: References: Message-ID: <5785dd5ce03c48afb52bac86f49f3cba.NginxMailingListEnglish@forum.nginx.org> Jonathan Matthews Wrote: ------------------------------------------------------- > On 4 November 2012 02:25, raidenii wrote: > > The major difference between aMule and the rest is that amule web > server > > does not give a http header to the client, for when I try to use > curl to > > grab the header it gives an empty response. (Although it will return > the > > correct webpage when curl grabs the webpage) And I suppose that is > the > > reason nginx thinks connection timeout. > > > > So I wonder if there is any way to let nginx "ignore" the http > header from > > the upstream and just pass the data whatever the upstream gives? > Thanks in > > advance. > > When you say "http header", which one do you mean? > > If you mean *all* headers are missing, and the server simple responds > with data, then the server isn't actually talking HTTP at all, and > nginx is not the right tool for the job. > > If that's the case, you should consider > load-balancing/reverse-proxying it either with something that does > talk the protocol that's being used, or something generic that can > proxy at the TCP layer, like HAProxy. 
> > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I have tried backend upstream+proxy_pass, but seems not working either. Strangely when using browser to acess directly there is no problem at all. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232492,232494#msg-232494 From ne at vbart.ru Sun Nov 4 13:57:45 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sun, 4 Nov 2012 17:57:45 +0400 Subject: How to view Nginx wiki history? In-Reply-To: References: Message-ID: <201211041757.45381.ne@vbart.ru> On Saturday 03 November 2012 10:08:48 howard chen wrote: > Re: How to view Nginx wiki history? You should register an account to view the history: http://wiki.nginx.org/index.php?title=Special:UserLogin wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From contact at jpluscplusm.com Sun Nov 4 14:07:32 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 4 Nov 2012 14:07:32 +0000 Subject: How to view Nginx wiki history? In-Reply-To: <201211041757.45381.ne@vbart.ru> References: <201211041757.45381.ne@vbart.ru> Message-ID: On 4 November 2012 13:57, Valentin V. Bartenev wrote: > On Saturday 03 November 2012 10:08:48 howard chen wrote: >> Re: How to view Nginx wiki history? > > You should register an account to view the history: > http://wiki.nginx.org/index.php?title=Special:UserLogin That really sucks. Can it be configured differently to allow non-authenticated users access to page histories? As we're on the topic of historic documentation across a versioned piece of software like nginx, I'd like to put in a plea here for something to be worked out to expose the mapping of config directives to the version in which they were introduced, or when their behaviours changed - in a *structured* format. 
IMHO that's a really major documentation flaw in both the wiki and the official documentation, as the inline notes about version applicability are both sparse and not formalised/structured at all usefully. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From mozatkey at gmail.com Mon Nov 5 07:45:22 2012 From: mozatkey at gmail.com (mozatkey) Date: Mon, 05 Nov 2012 15:45:22 +0800 Subject: how to find a suitable number for the keepalive connections ? Message-ID: <50976E92.4020100@gmail.com> The |/connections/| parameter should be set low enough(??) to allow upstream servers to process additional new incoming connections as well. I am very confused about the keepalive connections in the ngx_http_upstream_module, can somebody tell me how to determine the keepalive connections in http protocol ? just like this: upstream http_backend { server 127.0.0.1:8080; keepalive 16;// why 16 ? how to determine a suitable number ? } server { ... location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Nov 5 11:00:32 2012 From: nginx-forum at nginx.us (technoplague) Date: Mon, 05 Nov 2012 06:00:32 -0500 Subject: nginx rewrite of certain mime types and request processing order Message-ID: Hi, Having following nginx virtual host config: #============================================== upstream cf-domain { server unix:/data/www/prod/domain.com/cf/cf-domain.sock; } server { listen 10.0.0.1:80; listen 127.0.0.1:28005; server_name cf.domain.com; index index.php index.html; root /data/www/prod/domain.com/cf/htdocs; charset utf-8; access_log /data/www/prod/domain.com/cf/logs/frontend/access.log; error_log /data/www/prod/domain.com/cf/logs/frontend/error.log; client_max_body_size 8m; # get rewrite location /get/ { rewrite ^/get/([^/]+)/([^/]+)$ /get/index.php?contid=$1 last; } # static content location ~* \.(jpg|jpeg|gif|css|png|js|swf|ico|rar|zip)$ { expires max; access_log off; log_not_found off; } location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi_params; fastcgi_pass cf-domain; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } #include conf.d/cf.dirs.conf; } #============================================== The problem is with processing order and /get location rewrite. The request: http://cf.domain.com/get/68175/fingergr.swf returns 404, instead by getting processed by /get location rewrite rule. This is because swf is defined in (static content) location: # static content location ~* \.(jpg|jpeg|gif|css|png|js|swf|ico|rar|zip)$ { expires max; access_log off; log_not_found off; } How is it possible to have /get location rewrite working together without removing swf type from static content location? 
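A hedged sketch of what the nginx location documentation describes for this case (untested against this setup): the `^~` modifier makes a matching prefix location win before any later regex locations are consulted, so /get/ requests never fall through to the static-content regex.

```nginx
# Sketch: "^~" makes this prefix location take precedence over the
# later "location ~* \.(jpg|...|swf|...)$" regex, so /get/... URLs
# are rewritten instead of being treated as static files.
location ^~ /get/ {
    rewrite ^/get/([^/]+)/([^/]+)$ /get/index.php?contid=$1 last;
}
```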
Thanks, T Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232503,232503#msg-232503 From francis at daoine.org Mon Nov 5 11:20:54 2012 From: francis at daoine.org (Francis Daly) Date: Mon, 5 Nov 2012 11:20:54 +0000 Subject: nginx rewrite of certain mime types and request processing order In-Reply-To: References: Message-ID: <20121105112054.GM17159@craic.sysops.org> On Mon, Nov 05, 2012 at 06:00:32AM -0500, technoplague wrote: Hi there, > The problem is with processing order and /get location rewrite. > The request: http://cf.domain.com/get/68175/fingergr.swf returns 404, > instead by getting processed by /get location rewrite rule. This is because > swf is defined in (static content) location: > > # static content > location ~* \.(jpg|jpeg|gif|css|png|js|swf|ico|rar|zip)$ { > expires max; > access_log off; > log_not_found off; > } > > How is it possible to have /get location rewrite working together without > removing swf type from static content location? http://nginx.org/r/location Note particularly the various things in between "location" and "uri", and the example configurations D and E. f -- Francis Daly francis at daoine.org From howachen at gmail.com Mon Nov 5 17:12:05 2012 From: howachen at gmail.com (howard chen) Date: Tue, 6 Nov 2012 01:12:05 +0800 Subject: fastcgi_intercept_errors & error_page Message-ID: According to the doc: http://wiki.nginx.org/HttpFastcgiModule#fastcgi_intercept_errors Note: You need to explicitly define the error_page handler for this for it to be useful. As Igor says, "nginx does not intercept an error if there is no custom handler for it it does not show its default pages. This allows to intercept some errors, while passing others as are." Actually I still can't understand the exact meaning, so I have done some experiments. 1. turn on fastcgi_intercept_errors, - in the backend php/fcgi send a 404 header, - set the error_page (php) Result: nginx uses the default error template 2. 
turn off fastcgi_intercept_errors, - in the backend php/fcgi send a 404 header - set the error_page (php) Result: now the custom error_page (php) is being used. So it seems to me that *fastcgi_intercept_errors should be off and the error_page set* if I need to specify a custom error handler; is this interpretation correct? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at zwobble.org Mon Nov 5 21:24:20 2012 From: mike at zwobble.org (Michael Williamson) Date: Mon, 05 Nov 2012 21:24:20 +0000 Subject: Relocatable installations Message-ID: <1352150660.27907.140661149870841.4E360780@webmail.messagingengine.com> Hello, Is it possible to compile nginx such that installations can be moved? For instance, say we install nginx to /tmp/original/nginx: ./configure --prefix=/tmp/original/nginx make make install And we then move the installation directory to /tmp/relocated/nginx: mv /tmp/original/nginx /tmp/relocated/nginx Is there a way to run nginx in its new directory as though it had been installed there directly, using either compile- or run-time options? Thanks Mike From r at roze.lv Mon Nov 5 21:43:01 2012 From: r at roze.lv (Reinis Rozitis) Date: Mon, 5 Nov 2012 23:43:01 +0200 Subject: Relocatable installations In-Reply-To: <1352150660.27907.140661149870841.4E360780@webmail.messagingengine.com> References: <1352150660.27907.140661149870841.4E360780@webmail.messagingengine.com> Message-ID: <09F705EE3C6D4E7F95FB7DCBB643B37E@MezhRoze> > Is there a way to run nginx in its new directory as though it had been > installed there directly, using either compile- or run-time options? You can use the -p switch of the 'nginx' binary to change the default install-time prefix (and/or -c for a different config file path). 
./nginx -p /tmp/relocated/nginx http://wiki.nginx.org/CommandLine rr From mike at zwobble.org Mon Nov 5 21:52:29 2012 From: mike at zwobble.org (Michael Williamson) Date: Mon, 05 Nov 2012 21:52:29 +0000 Subject: Relocatable installations In-Reply-To: <09F705EE3C6D4E7F95FB7DCBB643B37E@MezhRoze> References: <1352150660.27907.140661149870841.4E360780@webmail.messagingengine.com> <09F705EE3C6D4E7F95FB7DCBB643B37E@MezhRoze> Message-ID: <1352152349.4292.140661149880961.34063152@webmail.messagingengine.com> > > Is there a way to run nginx in its new directory as though it had been > > installed there directly, using either compile- or run-time options? > > You can use the -p switch after 'nginx' binary to change the default > install time prefix (and/or -c for different path for config file). Aha! That seems to be what I'm looking for. Not sure how I missed that -- I think I must have been expecting it to be a compile-time option rather than a run-time option. Thanks for pointing it out. Mike From nginx-forum at nginx.us Tue Nov 6 05:40:34 2012 From: nginx-forum at nginx.us (justin) Date: Tue, 06 Nov 2012 00:40:34 -0500 Subject: Dynamic error_log location using regex captures Message-ID: <82baf78bd05552075b64fec7d06d3b14.NginxMailingListEnglish@forum.nginx.org> Why can't I use a dynamic error_log location like I do with access_log? Example config: server_name ~^(?.+)\.my-domain\.com$; root /srv/www/accounts/$username/app; access_log /var/log/nginx/accounts/$username/access.log; error_log /var/log/nginx/accounts/$username/error.log; This does not work, since for some reason nginx chokes out on the dynamic error_log location. Feature request? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232516,232516#msg-232516 From nginx-forum at nginx.us Tue Nov 6 08:50:21 2012 From: nginx-forum at nginx.us (merryflip) Date: Tue, 06 Nov 2012 03:50:21 -0500 Subject: How to easily make digitize books or magazines? 
In-Reply-To: <1fdad5a9284948bd872e7647ccf8551e@ruby-forum.com> References: <1fdad5a9284948bd872e7647ccf8551e@ruby-forum.com> Message-ID: I have seen many .pdf and other file formats of books and magazines. I was wondering how they are digitized, especially in mass quantities? Not only how is it done, but how can it be done at a realistic pace? I can't see scanning every single page and not getting very good quality. There mus be an easier way to get both better results and faster speeds. I would recommend you to use xFlip flip pages software. It is pretty nice and easy to use. You just need to import the needed document of almost every format, edit the look of your publication and click on "Publish". Your book will be created shortly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,151087,232518#msg-232518 From nginx-forum at nginx.us Tue Nov 6 12:10:35 2012 From: nginx-forum at nginx.us (baroc) Date: Tue, 06 Nov 2012 07:10:35 -0500 Subject: what is wrong live stream http conf Message-ID: have created the following simple conf .. 
#user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; limit_conn_zone $remote_user zone=peruser:10m; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; server { listen 9901; server_name 5.152.204.130; location = / { auth_basic "Restricted"; auth_basic_user_file /usr/local/nginx/users/user; limit_conn peruser 1; proxy_pass http://192.168.1.200:44444; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect off; proxy_buffering off; proxy_ignore_client_abort off; proxy_connect_timeout 180; proxy_send_timeout 21600; proxy_read_timeout 21600; proxy_max_temp_file_size 0; } #if ($http_user_agent ~* "vlc" ) {return 403; } error_page 403 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/nginx/html; } } } Connected to this conf after 50 people are starting to freeze. ( I saw the other stream in the port flowed smoothly but freezes when I connect nginx port. ) I wonder where did I go wrong? thank you in advance .... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232525,232525#msg-232525 From ne at vbart.ru Tue Nov 6 12:33:20 2012 From: ne at vbart.ru (Valentin V. 
Bartenev) Date: Tue, 6 Nov 2012 16:33:20 +0400 Subject: Dynamic error_log location using regex captures In-Reply-To: <82baf78bd05552075b64fec7d06d3b14.NginxMailingListEnglish@forum.nginx.org> References: <82baf78bd05552075b64fec7d06d3b14.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211061633.20453.ne@vbart.ru> On Tuesday 06 November 2012 09:40:34 justin wrote: > Why can't I use a dynamic error_log location like I do with access_log? > [...] Because it is "error_log" that may contain critical errors, therefore it must always be accessible. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From black.fledermaus at arcor.de Tue Nov 6 13:37:36 2012 From: black.fledermaus at arcor.de (basti) Date: Tue, 06 Nov 2012 14:37:36 +0100 Subject: example.com/foo redirect to example.com/foo/index.php Message-ID: <509912A0.1060800@arcor.de> hello, my config looks like: server { listen 443; server_name foobar; access_log /var/log/nginx/https.access_log; error_log /var/log/nginx/https.error_log debug; index index.php index.html index.htm; fastcgi_index index.php; root /home/www/ssl/foobar; ssl on; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; location ~ ^/mailadmin/(.*\.php)$ { root /home/www/ssl/foobar/mailadmin/bin/$1; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; } } a request to https://foobar/mailadmin/index.php is handled very well and the index.php is shown; a request to https://foobar/mailadmin/ shows Error 403 Forbidden. How can I solve this so that a request to https://foobar/mailadmin/ also shows the index.php? -------------- next part -------------- An HTML attachment was scrubbed... URL: From howachen at gmail.com Tue Nov 6 13:42:22 2012 From: howachen at gmail.com (howard chen) Date: Tue, 6 Nov 2012 21:42:22 +0800 Subject: Any caveat in using post_action? 
Message-ID: Hi, I am going to use post_action for mirroring the load for performance testing to a new backend. But according to the doc: >> Note: this directive "has subtleties" according to Maxim Dounin, so use at your own risk. Some questions: 1. Is the post_action blocking to the user request? Or is the sub request truly async and not affecting the user? 2. Any caveats in using it in a high-load env, e.g. stability? So would anyone mind sharing their experience? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 6 14:43:20 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Nov 2012 18:43:20 +0400 Subject: Any caveat in using post_action? In-Reply-To: References: Message-ID: <20121106144320.GO40452@mdounin.ru> Hello! On Tue, Nov 06, 2012 at 09:42:22PM +0800, howard chen wrote: > Hi, > > I am going to use post_action for mirroring the load for performance > testing to new backend. > > But according to the doc: > > >> Note: this directive "has subtleties" according to Maxim Dounin, so use > at your own risk. There is no official documentation on the "post_action" directive (you may check yourself at http://nginx.org/en/docs/), and it's on purpose. What you are quoting is the wiki. > Some questions: > > 1. Is the post_action blocking to the user request? Or the sub request is > truly async and not affecting to the user? The post_action is executed in the context of the main request, and will block further work on the same client connection till it's complete. That is, it's unlikely to be a good idea to use post_action for performance testing. I would recommend logging requests instead and re-executing them with a separate process. [...] 
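[Editor's note] Maxim's alternative (log the requests, then re-execute them with a separate process) could be sketched with a dedicated log format. The format name "replay", the log path, and the replay tooling below are illustrative assumptions, not from the thread:

```nginx
# Write one replayable line per request, e.g. "GET http://example.com/foo".
# "replay" and the log path are made-up names for this sketch.
log_format replay '$request_method $scheme://$host$request_uri';
access_log /var/log/nginx/replay.log replay;
```

A small external script can then read replay.log and re-issue each line (for example with curl) against the new backend, keeping the test traffic entirely off the live client connections.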
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Tue Nov 6 22:31:54 2012 From: nginx-forum at nginx.us (tharsan) Date: Tue, 06 Nov 2012 17:31:54 -0500 Subject: Writing the total request time in seconds to an nginx access log, possibly using a calculated variable Message-ID: I'm trying to modify my nginx access log format to include the request duration, in seconds. I see two possible variables I could use: 1) $request_time (http://wiki.nginx.org/HttpLogModule#log_format) 2) $upstream_response_time (http://wiki.nginx.org/NginxHttpUpstreamModule#.24upstream_response_time) However both of these variables are expressed in microseconds, and I need this value to be rendered in seconds. Is there any way to specify the output as an expression (i.e. $request_time * 1000) or accomplish this in some other way? Thanks, Tharsan Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232547,232547#msg-232547 From tharanga.abeyseela at gmail.com Wed Nov 7 01:43:40 2012 From: tharanga.abeyseela at gmail.com (Tharanga Abeyseela) Date: Wed, 7 Nov 2012 12:43:40 +1100 Subject: nginx auth_basic with proxy pass to tomcat Message-ID: Hi Guys, I need to add basic auth to my home page (index.html) (Served by nginx) and other directories resides on tomcat7. is there anyway i can add only authentication to index.html . i was using the following nginx configuration. server { access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; index index.html; root /var/www/; server_name xxxxxxxx; } location / { auth_basic "Restricted"; auth_basic_user_file /var/www/.htpass; } location /next { proxy_pass http://localhost:8080/next; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; } when i try to add the above config, it asks for the user/pass, but it asks for the user/pass when i try to access /next. 
but i need to add authentication only to index.html. problem is using the root directory, so all requests will be tunneled through root and prompted for a password. but is there any way i can restrict access only to index.html, once it authenticated, users will be able to access tomcat paths . Thanks in advance, Tharanga From david at styleflare.com Wed Nov 7 01:47:38 2012 From: david at styleflare.com (David J) Date: Tue, 6 Nov 2012 20:47:38 -0500 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: References: Message-ID: Yeah use /index.HTML for the location block On Nov 6, 2012 8:43 PM, "Tharanga Abeyseela" wrote: > Hi Guys, > > I need to add basic auth to my home page (index.html) (Served by > nginx) and other directories resides on tomcat7. is there anyway i > can add only authentication to index.html . i was using the following > nginx configuration. > > server { > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > index index.html; > root /var/www/; > server_name xxxxxxxx; > } > > location / { > auth_basic "Restricted"; > auth_basic_user_file /var/www/.htpass; > } > > > > location /next { > proxy_pass http://localhost:8080/next; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_max_temp_file_size 0; > } > > when i try to add the above config, it asks for the user/pass, but it > asks for the user/pass when i try to access /next. but i need to add > authentication only to index.html. problem is using the root > directory, so all requests will be tunneled through root and prompted > for a password. but is there any way i can restrict access only to > index.html, once it authenticated, users will be able to access tomcat > paths . > > Thanks in advance, > Tharanga > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tharanga.abeyseela at gmail.com Wed Nov 7 01:52:56 2012 From: tharanga.abeyseela at gmail.com (Tharanga Abeyseela) Date: Wed, 7 Nov 2012 12:52:56 +1100 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: References: Message-ID: Thanks David, i tried it. but it still asks the user/pass when i hit the /next inside index.html any idea why ? thanks, tharanga On Wed, Nov 7, 2012 at 12:47 PM, David J wrote: > Yeah use /index.HTML for the location block > > On Nov 6, 2012 8:43 PM, "Tharanga Abeyseela" > wrote: >> >> Hi Guys, >> >> I need to add basic auth to my home page (index.html) (Served by >> nginx) and other directories resides on tomcat7. is there anyway i >> can add only authentication to index.html . i was using the following >> nginx configuration. >> >> server { >> access_log /var/log/nginx/access.log; >> error_log /var/log/nginx/error.log; >> index index.html; >> root /var/www/; >> server_name xxxxxxxx; >> } >> >> location / { >> auth_basic "Restricted"; >> auth_basic_user_file /var/www/.htpass; >> } >> >> >> >> location /next { >> proxy_pass http://localhost:8080/next; >> proxy_redirect off; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_max_temp_file_size 0; >> } >> >> when i try to add the above config, it asks for the user/pass, but it >> asks for the user/pass when i try to access /next. but i need to add >> authentication only to index.html. problem is using the root >> directory, so all requests will be tunneled through root and prompted >> for a password. but is there any way i can restrict access only to >> index.html, once it authenticated, users will be able to access tomcat >> paths . 
>> >> Thanks in advance, >> Tharanga >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From Alex.Samad at yieldbroker.com Wed Nov 7 01:55:13 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Wed, 7 Nov 2012 01:55:13 +0000 Subject: sticky module was -> RE: AJP Message-ID: Hi Late reply, wondering if any one has any example of setting up stick module and using jsessionid cookie ? Alex > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of J?r?me Loyet > Sent: Friday, 26 October 2012 7:24 AM > To: nginx at nginx.org > Subject: Re: AJP > > 2012/10/25 Alex Samad - Yieldbroker : > > Hi > > > > [snip] > > > >> >> Behalf Of J?r?me Loyet > >> >> Sent: Friday, 26 October 2012 3:35 AM > >> >> To: nginx at nginx.org > >> >> Subject: Re: AJP > > [snip] > > > >> > > >> > Still my question though how did you deal with stickiness? > >> > >> http://code.google.com/p/nginx-sticky-module/ > > > > So just for my understand what is the difference between this 3rd party > add on and the ajp 3rd party addon. > > > > I understand one is on the wiki and one is not. But does that mean the > sticky module is supported as well ? > > > > Does stick respect jsession cookie. So if server fails will the connection go to > new server and then eventually bounce back to the restored old server ? > > > sticky module does not deal with jsession cookie. It creates a cookie on its > own which indicates which backend has been used and nginx will always use > the backend from the cookie. > > If no cookie are sent (first time), nginx will use standard round robin to > choose one server and then send back a cookie indicating which backend > server has been used. 
> > If the backend from the server is down, classic round robin takes place and a > new cookie is sent with the new backend to use. > > see the main page of http://code.google.com/p/nginx-sticky-module/ > with the explanation and a smal schematic which is far better than any > explaination > > > > > > > > >> > >> simple and efficient ! works like a charms without any configuration > >> on the tomcat side and no need to setup complex session sharing > >> system on the tomcat side > > > > That's good like KISS > > > > [snip] > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From yaoweibin at gmail.com Wed Nov 7 02:23:10 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 7 Nov 2012 10:23:10 +0800 Subject: sticky module was -> RE: AJP In-Reply-To: References: Message-ID: Have a look at this module? http://code.google.com/p/nginx-upstream-jvm-route/ Thanks. 2012/11/7 Alex Samad - Yieldbroker > Hi > > Late reply, wondering if any one has any example of setting up stick > module and using jsessionid cookie ? > > Alex > > > -----Original Message----- > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > > Behalf Of J?r?me Loyet > > Sent: Friday, 26 October 2012 7:24 AM > > To: nginx at nginx.org > > Subject: Re: AJP > > > > 2012/10/25 Alex Samad - Yieldbroker : > > > Hi > > > > > > [snip] > > > > > >> >> Behalf Of J?r?me Loyet > > >> >> Sent: Friday, 26 October 2012 3:35 AM > > >> >> To: nginx at nginx.org > > >> >> Subject: Re: AJP > > > [snip] > > > > > >> > > > >> > Still my question though how did you deal with stickiness? 
> > >> > > >> http://code.google.com/p/nginx-sticky-module/ > > > > > > So just for my understand what is the difference between this 3rd party > > add on and the ajp 3rd party addon. > > > > > > I understand one is on the wiki and one is not. But does that mean the > > sticky module is supported as well ? > > > > > > Does stick respect jsession cookie. So if server fails will the > connection go to > > new server and then eventually bounce back to the restored old server ? > > > > > > sticky module does not deal with jsession cookie. It creates a cookie on > its > > own which indicates which backend has been used and nginx will always use > > the backend from the cookie. > > > > If no cookie are sent (first time), nginx will use standard round robin > to > > choose one server and then send back a cookie indicating which backend > > server has been used. > > > > If the backend from the server is down, classic round robin takes place > and a > > new cookie is sent with the new backend to use. > > > > see the main page of http://code.google.com/p/nginx-sticky-module/ > > with the explanation and a smal schematic which is far better than any > > explaination > > > > > > > > > > > > > > >> > > >> simple and efficient ! 
works like a charms without any configuration > >> on the tomcat side and no need to setup complex session sharing > >> system on the tomcat side > > > > That's good like KISS > > > > [snip] > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Samad at yieldbroker.com Wed Nov 7 03:20:40 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Wed, 7 Nov 2012 03:20:40 +0000 Subject: sticky module was -> RE: AJP In-Reply-To: References: Message-ID: Hi I did, but to me it seems to be a question of support. So from my earlier inquiries into this on the list. The suggestion was to move to sticky, which is listed in the wiki page for add-ons. The jvm_route and the ajp module are not. So far I have been advised on the ML that sticky is the way to go. I am setting up some tests for this Like to get a feel for how other people are doing this. I am already using the development branch to get SSL working properly! I am not 100% sure about using 3rd party modules (that are not on the wiki page; why that should give me a better feeling I am not sure. Makes it easier to point it out to the boss !) Alex From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of ??? Sent: Wednesday, 7 November 2012 1:23 PM To: nginx at nginx.org Subject: Re: sticky module was -> RE: AJP Have a look at this module? http://code.google.com/p/nginx-upstream-jvm-route/ Thanks. 
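[Editor's note] For reference, the jvm-route module linked above is configured roughly as follows per its project page; the addresses and srun_id values here are illustrative, and each srun_id has to match the jvmRoute attribute in the corresponding Tomcat's server.xml:

```nginx
upstream backend {
    server 192.168.0.100:8080 srun_id=jvm1;  # jvmRoute="jvm1" in that Tomcat's server.xml
    server 192.168.0.101:8080 srun_id=jvm2;  # jvmRoute="jvm2"
    # Route on the JSESSIONID cookie; "reverse" matches the ".jvmX"
    # suffix that Tomcat appends to the session id.
    jvm_route $cookie_JSESSIONID reverse;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Unlike the sticky module, this approach reuses Tomcat's own jsessionid suffix for stickiness instead of setting an extra cookie.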
2012/11/7 Alex Samad - Yieldbroker > Hi Late reply, wondering if any one has any example of setting up stick module and using jsessionid cookie ? Alex > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of J?r?me Loyet > Sent: Friday, 26 October 2012 7:24 AM > To: nginx at nginx.org > Subject: Re: AJP > > 2012/10/25 Alex Samad - Yieldbroker >: > > Hi > > > > [snip] > > > >> >> Behalf Of J?r?me Loyet > >> >> Sent: Friday, 26 October 2012 3:35 AM > >> >> To: nginx at nginx.org > >> >> Subject: Re: AJP > > [snip] > > > >> > > >> > Still my question though how did you deal with stickiness? > >> > >> http://code.google.com/p/nginx-sticky-module/ > > > > So just for my understand what is the difference between this 3rd party > add on and the ajp 3rd party addon. > > > > I understand one is on the wiki and one is not. But does that mean the > sticky module is supported as well ? > > > > Does stick respect jsession cookie. So if server fails will the connection go to > new server and then eventually bounce back to the restored old server ? > > > sticky module does not deal with jsession cookie. It creates a cookie on its > own which indicates which backend has been used and nginx will always use > the backend from the cookie. > > If no cookie are sent (first time), nginx will use standard round robin to > choose one server and then send back a cookie indicating which backend > server has been used. > > If the backend from the server is down, classic round robin takes place and a > new cookie is sent with the new backend to use. > > see the main page of http://code.google.com/p/nginx-sticky-module/ > with the explanation and a smal schematic which is far better than any > explaination > > > > > > > > >> > >> simple and efficient ! 
works like a charms without any configuration > >> on the tomcat side and no need to setup complex session sharing > >> system on the tomcat side > > > > That's good like KISS > > > > [snip] > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaoweibin at gmail.com Wed Nov 7 04:23:14 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 7 Nov 2012 12:23:14 +0800 Subject: sticky module was -> RE: AJP In-Reply-To: References: Message-ID: Hey, That's the opensource module. It may not be as official as the nginx native module. It all depends on the module author's ability and time. You should use it after you test it completely. It's free. If you want official support, you can contact the nginx company: http://nginx.com/support.html 2012/11/7 Alex Samad - Yieldbroker > Hi > > I did, but to me it seems to be a question of support. > > So from my earlier inquiries into this on the list. The suggestion was to > move to sticky, which is listed in the wiki page for add-ons. > > The jvm_route and the ajp module are not. > > So far I have been advised on the ML that sticky is the way to go. > > I am setting up some tests for this > > Like to get a feel for how other people are doing this. > > I am already using the development branch to get SSL working properly! > > I am not 100% sure about using 3rd party modules (that are not on the > wiki page; why that should give me a better feeling I am not sure. Makes > it easier to point it out to the boss !) > > Alex > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of ??? > Sent: Wednesday, 7 November 2012 1:23 PM > To: nginx at nginx.org > Subject: Re: sticky module was -> RE: AJP > > Have a look at this module? > http://code.google.com/p/nginx-upstream-jvm-route/ > > Thanks. > > 2012/11/7 Alex Samad - Yieldbroker > > Hi > > Late reply, wondering if any one has any example of setting up stick > module and using jsessionid cookie ? > > Alex > > > -----Original Message----- > > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > > Behalf Of Jérôme Loyet > > Sent: Friday, 26 October 2012 7:24 AM > > To: nginx at nginx.org > > Subject: Re: AJP > > > > 2012/10/25 Alex Samad - Yieldbroker : > > > Hi > > > > > > [snip] > > > > > >> >> Behalf Of Jérôme Loyet > > >> >> Sent: Friday, 26 October 2012 3:35 AM > > >> >> To: nginx at nginx.org > > >> >> Subject: Re: AJP > > > [snip] > > > > > >> > > > >> > Still my question though how did you deal with stickiness? > > >> > > >> http://code.google.com/p/nginx-sticky-module/ > > > > > > So just for my understand what is the difference between this 3rd party > > add on and the ajp 3rd party addon. > > > > > > I understand one is on the wiki and one is not. But does that mean the > > sticky module is supported as well ? > > > > > > Does stick respect jsession cookie. So if server fails will the > connection go to > > new server and then eventually bounce back to the restored old server ? > > > > > > sticky module does not deal with jsession cookie. It creates a cookie on > its > > own which indicates which backend has been used and nginx will always use > > the backend from the cookie. 
> > > > If no cookie are sent (first time), nginx will use standard round robin > to > > choose one server and then send back a cookie indicating which backend > > server has been used. > > > > If the backend from the server is down, classic round robin takes place > and a > > new cookie is sent with the new backend to use. > > > > see the main page of http://code.google.com/p/nginx-sticky-module/ > > with the explanation and a smal schematic which is far better than any > > explaination > > > > > > > > > > > > > > >> > > >> simple and efficient ! works like a charms without any configuration > > >> on the tomcat side and no need to setup complex session sharing > > >> system on the tomcat side > > > > > > That's good like KISS > > > > > > [snip] > > > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Samad at yieldbroker.com Wed Nov 7 04:57:12 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Wed, 7 Nov 2012 04:57:12 +0000 Subject: sticky module was -> RE: AJP In-Reply-To: References: Message-ID: Hi Yes that's my problem: "test it completely". Does that mean nginx will support a 3rd party module ? 
A From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of ??? Sent: Wednesday, 7 November 2012 3:23 PM To: nginx at nginx.org Subject: Re: sticky module was -> RE: AJP Hey, That's the opensource module. It may not as official as the nginx native module. It all depends the module author's ability and time. You should use it after you test it completely. It's free. If you want the official support, you can contact with nginx company: http://nginx.com/support.html 2012/11/7 Alex Samad - Yieldbroker > Hi I did, but to me it seems to be a question of support. So from my earlier inquiries into this on the list. The suggestion was to move to sticky ? which is listed in wiki page for add ons. The jvm_route and the ajp module are not. So far I have been advised on the ML that sticky is the way to go. I am setting up some tests for this Like to get a feel for how other people are doing this. I am already using the development branch to get SSL working properly! I am not 100% sure about using 3rd party modules (that are not on the wiki page ? why that should give me a better feeling I am not sure. Makes it easier to point it out to the boss !) Alex From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of ??? Sent: Wednesday, 7 November 2012 1:23 PM To: nginx at nginx.org Subject: Re: sticky module was -> RE: AJP Have a look at this module? http://code.google.com/p/nginx-upstream-jvm-route/ Thanks. 2012/11/7 Alex Samad - Yieldbroker > Hi Late reply, wondering if any one has any example of setting up stick module and using jsessionid cookie ? 
Alex > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of J?r?me Loyet > Sent: Friday, 26 October 2012 7:24 AM > To: nginx at nginx.org > Subject: Re: AJP > > 2012/10/25 Alex Samad - Yieldbroker >: > > Hi > > > > [snip] > > > >> >> Behalf Of J?r?me Loyet > >> >> Sent: Friday, 26 October 2012 3:35 AM > >> >> To: nginx at nginx.org > >> >> Subject: Re: AJP > > [snip] > > > >> > > >> > Still my question though how did you deal with stickiness? > >> > >> http://code.google.com/p/nginx-sticky-module/ > > > > So just for my understand what is the difference between this 3rd party > add on and the ajp 3rd party addon. > > > > I understand one is on the wiki and one is not. But does that mean the > sticky module is supported as well ? > > > > Does stick respect jsession cookie. So if server fails will the connection go to > new server and then eventually bounce back to the restored old server ? > > > sticky module does not deal with jsession cookie. It creates a cookie on its > own which indicates which backend has been used and nginx will always use > the backend from the cookie. > > If no cookie are sent (first time), nginx will use standard round robin to > choose one server and then send back a cookie indicating which backend > server has been used. > > If the backend from the server is down, classic round robin takes place and a > new cookie is sent with the new backend to use. > > see the main page of http://code.google.com/p/nginx-sticky-module/ > with the explanation and a smal schematic which is far better than any > explaination > > > > > > > > >> > >> simple and efficient ! 
works like a charms without any configuration > >> on the tomcat side and no need to setup complex session sharing > >> system on the tomcat side > > > > That's good like KISS > > > > [snip] > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 7 09:01:09 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Nov 2012 09:01:09 +0000 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: References: Message-ID: <20121107090109.GP17159@craic.sysops.org> On Wed, Nov 07, 2012 at 12:43:40PM +1100, Tharanga Abeyseela wrote: Hi there, > I need to add basic auth to my home page (index.html) (Served by > nginx) and other directories resides on tomcat7. is there anyway i > can add only authentication to index.html . "location = /index.html" will only apply to /index.html. Put your configuration in there. > i was using the following > nginx configuration. > > server { > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > index index.html; > root /var/www/; > server_name xxxxxxxx; > } Are you sure? server{}, and then location{} outside it? 
> location / { > auth_basic "Restricted"; > auth_basic_user_file /var/www/.htpass; > } > > location /next { > proxy_pass http://localhost:8080/next; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_max_temp_file_size 0; > } > > when i try to add the above config, it asks for the user/pass, but it > asks for the user/pass when i try to access /next. When I try the above config, it does what you say you want. (It should challenge for authentication only for any request that does not begin "/next".) What is the output you get for curl -i http://xxxxxxxx/ and curl -i http://xxxxxxxx/next ? Are you sure that you are using this server{} block in nginx? Are you sure that the server on localhost:8080 is not redirecting you to /? > but i need to add > authentication only to index.html. problem is using the root > directory, so all requests will be tunneled through root and prompted > for a password. but is there any way i can restrict access only to > index.html, once it authenticated, users will be able to access tomcat > paths . I'm not quite sure what you mean by that last bit. If you require authentication for /index.html, then you can't expect authentication credentials to be sent for the tomcat paths. So the user will get to the tomcat paths whether or not they first authenticated, at least as far as nginx is concerned. 
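[Editor's note] Concretely, the `location = /index.html` approach suggested earlier in this thread might look like the sketch below, inside the server{} block and using the paths from the thread. A request for "/" is internally redirected to /index.html by the index directive, and that internal redirect re-runs location matching, so the exact-match block covers both URLs:

```nginx
# Challenge only for the homepage; also catches "/" via the
# index module's internal redirect to /index.html.
location = /index.html {
    auth_basic "Restricted";
    auth_basic_user_file /var/www/.htpass;
}

# Everything else under / is served without authentication.
location / {
}

# The proxied tomcat paths stay open as well.
location /next {
    proxy_pass http://localhost:8080/next;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_max_temp_file_size 0;
}
```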
f -- Francis Daly francis at daoine.org From i.hailperin at heinlein-support.de Wed Nov 7 09:49:22 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Wed, 07 Nov 2012 10:49:22 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned Message-ID: <509A2EA2.60704@heinlein-support.de> Hi, after restarting nginx I find 2012/11/07 10:24:02 [alert] 23635#0: 512 worker_connections are not enough 2012/11/07 10:24:02 [alert] 23636#0: 512 worker_connections are not enough 2012/11/07 10:24:04 [alert] 23618#0: cache manager process 23635 exited with fatal code 2 and cannot be respawned in my logs. It seems like this error came up after adding more than 2500 virtual hosts, each consisting of two server blocks, one for http, and one for https. Now I don't quite understand these messages. In my nginx.conf I have user www-data; worker_processes 16; pid /var/run/nginx.pid; worker_rlimit_nofile 65000; events { worker_connections 2000; use epoll; # multi_accept on; } so that should be enough worker_connections. Why am I still getting this message? For the other message regarding the cache manager, I found this http://www.ruby-forum.com/topic/519162 thread, where Maxim Dounin suggests that it results from the kernel not supporting eventfd(). But as far as I understand this is only an issue with kernels before 2.6.18.
I use 2.6.32 and my kernel config clearly states CONFIG_EVENTFD=y Here is the nginx version and configure options: root at debian:~# nginx -V nginx version: nginx/1.2.4 TLS SNI support enabled configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 Any ideas? Isaac From sb at waeme.net Wed Nov 7 10:12:00 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Wed, 7 Nov 2012 14:12:00 +0400 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: References: Message-ID: On 7 Nov2012, at 05:46 , Tharanga Abeyseela wrote: > Hi Guys, > > I need to add basic auth to my home page (index.html) (Served by > nginx) and other directories resides on tomcat7. is there anyway i > can add only authentication to index.html . i was using the following > nginx configuration. 
> > server { > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > index index.html; > root /var/www/; > server_name xxxxxxxx; > } > > location / { > auth_basic "Restricted"; > auth_basic_user_file /var/www/.htpass; > } > > > > location /next { > proxy_pass http://localhost:8080/next; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_max_temp_file_size 0; > } > > when i try to add the above config, it asks for the user/pass, but it > asks for the user/pass when i try to access /next. but i need to add > authentication only to index.html. problem is using the root > directory, so all requests will be tunneled through root and prompted > for a password. but is there any way i can restrict access only to > index.html, once it authenticated, users will be able to access tomcat > paths . You have to add auth_basic off; to "location /next" or move auth_basic* directives to "location = /index.html" From anebi at iguanait.com Wed Nov 7 14:49:56 2012 From: anebi at iguanait.com (Ali Nebi) Date: Wed, 7 Nov 2012 16:49:56 +0200 Subject: load balancing between 3 servers where nginx and apache installed on each of them Message-ID: Hello, I have 3 servers and on each server is installed nginx and apache. I want to set these 3 servers load balance each other. One of them will be main server that will accept the traffic from clients and balance to the other 2 servers. I have a failover ip that can be switched at any time to the other 2 servers in case that main server fail totally. So I have a question about nginx configuration related to upstreams part. Should I proceed that way: 1. On Main server i set upstreas like that: server 127.0.0.1:8080; (this is apache installed on same machine) server server2; (this is second server, on it on port 80 listens nginx) server server3 backup; (this is third server, on it on port 80 listens nginx) 2. 
On the second and third servers I set only one upstream: server 127.0.0.1:8080; (I set only this one, and not the others, to prevent a loop) Is this the correct way to do it, or is there a better way? Awaiting your reply. Thank you in advance. Best regards, Ali -- Iguana Information Technologies, SL Calle López de Hoyos 35, 1º 28002 Madrid, España (Spain) +34 915569100 +34 649336286 http://www.iguanait.com/ Confidentiality notice ---------------------- This message contains private and confidential information. If you are not the named addressee, you are not authorized to read, print, retain, copy or disseminate this message or any part of it. In case you receive this message by mistake you should delete it. Thanks. From nginx-forum at nginx.us Wed Nov 7 15:00:00 2012 From: nginx-forum at nginx.us (corby) Date: Wed, 07 Nov 2012 10:00:00 -0500 Subject: 502 Bad Gateway - Wordpress In-Reply-To: <4a627e25ce0e647a21e2a29f440bf97e.NginxMailingListEnglish@forum.nginx.org> References: <4a627e25ce0e647a21e2a29f440bf97e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Glad to hear it, JerW. I'm having a similar problem now. If you don't mind, could you please share what you did to fix it?
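Going back to the three-server load-balancing question earlier in the thread, the two-tier layout Ali describes might be sketched like this (server names are the placeholders from the post; an untested illustration, not a recommendation):

```nginx
# On the main (front) server: balance between the local apache,
# nginx on the second server, and the third server as backup only.
upstream backend {
    server 127.0.0.1:8080;   # apache on the same machine
    server server2;          # nginx on the second server, port 80
    server server3 backup;   # used only when the others are down
}

# On the second and third servers: a single local upstream,
# so requests cannot loop back between the nginx front ends.
upstream local_backend {
    server 127.0.0.1:8080;   # local apache
}
```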
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,222873,232578#msg-232578 From lists at ruby-forum.com Wed Nov 7 20:40:49 2012 From: lists at ruby-forum.com (Dave Nolan) Date: Wed, 07 Nov 2012 21:40:49 +0100 Subject: resolver does not re-resolve upstream servers after initial cache Message-ID: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> Using nginx 1.2.3-stable on Ubuntu 12.04 I have the following config: http { resolver 172.16.0.23 valid=300s; resolver_timeout 10s; upstream myupstream { server example.com; } server { listen 80 default_server; location / { proxy_pass http://myupstream$request_uri; proxy_pass_request_headers on; proxy_set_header Host $host; } } } As I understand it, without the resolver config, nginx will resolve example.com's IP once on load and cache it until it stops or fully reloads the config. With the resolver config above, nginx should re-resolve the IP every 5 mins. However, this is not happening: I can watch tcpdump -n udp port 53 but I see no re-resolution taking place. I'd love to know how to fix this. Any advice appreciated, thanks! -- Posted via http://www.ruby-forum.com/. From oliviermo75 at gmail.com Wed Nov 7 21:45:02 2012 From: oliviermo75 at gmail.com (Olivier Morel) Date: Wed, 7 Nov 2012 22:45:02 +0100 Subject: i have a issue with a server block (virtual host) Message-ID: Hi everybody, I am trying to create a virtual host (server block) on nginx, and I have two issues. When I go to http://localhost I see my website, but when I try to go to the virtual host (server block) http://localhost/websvn I get an error, because nginx tries to find a page named websvn on my first website. I don't understand why?
*/conf/nginx.conf* http { > include mime.types; > passenger_root > /usr/local/centOs/rvm/gems/ruby-1.9.3-p125/gems/passenger-3.0.11; # it's > for Ruby on Rails > passenger_ruby > /usr/local/centOs/rvm/bin/ruby-1.9.3-p125; > # it's for Ruby on Rails > passenger_max_pool_size 10; > # it's for Ruby on Rails > > default_type application/octet-stream; > > sendfile on; > ## TCP options > tcp_nopush on; > tcp_nodelay on; > ## Timeout > keepalive_timeout 65; > > types_hash_max_size 2048; > server_names_hash_bucket_size 128; > proxy_cache_path /mnt/donner/nginx/cache levels=1:2 keys_zone=one:10m; > gzip on; > server_tokens off; > > # =======add server block ====== ## > > include /usr/local/centOs/nginx/vhosts-available/*; > include /usr/local/centOs/nginx/vhosts-available/*.conf; > > > server { > listen 80; > #server_name localhost; > passenger_enabled on; > passenger_use_global_queue on; > > error_log /home/logs/nginx/error.log; > access_log /home/logs/nginx/access-global.log; > } } > */vhost-enable/srv-websvn.conf* server { > > listen 80; > server_name srv-websvn; > # rewrite ^/(.*) http://localhost/$1 permanent; > > error_log /home/logs/websvn/error.log; > access_log /home/logs/websvn/access.log; > } > > server { > > listen 80; > server_name srv-websvn; > > error_log /home/logs/websvn/error.log; > access_log /home/logs/websvn/access.log; > location / { > > root /home/sites_web/websvn/; > index index.html; > #try_files $uri $uri/ /index.html; > > } > } > -- Cordialement Olivier Morel tel : 06.62.25.03.77 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tharanga.abeyseela at gmail.com Thu Nov 8 00:04:39 2012 From: tharanga.abeyseela at gmail.com (Tharanga Abeyseela) Date: Thu, 8 Nov 2012 11:04:39 +1100 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: <20121107090109.GP17159@craic.sysops.org> References: <20121107090109.GP17159@craic.sysops.org> Message-ID: Hi Francis, thanks for the reply. 
actually it inside the server block :-) , i managed to resolve the issue using a rewrite rule as follows location /demo/ { auth_basic "Restricted"; auth_basic_user_file /var/www/demo/.htpass; error_page 404 = @redirect; # rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; } location @redirect { rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; } is it possible to enable nginx authentication before proxy_pass to tomcat ? cheers, Tharanga On Wed, Nov 7, 2012 at 8:01 PM, Francis Daly wrote: > On Wed, Nov 07, 2012 at 12:43:40PM +1100, Tharanga Abeyseela wrote: > > Hi there, > >> I need to add basic auth to my home page (index.html) (Served by >> nginx) and other directories resides on tomcat7. is there anyway i >> can add only authentication to index.html . > > "location = /index.html" will only apply to /index.html. Put your > configuration in there. > >> i was using the following >> nginx configuration. >> >> server { >> access_log /var/log/nginx/access.log; >> error_log /var/log/nginx/error.log; >> index index.html; >> root /var/www/; >> server_name xxxxxxxx; >> } > > Are you sure? > > server{}, and then location{} outside it? > >> location / { >> auth_basic "Restricted"; >> auth_basic_user_file /var/www/.htpass; >> } >> >> location /next { >> proxy_pass http://localhost:8080/next; >> proxy_redirect off; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_max_temp_file_size 0; >> } >> >> when i try to add the above config, it asks for the user/pass, but it >> asks for the user/pass when i try to access /next. > > When I try the above config, it does what you say you want. > > (It should challenge for authentication only for any request that does not > begin "/next".) > > What is the output you get for > > curl -i http://xxxxxxxx/ > > and > > curl -i http://xxxxxxxx/next > > ? Are you sure that you are using this server{} block in nginx? 
Are you > sure that the server on localhost:8080 is not redirecting you to /? > >> but i need to add >> authentication only to index.html. problem is using the root >> directory, so all requests will be tunneled through root and prompted >> for a password. but is there any way i can restrict access only to >> index.html, once it authenticated, users will be able to access tomcat >> paths . > > I'm not quite sure what you mean by that last bit. If you require > authentication for /index.html, then you can't expect authentication > credentials to be sent for the tomcat paths. So the user will get to > the tomcat paths whether or not they first authenticated, at least as > far as nginx is concerned. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ru at nginx.com Thu Nov 8 07:34:06 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 8 Nov 2012 11:34:06 +0400 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> Message-ID: <20121108073406.GA23051@lo0.su> On Wed, Nov 07, 2012 at 09:40:49PM +0100, Dave Nolan wrote: > Using nginx 1.2.3-stable on Ubuntu 12.04 I have the following config: > > http { > resolver 172.16.0.23 valid=300s; > resolver_timeout 10s; > > upstream myupstream { > server example.com; > } > > server { > listen 80 default_server; > > location / { > proxy_pass http://myupstream$request_uri; > proxy_pass_request_headers on; > proxy_set_header Host $host; > } > } > } > > As I understand it, without the resolver config, nginx will resolve > example.com's IP once on load and cache it until it stops or fully > reloads the config. > > With the resolver config above, nginx should re-resolve the IP every > 5mins. This is not the way how it works. 
A run-time resolving only takes place if URL specified in "proxy_pass" contains variables, AND the resulting server name doesn't match any of the configured server groups (using the "upstream" directives). This is documented here: http://nginx.org/r/proxy_pass In your case, the server name is always "myupstream" and since it matches "upstream myupstream", no run-time resolving takes place. > However, this is not happening: I can watch tcpdump -n udp port 53 but I > see no re-resolution taking place. > > I'd love to know how to fix this. Any advice appreciated thanks! proxy_pass http://example.com$request_uri; will resolve "example.com" dynamically (assuming of course there's no "upstream example.com" in configuration). From lists at ruby-forum.com Thu Nov 8 10:38:42 2012 From: lists at ruby-forum.com (Dave Nolan) Date: Thu, 08 Nov 2012 11:38:42 +0100 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <20121108073406.GA23051@lo0.su> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> <20121108073406.GA23051@lo0.su> Message-ID: <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> Ruslan Ermilov wrote in post #1083512: > On Wed, Nov 07, 2012 at 09:40:49PM +0100, Dave Nolan wrote: >> server { >> As I understand it, without the resolver config, nginx will resolve >> example.com's IP once on load and cache it until it stops or fully >> reloads the config. >> >> With the resolver config above, nginx should re-resolve the IP every >> 5mins. > > This is not the way how it works. > > A run-time resolving only takes place if URL specified in "proxy_pass" > contains variables, AND the resulting server name doesn't match any of > the configured server groups (using the "upstream" directives). This > is documented here: http://nginx.org/r/proxy_pass > > In your case, the server name is always "myupstream" and since it > matches "upstream myupstream", no run-time resolving takes place. What's the reason behind this? 
It feels like that, even if proxy_pass defers to the server group, resolver config should be respected for servers defined within the group. > >> However, this is not happening: I can watch tcpdump -n udp port 53 but I >> see no re-resolution taking place. >> >> I'd love to know how to fix this. Any advice appreciated thanks! > > proxy_pass http://example.com$request_uri; > > will resolve "example.com" dynamically (assuming of course there's > no "upstream example.com" in configuration). Thanks very much for your help. If I switch to using example.com directly in the proxy_pass, I lose the flexibility of server groups. Is there any way of dynamically re-resolving servers in upstream server group? -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Nov 8 11:07:04 2012 From: nginx-forum at nginx.us (guilhem) Date: Thu, 08 Nov 2012 06:07:04 -0500 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> References: <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> Message-ID: <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> Dave Nolan Wrote: > >> However, this is not happening: I can watch tcpdump -n udp port 53 > but I > >> see no re-resolution taking place. > >> > >> I'd love to know how to fix this. Any advice appreciated thanks! > > > > proxy_pass http://example.com$request_uri; > > > > will resolve "example.com" dynamically (assuming of course there's > > no "upstream example.com" in configuration). > > Thanks very much for your help. > > If I switch to using example.com directly in the proxy_pass, I lose > the > flexibility of server groups. Is there any way of dynamically > re-resolving servers in upstream server group? Hi, I can add that I lost my production servers last night because of this behavior. 
* I use dynamic DNS names for flexibility on almost all my servers * I put one backend server into maintenance, so the name was removed from DNS (after a TTL) * corosync manages my nginx servers... and can restart them. You can easily understand what happened: corosync detected a problem, failed over to another server, and restarted nginx, but nginx couldn't resolve a backend host in an upstream, so it failed to start (with "[emerg] host not found in upstream"). All my nginx servers have been down because of this. Just like you, I can't remove my server groups but I want the flexibility of DNS resolving (not failing at start, and honouring TTLs). -- Guilhem Lettron Youscribe - www.youscribe.com Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232585,232596#msg-232596 From sb at waeme.net Thu Nov 8 12:05:48 2012 From: sb at waeme.net (Sergey Budnevitch) Date: Thu, 8 Nov 2012 16:05:48 +0400 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> References: <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55ACB974-B712-4C7E-BEB4-2711B3A58B4B@waeme.net> On 8 Nov 2012, at 15:07, guilhem wrote: > > All my nginx servers have been down because of this. > > Just like you, I can't remove my server groups but I want the flexibility of > DNS resolving (Not failing at start and TTL). If you want the flexibility of DNS resolving and a safeguard against DNS failure, you should either add the hostnames to /etc/hosts or run a local named/NSD/etc with appropriate slave zones. From oliviermo75 at gmail.com Thu Nov 8 13:37:50 2012 From: oliviermo75 at gmail.com (Olivier Morel) Date: Thu, 8 Nov 2012 14:37:50 +0100 Subject: i have a issue with virtual host Message-ID: Hi everybody, I am trying to create a virtual host (server block) on nginx, and I have two issues.
When i m going to* http://localhost* ,i see my website But when i try to go to the virtual host (server block) * http://localhost/websvn* I get an error because he try to find a webpage with the name *websvn* on my first website. I don't understand why ?? */conf/nginx.conf* http { > include mime.types; > passenger_root > /usr/local/centOs/rvm/gems/ruby-1.9.3-p125/gems/passenger-3.0.11; # it's > for Ruby on Rails > passenger_ruby > /usr/local/centOs/rvm/bin/ruby-1.9.3-p125; > # it's for Ruby on Rails > passenger_max_pool_size 10; > # it's for Ruby on Rails > > default_type application/octet-stream; > > sendfile on; > ## TCP options > tcp_nopush on; > tcp_nodelay on; > ## Timeout > keepalive_timeout 65; > > types_hash_max_size 2048; > server_names_hash_bucket_size 128; > proxy_cache_path /mnt/donner/nginx/cache levels=1:2 keys_zone=one:10m; > gzip on; > server_tokens off; > > # =======add server block ====== ## > > include /usr/local/centOs/nginx/vhosts-available/*; > include /usr/local/centOs/nginx/vhosts-available/*.conf; > > > server { > listen 80; > #server_name localhost; > passenger_enabled on; > passenger_use_global_queue on; > > error_log /home/logs/nginx/error.log; > access_log /home/logs/nginx/access-global.log; > } } > */vhost-enable/srv-websvn.conf* server { listen 80; server_name srv-websvn; # rewrite ^/(.*) http://localhost/$1 permanent; error_log /home/logs/websvn/error.log; access_log /home/logs/websvn/access.log; } server { listen 80; server_name srv-websvn; error_log /home/logs/websvn/error.log; access_log /home/logs/websvn/access.log; location / { root /home/sites_web/websvn/; index index.html; #try_files $uri $uri/ /index.html; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu Nov 8 13:40:24 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Nov 2012 13:40:24 +0000 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: References: <20121107090109.GP17159@craic.sysops.org> Message-ID: <20121108134024.GT17159@craic.sysops.org> On Thu, Nov 08, 2012 at 11:04:39AM +1100, Tharanga Abeyseela wrote: Hi there, > thanks for the reply. actually it inside the server block :-) , Good to hear. > i managed to resolve the issue using a rewrite rule as follows > > location /demo/ { > auth_basic "Restricted"; > auth_basic_user_file /var/www/demo/.htpass; > error_page 404 = @redirect; > # rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; > } > > location @redirect { > rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; > } That seems very complicated. I'm a bit unclear on what issue this configuration resolves. It looks to me like it will (a) insist that anyone accessing things below /demo/ are challenged for credentials; and (b) allow anyone access to anything other than /demo/ without providing credentials. Can you describe what it is that you want, and what it is that you do not want? I'm not sure whether the x.x.x.x above is "this server" or "some other server"; and I'm not sure what happened to "/next" from the original configuration. > is it possible to enable nginx authentication before proxy_pass to tomcat ? Yes. Put the "auth_basic" in the same location as the "proxy_pass". If that doesn't do what you want, then I'm afraid that I don't understand what it is that you want. 
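Francis's suggestion, putting the auth_basic directives in the same location as the proxy_pass, would look roughly like this. A sketch based on the locations quoted in this thread, not a tested configuration:

```nginx
# nginx challenges for credentials first; only authenticated
# requests are proxied through to tomcat.
location /next {
    auth_basic "Restricted";
    auth_basic_user_file /var/www/.htpass;
    proxy_pass http://localhost:8080/next;
}
```

With this layout tomcat never sees unauthenticated requests for /next, so no extra session or auth setup is needed on the tomcat side.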
f -- Francis Daly francis at daoine.org From i.hailperin at heinlein-support.de Thu Nov 8 13:54:16 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Thu, 08 Nov 2012 14:54:16 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509A2EA2.60704@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> Message-ID: <509BB988.40607@heinlein-support.de> So I also tried Version 1.2.1 form debian backports, which produced the same error. I tried on opensuse 12.2, which worked fine: nginx version: nginx/1.0.15 built by gcc 4.7.1 20120713 [gcc-4_7-branch revision 189457] (SUSE Linux) TLS SNI support enabled configure arguments: --prefix=/usr/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/lib/nginx/tmp/ --http-proxy-temp-path=/var/lib/nginx/proxy/ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi/ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi/ --http-scgi-temp-path=/var/lib/nginx/scgi/ --user=nginx --group=nginx --with-rtsig_module --with-select_module --with-poll_module --with-ipv6 --with-file-aio --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-perl=/usr/bin/perl --with-mail --with-mail_ssl_module --with-pcre --with-libatomic --add-module=passenger/ext/nginx --with-md5=/usr --with-sha1=/usr --with-cc-opt='-fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fstack-protector' So could it 
be that this is an issue with the 1.2 Series? Isaac On 11/07/2012 10:49 AM, Isaac Hailperin wrote: > Hi, > > after restarting nginx I find > > 2012/11/07 10:24:02 [alert] 23635#0: 512 worker_connections are not enough > 2012/11/07 10:24:02 [alert] 23636#0: 512 worker_connections are not enough > 2012/11/07 10:24:04 [alert] 23618#0: cache manager process 23635 exited > with fatal code 2 and cannot be respawned > > in my logs. It seems like this error came up after adding more then 2500 > virtual hosts, each consisting of two server blocks, one for http, and > one for https. > > Now I don't quite understand these messages. In my nginx.conf I have > user www-data; > worker_processes 16; > pid /var/run/nginx.pid; > worker_rlimit_nofile 65000; > > events { > worker_connections 2000; > use epoll; > # multi_accept on; > } > > so that should be enough worker_connections. Why am I still getting this > message? > > For the other message regarding the cache manger, I found this > http://www.ruby-forum.com/topic/519162 > thread, where Maxim Dounin suggests that it results from the kernel not > supporting eventfd(). But as far as I understand this is only an issue > with kernels bevore 2.6.18. 
I use 2.6.32 and my kernel config clearly > states > CONFIG_EVENTFD=y > > Here is the nginx version and configure options: > root at debian:~# nginx -V > nginx version: nginx/1.2.4 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module > --with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module > --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 > > Any ideas? > > Isaac > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From igor at sysoev.ru Thu Nov 8 14:09:20 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 8 Nov 2012 18:09:20 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509A2EA2.60704@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> Message-ID: On Nov 7, 2012, at 13:49 , Isaac Hailperin wrote: > Hi, > > after restarting nginx I find > > 2012/11/07 10:24:02 [alert] 23635#0: 512 worker_connections are not enough > 2012/11/07 10:24:02 [alert] 23636#0: 512 worker_connections are not enough > 2012/11/07 10:24:04 [alert] 23618#0: cache manager process 23635 exited with fatal code 2 and cannot be respawned > > in my logs. 
It seems like this error came up after adding more then 2500 virtual hosts, each consisting of two server blocks, one for http, and one for https. > > Now I don't quite understand these messages. In my nginx.conf I have > user www-data; > worker_processes 16; > pid /var/run/nginx.pid; > worker_rlimit_nofile 65000; > > events { > worker_connections 2000; > use epoll; > # multi_accept on; > } > > so that should be enough worker_connections. Why am I still getting this message? > > For the other message regarding the cache manger, I found this > http://www.ruby-forum.com/topic/519162 > thread, where Maxim Dounin suggests that it results from the kernel not supporting eventfd(). But as far as I understand this is only an issue with kernels bevore 2.6.18. I use 2.6.32 and my kernel config clearly states > CONFIG_EVENTFD=y These message have no relation to eventfd(). A process with pid of 23636 is probably cache loader. Both cache manager and loader do not use configured worker_connection number since they do not process connections at all. However, they need one connection slot to communicate with master process. 512 connections may be taken by listen directives if they use different addreses, or by resolvers if you defined a resolver in every virtual host. A quick workaround is to define just a single resovler at http level. -- Igor Sysoev http://nginx.com/support.html From i.hailperin at heinlein-support.de Thu Nov 8 15:13:12 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Thu, 08 Nov 2012 16:13:12 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: References: <509A2EA2.60704@heinlein-support.de> Message-ID: <509BCC08.7020505@heinlein-support.de> > > These message have no relation to eventfd(). > > A process with pid of 23636 is probably cache loader. Both cache manager and loader > do not use configured worker_connection number since they do not process connections > at all. 
However, they need one connection slot to communicate with master process. > > 512 connections may be taken by listen directives if they use different addreses, > or by resolvers if you defined a resolver in every virtual host. > A quick workaround is to define just a single resovler at http level. Hm, there were no resolvers defined in the virtual hosts. But I tried to add resolver 127.0.0.1; to my https section, but that did not help. Also, if resolvers would be the problem, it should also happen with other nginx builds, like the one I tested on opensuse, see my reply earlier today. Here is my config, including one vhost: user www-data; worker_processes 16; pid /var/run/nginx.pid; worker_rlimit_nofile 65000; events { use epoll; worker_connections 2000; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; #error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; # Because we have a lot of server_names, we need to increase # server_names_hash_bucket_size # (http://nginx.org/en/docs/http/server_names.html) server_names_hash_max_size 32000; server_names_hash_bucket_size 1024; # raise default values for php client_max_body_size 20M; client_body_buffer_size 128k; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /var/www3/acme_cache/load_balancer/upstream.conf; include /etc/nginx/sites-enabled/*; index index.html index.htm ; ## # 
Proxy Settings ## # include hostname in request to backend proxy_set_header Host $host; # only honor internal Caching policies proxy_ignore_headers X-Accel-Expires Expires Cache-Control; # hopefully fixes an issue with cache manager dying resolver 127.0.0.1; } Then in /etc/nginx/sites-enabled/ there is eg server { server_name www.acme.eu acmeblabla.eu; listen 45100; ssl on; ssl_certificate /etc/nginx/ssl/acme_eu.crt; ssl_certificate_key /etc/nginx/ssl/acme_eu.key; access_log /var/log/www/m77/acmesystems_de/log/access.log; error_log /var/log/nginx/vhost_error.log; proxy_cache acme-cache; proxy_cache_key "$scheme$host$proxy_host$uri$is_args$args"; proxy_cache_valid 200 302 60m; proxy_cache_valid 404 10m; location ~* \.(jpg|gif|png|css|js) { try_files $uri @proxy; } location @proxy { proxy_pass https://backend-www.acme.eu_p45100; } location / { proxy_pass https://backend-www.acme.eu_p45100; } } upstream backend-www.acme.eu_p45100 { server 10.1.1.25:45100; server 10.1.1.26:45100; server 10.1.1.27:45100; server 10.1.1.28:45100; server 10.1.1.15:45100; server 10.1.1.18:45100; server 10.1.1.20:45100; server 10.1.1.36:45100; server 10.1.1.39:45100; server 10.1.1.40:45100; server 10.1.1.42:45100; server 10.1.1.21:45100; server 10.1.1.22:45100; server 10.1.1.23:45100; server 10.1.1.29:45100; server 10.1.1.50:45100; server 10.1.1.43:45100; server 10.1.1.45:45100; server 10.1.1.46:45100; server 10.1.1.19:45100; server 10.1.1.10:45100; } Isaac From i.hailperin at heinlein-support.de Thu Nov 8 15:54:05 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Thu, 08 Nov 2012 16:54:05 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509BB988.40607@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BB988.40607@heinlein-support.de> Message-ID: <509BD59D.1020806@heinlein-support.de> > So could it be that this is an issue with the 1.2 Series? 
Ok, this is not the case: I tried a 1.0.15 build by hand on Debian, and have the same issue. Isaac From edho at myconan.net Thu Nov 8 15:58:44 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 8 Nov 2012 22:58:44 +0700 Subject: i have a issue with virtual host In-Reply-To: References: Message-ID: On Thu, Nov 8, 2012 at 8:37 PM, Olivier Morel wrote: > > hi everybody > I try to create a virtual host (server block) on nginx, and I have two issues. > > When I go to http://localhost, I see my website. > But when I try to go to the virtual host (server block) http://localhost/websvn > I get an error because it tries to find a webpage with the name websvn on my first website. > I don't understand why?? > I think you're confusing a virtual host with a subdirectory. From tharanga.abeyseela at gmail.com Thu Nov 8 23:06:57 2012 From: tharanga.abeyseela at gmail.com (Tharanga Abeyseela) Date: Fri, 9 Nov 2012 10:06:57 +1100 Subject: nginx auth_basic with proxy pass to tomcat In-Reply-To: <20121108134024.GT17159@craic.sysops.org> References: <20121107090109.GP17159@craic.sysops.org> <20121108134024.GT17159@craic.sysops.org> Message-ID: Hi, when the user enters http://x.x.x.x/ it gives a Forbidden message (I moved index.html to the demo directory). I'm giving the URL to users as http://x.x.x.x/demo/ - so this asks for user/pass, which is what I wanted. After entering the above URL the user lands on my index.html - it has all the tomcat paths to connect (just hyperlinks). x.x.x.x is the same server, not a different server - I'm not redirecting to a different server; everything is done on the same server. I agree the rewrite is complicated for a small piece of authentication handling, but other methods didn't work for me :) thanks for your help and suggestions :) cheers, Tharanga. Now the issue is that when the user enters http://x.x.x.x/next it bypasses the nginx auth and goes to the tomcat path without any authentication. Maybe I need to configure that in web.xml.
I prefer to configure nginx auth for all tomcat and nginx paths. Actually tomcat is the front-end server that handles/redirects client requests to the appropriate server. On Fri, Nov 9, 2012 at 12:40 AM, Francis Daly wrote: > On Thu, Nov 08, 2012 at 11:04:39AM +1100, Tharanga Abeyseela wrote: > > Hi there, >> thanks for the reply. actually it inside the server block :-) , > > Good to hear. >> i managed to resolve the issue using a rewrite rule as follows >> >> location /demo/ { >> auth_basic "Restricted"; >> auth_basic_user_file /var/www/demo/.htpass; >> error_page 404 = @redirect; >> # rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; >> } >> >> location @redirect { >> rewrite ^/demo/(.*)$ http://x.x.x.x/$1 permanent; >> } > > That seems very complicated. > > I'm a bit unclear on what issue this configuration resolves. It looks > to me like it will (a) insist that anyone accessing things below /demo/ > are challenged for credentials; and (b) allow anyone access to anything > other than /demo/ without providing credentials. > > Can you describe what it is that you want, and what it is that you do > not want? I'm not sure whether the x.x.x.x above is "this server" or > "some other server"; and I'm not sure what happened to "/next" from the > original configuration. > >> is it possible to enable nginx authentication before proxy_pass to tomcat ? > > Yes. Put the "auth_basic" in the same location as the "proxy_pass". > > If that doesn't do what you want, then I'm afraid that I don't understand > what it is that you want.
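[For readers following the thread: Francis's suggestion, putting "auth_basic" in the same location as "proxy_pass", might look like the sketch below. The upstream address and the /next path handling are illustrative placeholders, not taken from Tharanga's actual setup.]

```nginx
# Sketch only: authentication and proxying in one location, so the
# credentials are checked before the request is passed to Tomcat.
location /demo/ {
    auth_basic           "Restricted";
    auth_basic_user_file /var/www/demo/.htpass;
    proxy_pass           http://127.0.0.1:8080;   # placeholder backend
}

# Repeating auth_basic here (or setting it once at server level)
# also covers other proxied paths such as /next.
location /next {
    auth_basic           "Restricted";
    auth_basic_user_file /var/www/demo/.htpass;
    proxy_pass           http://127.0.0.1:8080;   # placeholder backend
}
```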
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From oliviermo75 at gmail.com Thu Nov 8 23:19:09 2012 From: oliviermo75 at gmail.com (Olivier Morel) Date: Fri, 9 Nov 2012 00:19:09 +0100 Subject: create a virtual-host (server block) Message-ID: hello I try to put a virtual host (server block) on my website, i have some issue. I have read the nginx tutorial for creat server block but after a lot of time i can't get my server block and i dont understand why. Do you have something to configure on /etc/hosts or resolv.conf ? this is my server block for my website. */vhosts-anable/test.conf* *server { listen 80; server_name sd-32587.dedibox.fr;# localhost; # == Document ROOT root /home/sites_web/www.test.com/public; rails_env development; passenger_enabled on; } * And this is my conf for the virtual-host. */vhosts-anable/pipou.conf* *server { listen 80 ; server_name websvn.sd-32587.dedibox.fr; #websvn.localhost *.localhost; # rewrite ^/(.*) http://localhost/$1 permanent; # server_name_in_redirect off; # rewrite ^ http://websvn.sd-32587.dedibox.fr$1 permanent; root /home/sites_web/websvn; index index.html; error_log /home/logs/websvn/error.log; access_log /home/logs/websvn/access.log; }* when i m going to *websvn.sd-32587.dedibox.fr* i have an error . Could you help me please thk a lot -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Nov 8 23:36:04 2012 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Nov 2012 23:36:04 +0000 Subject: create a virtual-host (server block) In-Reply-To: References: Message-ID: <20121108233604.GA24351@craic.sysops.org> On Fri, Nov 09, 2012 at 12:19:09AM +0100, Olivier Morel wrote: Hi there, > I try to put a virtual host (server block) on my website, i have some issue. 
> > I have read the nginx tutorial for creat server block but after a lot of > time i can't get my server block and i dont understand why. What do you do; what do you see; what do you expect to see? Be specific. > Do you have something to configure on /etc/hosts or resolv.conf ? http://nginx.org/en/docs/http/request_processing.html When your client makes a http request of nginx, nginx must decide which of your server{} blocks to use to process the request. For your config, it looks like nginx does this purely based on the http Host: header. > *server { > listen 80; > server_name sd-32587.dedibox.fr;# localhost; If you do curl -H Host:sd-32587.dedibox.fr -i http://your-server-name-or-ip/ you should get content from here. If sd-32587.dedibox.fr resolves to your server ip, then that is the same as curl -i http://sd-32587.dedibox.fr/ and the same as using any browser to fetch it. > *server { > listen 80 ; > server_name websvn.sd-32587.dedibox.fr; #websvn.localhost Same thing here: curl -H Host:websvn.sd-32587.dedibox.fr -i http://your-server-name-or-ip/ or curl -i http://websvn.sd-32587.dedibox.fr/ > when i m going to *websvn.sd-32587.dedibox.fr* i have an error . What error? I suggest using "curl" for testing, as above. It tends not to hide the real error from you. My guess is "unable to resolve the hostname". > Could you help me please If the problem is that your client can't resolve the hostname, you must make your client be able to resolve the hostname. Either put it in dns (so everyone who uses that dns service can see it); or put it in the local "hosts" file that your browser uses (so that only you can easily see it). Good luck, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Nov 9 03:08:46 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Nov 2012 07:08:46 +0400 Subject: How to view Nginx wiki history? In-Reply-To: References: <201211041757.45381.ne@vbart.ru> Message-ID: <20121109030846.GW40452@mdounin.ru> Hello! 
On Sun, Nov 04, 2012 at 02:07:32PM +0000, Jonathan Matthews wrote: > On 4 November 2012 13:57, Valentin V. Bartenev wrote: > > On Saturday 03 November 2012 10:08:48 howard chen wrote: > >> Re: How to view Nginx wiki history? > > > > You should register an account to view the history: > > http://wiki.nginx.org/index.php?title=Special:UserLogin > > That really sucks. Can it be configured differently to allow > non-authenticated users access to page histories? Non-authenticated users actually have access, but there is no link provided. E.g. http://wiki.nginx.org/index.php?title=Main&action=history > As we're on the topic of historic documentation across a versioned > piece of software like nginx, I'd like to put in a plea here for > something to be worked out to expose the mapping of config directives > to the version in which they were introduced, or when their behaviours > changed - in a *structured* format. > > IMHO that's a really major documentation flaw in both the wiki and the > official documentation, as the inline notes about version > applicability are both sparse and not formalised/structured at all > usefully. In official documentation we've introduced formal tag, e.g. http://trac.nginx.org/nginx/browser/nginx_org/xml/en/docs/http/ngx_http_ssl_module.xml#L219 Though in more or less nontrivial cases like new directive parameters and so on - inline notes are unavoidable. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Nov 9 03:12:12 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Nov 2012 07:12:12 +0400 Subject: No upstream_response_time in access log file In-Reply-To: References: Message-ID: <20121109031212.GX40452@mdounin.ru> Hello! On Sat, Nov 03, 2012 at 09:13:30AM -0400, jonefee wrote: > i use nginx as reverse proxy server , and php-fpm as fast cgi upstream. 
> i found some access in nginx access log file doesen't has a > "$upstream_response_time" value but a "-" character instead ,,,, and also > has "$response_time" value. > why ? > > for example: > > 71.213.141.240 - 31/Oct/2012:13:09:34 +0800 POST > /php/xyz/iface/?key=be7fc0cdf0afbfedff1e09ec6443823a&device_id=351870052329449&network=1&ua=LT18i&os=2.3.4&version=3.7&category_id=2&f_ps=10&s=5&ps=30&pn=1&pcat=2&up_tm=1351655451272 > HTTP/1.1 499 0($body_bytes_sent) - Dalvik/1.4.0 (Linux; U; Android 2.3.4; > LT18i Build/4.0.2.A.0.62) 21($content_length) 2.448($request_time) > -($upstream_response_time) - - - > > the response_time is 2.448,,,but no upstream_response_time,,,, the http > response code is 499,,,,,dosen't mean that nginx did not finish the > "connection" and php-fpm even has no chance to "see" the access ? The connection was closed by a client before nginx got anything from upstream, hence nginx closed upstream connection as well. The 499 status code logged to indicate this, and there is no $upstream_response_time for the same reason. -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Nov 9 03:34:01 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 9 Nov 2012 07:34:01 +0400 Subject: fastcgi_intercept_errors & error_page In-Reply-To: References: Message-ID: <20121109033401.GY40452@mdounin.ru> Hello! On Tue, Nov 06, 2012 at 01:12:05AM +0800, howard chen wrote: > According to the doc: > http://wiki.nginx.org/HttpFastcgiModule#fastcgi_intercept_errors > > Note: You need to explicitly define the error_page handler for this for it > to be useful. As Igor says, "nginx does not intercept an error if there is > no custom handler for it it does not show its default pages. This allows to > intercept some errors, while passing others as are." > > Actually I still can't understand the exact meaning, so I have done some > experimentd. > > 1. 
turn on fastcgi_intercept_errors, > - in the backend php/fcgi send 404 header, > - set the error_page (php) > > Result: nginx use the default error template > > 2. turn off fastcgi_intercept_errors, > - in the backend php/fcgi send 404 header > - set the error_page (php) > > Result: now the custom error_page (php) is being used. It's unclear how you did your experiments, but results are all wrong - most likely, you missed something. E.g. in (1) I would assume request to the error_page resulted in another 404, which in turn resulted in default error page returned. > So it seems to me that* fastcgi_intercept_errors should be off and set > the error_page *if I need to specify custom error handler, is > this interoperation correct? No. The fastcgi_intercept_errors directive is off by default, and this means that nginx won't try to do anything with responses returned by backends as long as they are valid. If you want nginx to change response returned by a backend, you have to switch on fastcgi_intercept_errors and configure appropriate error_page. Example configuration: error_page 404 /errors/404.html; location / { fastcgi_pass 127.0.0.1:9000; fastcgi_intercept_errors on; ... } location /errors/ { # static } With the above config if fastcgi backend returns 404, the /errors/404.html will be returned instead (assuming it exists). -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 9 03:46:50 2012 From: nginx-forum at nginx.us (arnoldguo) Date: Thu, 08 Nov 2012 22:46:50 -0500 Subject: How many connections can nginx handle? Message-ID: I have a server running proxy_pass the http request(mainly GET,html,jpg etc.) to backend server(20+ server running nginx cache). this server with 32 core CPU,64G RAM,2X 10G NIC,RHEL 6.3,backend server with 1000M NIC. 
When the load rises to 300k connections and 3Gbps of network traffic, new connections cannot be serviced - clients cannot connect to nginx's port - but the system load is low (~1.00) and there is plenty of free memory. There are no errors in /var/log/messages; there are some nginx error-log entries like "upstream timed out to backend server", but the backend's service is normal and its load is not high... I have tuned the tcp config so it can handle 2000K connections; it seems that nginx cannot handle so many connections? nginx config: worker_processes 32; use worker_cpu_affinity; keepalive 32 to backend servers, proxy_buffering on or off has the same result... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232623,232623#msg-232623 From ryanchan404 at gmail.com Fri Nov 9 04:00:23 2012 From: ryanchan404 at gmail.com (Ryan Chan) Date: Fri, 9 Nov 2012 12:00:23 +0800 Subject: Can I really assume location match start with / ? Message-ID: From the doc, it says: ========== location / { # matches any query, since all queries begin with /, but regular ========== But I am wondering, if it always starts with /, why not use ^ instead? e.g. location ^/ { location ^/documents/ { From edho at myconan.net Fri Nov 9 04:01:45 2012 From: edho at myconan.net (Edho Arief) Date: Fri, 9 Nov 2012 11:01:45 +0700 Subject: Can I really assume location match start with / ? In-Reply-To: References: Message-ID: On Fri, Nov 9, 2012 at 11:00 AM, Ryan Chan wrote: > From the doc, it said.. > > ========== > > location / { > # matches any query, since all queries begin with /, but regular > ========== > > > But I am wondering, if it always start with /, why not use ^ instead? > That'd require regexp.
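[To expand on Edho's answer: a plain location prefix is compared literally against the start of the request URI, so no anchor is needed or allowed; "^" only has meaning inside a regex location, which is declared with "~" (or "~*" for case-insensitive matching) and is a different, more expensive match type. A minimal illustration:]

```nginx
# Prefix location: a cheap literal comparison from the start of
# the URI; "^" would be taken as part of the path, not an anchor.
location /documents/ {
    # ...
}

# Regex location: "~" marks it as a regular-expression match,
# and only here is "^" meaningful.
location ~ ^/documents/ {
    # ...
}
```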
From oliviermo75 at gmail.com Fri Nov 9 09:00:06 2012 From: oliviermo75 at gmail.com (Olivier Morel) Date: Fri, 9 Nov 2012 10:00:06 +0100 Subject: create a virtual-host (server block) In-Reply-To: <20121108233604.GA24351@craic.sysops.org> References: <20121108233604.GA24351@craic.sysops.org> Message-ID: > > If the problem is that your client can't resolve the hostname, you must > make your client be able to resolve the hostname. > > Either put it in dns (so everyone who uses that dns service can see it); > or put it in the local "hosts" file that your browser uses (so that only > you can easily see it). > THK francis !! i have just inform my domaine name in the host server, and not on my PC. lol.. it's working now [?][?] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 328.png Type: image/png Size: 569 bytes Desc: not available URL: From lists at ruby-forum.com Fri Nov 9 09:14:40 2012 From: lists at ruby-forum.com (Dave Nolan) Date: Fri, 09 Nov 2012 10:14:40 +0100 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <55ACB974-B712-4C7E-BEB4-2711B3A58B4B@waeme.net> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> <20121108073406.GA23051@lo0.su> <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> <55ACB974-B712-4C7E-BEB4-2711B3A58B4B@waeme.net> Message-ID: <851e15012743ca75791645403c68b793@ruby-forum.com> Sergey Budnevitch wrote in post #1083548: > On 8 Nov2012, at 15:07 , guilhem wrote: > >> >> All my nginx servers have been down because of this. >> >> Just like you, I can't remove my server groups but I want the flexibility of >> DNS resolving (Not failing at start and TTL). 
> > > If you want the flexibility of DNS resolving and safeguard yourself > against > DNS failure you should either add hostnames to /etc/hosts or start > local named/NSD/etc with appropriate slave zones. Sure, that kind of flexibility needs more tools than just nginx. But actually it's a question about consistency, right? Even if proxy_pass defers to the server group, resolver config should be respected for servers defined within the group, just like for everything else. I'm just interested in why it's not, and whether there are plans to change it. We might be interested in sponsoring this work. -- Posted via http://www.ruby-forum.com/. From andrew at nginx.com Fri Nov 9 09:23:38 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 9 Nov 2012 13:23:38 +0400 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <851e15012743ca75791645403c68b793@ruby-forum.com> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> <20121108073406.GA23051@lo0.su> <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> <55ACB974-B712-4C7E-BEB4-2711B3A58B4B@waeme.net> <851e15012743ca75791645403c68b793@ruby-forum.com> Message-ID: <84F62AA0-260E-488B-890C-5376F443FA54@nginx.com> Dave, On Nov 9, 2012, at 1:14 PM, Dave Nolan wrote: > Sergey Budnevitch wrote in post #1083548: >> On 8 Nov2012, at 15:07 , guilhem wrote: >> >>> >>> All my nginx servers have been down because of this. >>> >>> Just like you, I can't remove my server groups but I want the flexibility of >>> DNS resolving (Not failing at start and TTL). >> >> >> If you want the flexibility of DNS resolving and safeguard yourself >> against >> DNS failure you should either add hostnames to /etc/hosts or start >> local named/NSD/etc with appropriate slave zones. > > Sure, that kind of flexibility needs more tools than just nginx. > > But actually it's a question about consistency, right? 
> > Even if proxy_pass defers to the server group, resolver config should be > respected for servers defined within the group, just like for everything > else. I'm just interested in why it's not, and whether there are plans > to change it. We might be interested in sponsoring this work. That's the current (and maybe already "legacy") design of nginx upstream configuration. Unfortunately there's no quick solution but we actually appreciate your feedback a lot and will try to incorporate a better upstream design in the future releases. From andrew at nginx.com Fri Nov 9 09:24:31 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Fri, 9 Nov 2012 13:24:31 +0400 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <851e15012743ca75791645403c68b793@ruby-forum.com> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> <20121108073406.GA23051@lo0.su> <2e1116fbc0bbef620bdc0b3cbfc9d3cf@ruby-forum.com> <0aeba29f8d54cc43d5df2e967be7a677.NginxMailingListEnglish@forum.nginx.org> <55ACB974-B712-4C7E-BEB4-2711B3A58B4B@waeme.net> <851e15012743ca75791645403c68b793@ruby-forum.com> Message-ID: <5E567FE8-0E7A-45A0-8340-AD67152658BE@nginx.com> On Nov 9, 2012, at 1:14 PM, Dave Nolan wrote: > to change it. We might be interested in sponsoring this work. Can we get this conversation in nginx-inquiries at nginx dot com please? From maxim at nginx.com Fri Nov 9 09:47:10 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 09 Nov 2012 13:47:10 +0400 Subject: How many connections can nginx handle? In-Reply-To: References: Message-ID: <509CD11E.4050704@nginx.com> On 11/9/12 7:46 AM, arnoldguo wrote: > I have a server running proxy_pass the http request(mainly GET,html,jpg > etc.) to backend server(20+ server running nginx cache). > this server with 32 core CPU,64G RAM,2X 10G NIC,RHEL 6.3,backend server with > 1000M NIC. 
> When the load raise to 300k connections,3Gbps network traffic,new connection > can not be serviced,cannot connect to nginx's port,but the system load is > low(~1.00),and has many free memory, > no errorlog in /var/log/message,has some nginx errorlog like"upstream timed > out to backend server", > but the backend's service is normal,load is not hight... > i have tune tcp config,can handle 2000K conn,it seems that nginx cannot > handle so much connection? > How did you do that, and how did you check that the OS can handle 2M tcp connections? > nginx config: > worker_processes 32;use worker_cpu_affinity; > keepalive 32 to backend servers, > proxy_buffering on or off has the same result... > It's almost always about tuning various OS limits, not nginx. As for nginx, this link is a good start: http://nginx.org/en/docs/ngx_core_module.html#worker_connections -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 9 13:11:40 2012 From: nginx-forum at nginx.us (piyushbj) Date: Fri, 09 Nov 2012 08:11:40 -0500 Subject: nginx not showing client ip Message-ID: Dear All, I have set up an nginx reverse proxy with an IIS webserver. Client ---> Nginx RP ---> IIS 7 But I am unable to get the client IP in the IIS server's log; I am getting the nginx server's IP instead. Can anybody help me to achieve this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232641,232641#msg-232641 From i.hailperin at heinlein-support.de Fri Nov 9 13:15:07 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Fri, 09 Nov 2012 14:15:07 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509BCC08.7020505@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> Message-ID: <509D01DB.5060607@heinlein-support.de> Refining my observations: It's not an issue of version or OS ... those were wrong observations on my side. But: Of the approx.
5000 vhost, there are about 1000 who do ssl, each on a different (high) port. So without the ssl vhosts, I have about 1000 open files for nginx (lsof |grep nginx|wc) And nginx runs fine. With the ssl vhosts, I have about 17000 open files. And I get the errors. Does that ring a bell somewhere? Also, 17000 is about 16 (amount of worker processes) * 1000 (num ssl hosts) + 1000 (nofiles without ssl). I also wonder where the 512 worker_connections from the error message come from. There is no such number in my config. Is it hardcoded somewhere? Isaac On 11/08/2012 04:13 PM, Isaac Hailperin wrote: > >> >> These message have no relation to eventfd(). >> >> A process with pid of 23636 is probably cache loader. Both cache >> manager and loader >> do not use configured worker_connection number since they do not >> process connections >> at all. However, they need one connection slot to communicate with >> master process. >> >> 512 connections may be taken by listen directives if they use >> different addreses, >> or by resolvers if you defined a resolver in every virtual host. >> A quick workaround is to define just a single resovler at http level. > Hm, there were no resolvers defined in the virtual hosts. But I tried to > add > resolver 127.0.0.1; > to my https section, but that did not help. > > Also, if resolvers would be the problem, it should also happen with > other nginx builds, like the one I tested on opensuse, see my reply > earlier today. 
> > Here is my config, including one vhost: > > user www-data; > worker_processes 16; > pid /var/run/nginx.pid; > worker_rlimit_nofile 65000; > > events { > use epoll; > worker_connections 2000; > # multi_accept on; > } > > http { > > ## > # Basic Settings > ## > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > # server_tokens off; > > # server_names_hash_bucket_size 64; > # server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # Logging Settings > ## > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log debug; > #error_log /var/log/nginx/error.log; > > ## > # Gzip Settings > ## > > gzip on; > gzip_disable "msie6"; > > # gzip_vary on; > # gzip_proxied any; > # gzip_comp_level 6; > # gzip_buffers 16 8k; > # gzip_http_version 1.1; > # gzip_types text/plain text/css application/json > application/x-javascript text/xml application/xml application/xml+rss > text/javascript; > # Because we have a lot of server_names, we need to increase > # server_names_hash_bucket_size > # (http://nginx.org/en/docs/http/server_names.html) > server_names_hash_max_size 32000; > server_names_hash_bucket_size 1024; > > # raise default values for php > client_max_body_size 20M; > client_body_buffer_size 128k; > > ## > # Virtual Host Configs > ## > include /etc/nginx/conf.d/*.conf; > include /var/www3/acme_cache/load_balancer/upstream.conf; > include /etc/nginx/sites-enabled/*; > > index index.html index.htm ; > > ## > # Proxy Settings > ## > > # include hostname in request to backend > proxy_set_header Host $host; > > # only honor internal Caching policies > proxy_ignore_headers X-Accel-Expires Expires Cache-Control; > > # hopefully fixes an issue with cache manager dying > resolver 127.0.0.1; > } > > > Then in /etc/nginx/sites-enabled/ there is eg > server > { > server_name www.acme.eu acmeblabla.eu; > listen 45100; > ssl on; > ssl_certificate 
/etc/nginx/ssl/acme_eu.crt; > ssl_certificate_key /etc/nginx/ssl/acme_eu.key; > access_log /var/log/www/m77/acmesystems_de/log/access.log; > error_log /var/log/nginx/vhost_error.log; > proxy_cache acme-cache; > proxy_cache_key "$scheme$host$proxy_host$uri$is_args$args"; > proxy_cache_valid 200 302 60m; > proxy_cache_valid 404 10m; > > location ~* \.(jpg|gif|png|css|js) > { > try_files $uri @proxy; > } > > location @proxy > { > proxy_pass https://backend-www.acme.eu_p45100; > } > > location / > { > proxy_pass https://backend-www.acme.eu_p45100; > } > > } > upstream backend-www.acme.eu_p45100 > { > server 10.1.1.25:45100; > server 10.1.1.26:45100; > server 10.1.1.27:45100; > server 10.1.1.28:45100; > server 10.1.1.15:45100; > server 10.1.1.18:45100; > server 10.1.1.20:45100; > server 10.1.1.36:45100; > server 10.1.1.39:45100; > server 10.1.1.40:45100; > server 10.1.1.42:45100; > server 10.1.1.21:45100; > server 10.1.1.22:45100; > server 10.1.1.23:45100; > server 10.1.1.29:45100; > server 10.1.1.50:45100; > server 10.1.1.43:45100; > server 10.1.1.45:45100; > server 10.1.1.46:45100; > server 10.1.1.19:45100; > server 10.1.1.10:45100; > } > > > Isaac > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Nov 9 14:49:11 2012 From: nginx-forum at nginx.us (zuckbin) Date: Fri, 09 Nov 2012 09:49:11 -0500 Subject: nginx_status not found Message-ID: <5754c941c31d1e6f4d3fa0d13d5898dd.NginxMailingListEnglish@forum.nginx.org> Hi, i use nginx as a proxy for apache on Gentoo to deliver only static contents When i try to acces http://myhost/nginx_status i got an error: Not Found The requested URL /nginx_status was not found on this server. 
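[A likely explanation for the 404 above: --with-http_stub_status_module only compiles the module in; the status page still has to be enabled explicitly in a location block of the server being queried, along these lines:]

```nginx
# stub_status is not exposed anywhere by default; it has to be
# mapped to a location explicitly. Restricting access is a common
# precaution, since the counters reveal server internals.
location /nginx_status {
    stub_status on;
    access_log  off;
    allow       127.0.0.1;
    deny        all;
}
```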
nginx -V nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi --http-scgi-temp-path=/var/tmp/nginx/scgi --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --with-ipv6 --with-pcre --without-http_autoindex_module --without-http_browser_module --without-http_charset_module --without-http_empty_gif_module --without-http_fastcgi_module --without-http_geo_module --without-http_limit_req_module --without-http_limit_zone_module --without-http_map_module --without-http_memcached_module --without-http_referer_module --without-http_scgi_module --without-http_ssi_module --without-http_split_clients_module --without-http_upstream_ip_hash_module --without-http_userid_module --without-http_uwsgi_module --with-http_stub_status_module --with-http_ssl_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user=nginx --group=nginx i don't understand... Thanks for your help Bye. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232644,232644#msg-232644 From contact at jpluscplusm.com Fri Nov 9 15:22:41 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 9 Nov 2012 15:22:41 +0000 Subject: nginx not showing client ip In-Reply-To: References: Message-ID: On 9 November 2012 13:11, piyushbj wrote: > Dear All, > > I have setup nginx reverse proxy with IIS webserver. > > Client ---> Nginx RP ---> IIS 7 > > But i am unable to get client ip on IIS server's log. i am getting Ngnix > server's ip. That's because the nginx server *is* the one making the connection to the IIS instance. 
You need to teach your application about the X-Forwarded-For (X-F-F) HTTP header, or tell IIS to log this header in place of the client IP. Failing that, you could look at using something like https://devcentral.f5.com/weblogs/Joe/archive/2009/12/23/x-forwarded-for-http-module-for-iis7-source-included.aspx , but that's kind of overkill IMHO. For all of these, you'll need to add the client's IP to the X-F-F header in nginx. A bit of very simple googling should teach you how to achieve this. HTH, Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From contact at jpluscplusm.com Fri Nov 9 15:24:56 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 9 Nov 2012 15:24:56 +0000 Subject: nginx_status not found In-Reply-To: <5754c941c31d1e6f4d3fa0d13d5898dd.NginxMailingListEnglish@forum.nginx.org> References: <5754c941c31d1e6f4d3fa0d13d5898dd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Without you showing us the useful parts of your nginx config, we can't help. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From maxim at nginx.com Fri Nov 9 16:27:37 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 09 Nov 2012 20:27:37 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D01DB.5060607@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> Message-ID: <509D2EF9.1040008@nginx.com> On 11/9/12 5:15 PM, Isaac Hailperin wrote: [...] > I also wonder where the 512 worker_connections from the error > message come from. There is no such number in my config. Is it > hardcoded somewhere? > http://nginx.org/en/docs/ngx_core_module.html#worker_connections It's a default number of worker_connections. 
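[For context on Maxim's answer: when no explicit value is in effect, nginx of this era falls back to a compiled-in default of 512 worker_connections, which matches the number in the error message; and per Igor's earlier reply in this thread, the cache manager/loader processes do not use the configured value at all. The directive itself lives in the events block:]

```nginx
# worker_connections applies per worker process; the compiled-in
# default (512 at the time of this thread) is what shows up in the
# error message when the configured value is not picked up.
events {
    use epoll;
    worker_connections 2000;
}
```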
-- Maxim Konovalov +7 (910) 4293178 http://nginx.com/support.html From cmfileds at gmail.com Fri Nov 9 18:08:47 2012 From: cmfileds at gmail.com (CM Fields) Date: Fri, 9 Nov 2012 13:08:47 -0500 Subject: SPDY sockets staying open indefinitely Message-ID: We are seeing an issue with Nginx SPDY sockets staying open indefinitely. I understand that the SPDY patch is still beta and not ready for production. This server is a test box which is used as a mirror of the production system accepting public traffic. This is the source build we are using: Nginx 1.3.8 OpenSSL 1.0.1c SPDY patch.spdy-52.txt OpenBSD v5.2 (default install) NOTE: If nginx is built without the SPDY patch there are _NO_ issues at all and the server works like normal with keep-alive connections. The SPDY problem occurs when the offending client connects and makes a lot of SPDY error requests. Each of these requests takes a "worker_connections" slot. If the client makes more requests than the worker_connections directive allows, the web server denies all new connections. Essentially, this one IP has triggered a denial of service. What we are seeing in the logs is a client connecting and triggering a bunch of "SPDY ERROR while SSL handshaking" error messages in the error_log. There is no mention of the client IP in the access_log. According to the Pf logs and the firewall state table, the connections from the offending IP have been closed for hours. This server gets around 2000 connections per hour and only this one IP triggered this issue in 24 hours of operation. Sadly, I do not have packet dumps of this traffic so I do not know exactly what the client sent. Perhaps this is a badly written client or a malicious scan. I do not know. The only way to clear the open sockets and allow new connections is to completely restart the nginx daemon. The nginx.conf for this server is very basic. It just serves a few static resources. We tried adding some timeouts to help clear the open sockets, to no avail.
## Timeouts client_body_timeout 10; client_header_timeout 10; keepalive_timeout 180 180; send_timeout 10; reset_timedout_connection on; Here is the error_log with the client ip. Server is listening on localhost: 2012/11/08 01:42:59 [warn] 25619#0: *5792 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:00 [alert] 25619#0: *5792 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:00 [warn] 25619#0: *5792 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:10 [warn] 25619#0: *5796 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:11 [alert] 25619#0: *5796 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:11 [warn] 25619#0: *5796 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:22 [warn] 25619#0: *5803 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:23 [alert] 25619#0: *5803 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:23 [warn] 25619#0: *5803 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:33 [warn] 25619#0: *5804 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:34 [alert] 25619#0: *5804 spdy inflate() failed: -5 while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 2012/11/08 01:43:34 [warn] 25619#0: *5804 SPDY ERROR while SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 Here is a fstat of the open sockets. These sockets will never close until the nginx daemon is restarted. 
# fstat -n | grep inter daemon nginx 25619 6* internet stream tcp 0xfffffe821e98fd20 127.0.0.1:80 daemon nginx 25619 7* internet stream tcp 0xfffffe820cb94970 127.0.0.1:443 daemon nginx 25619 108* internet stream tcp 0xfffffe820e7354f0 127.0.0.1:443 <-- 210.77.27.XX:2406 daemon nginx 25619 113* internet stream tcp 0x0 *:0 daemon nginx 25619 115* internet stream tcp 0x0 *:0 daemon nginx 25619 117* internet stream tcp 0x0 *:0 daemon nginx 25619 118* internet stream tcp 0x0 *:0 daemon nginx 25619 123* internet stream tcp 0x0 *:0 daemon nginx 25619 124* internet stream tcp 0xfffffe82075ab2d0 127.0.0.1:443 <-- 210.77.27.XX:2284 daemon nginx 25619 125* internet stream tcp 0xfffffe82075ab730 127.0.0.1:443 <-- 210.77.27.XX:2332 daemon nginx 25619 126* internet stream tcp 0xfffffe8211d3dd90 127.0.0.1:443 <-- 210.77.27.XX:2358 daemon nginx 25619 127* internet stream tcp 0xfffffe8217f1c8d8 127.0.0.1:443 <-- 210.77.27.XX:2386 daemon nginx 25619 128* internet stream tcp 0xfffffe820e7352c0 127.0.0.1:443 <-- 210.77.27.XX:2376 daemon nginx 25619 133* internet stream tcp 0xfffffe8217f1c018 127.0.0.1:443 <-- 210.77.27.XX:2309 daemon nginx 25619 141* internet stream tcp 0xfffffe820e735950 127.0.0.1:443 <-- 210.77.27.XX:2413 daemon nginx 25619 142* internet stream tcp 0xfffffe820cb942e0 127.0.0.1:443 <-- 210.77.27.XX:2315 daemon nginx 25619 143* internet stream tcp 0xfffffe8211d3d930 127.0.0.1:443 <-- 210.77.27.XX:2434 If it helps, memory usage and CPU time for the daemon is low: PID USERNAME PRI NICE SIZE RES STATE WAIT TIME CPU COMMAND 5611 daemon 2 0 17M 6032K sleep/1 kqread 1:52 0.10% nginx 7311 root 18 0 13M 1040K idle pause 0:00 0.00% nginx I just wanted to report this issue in case someone else had the same problem. I wish I had more information, but at this time I am not sure what the client is sending to cause the hanging open sockets. If there is any other information that will help or if a new patch needs testing please tell me. Have a great weekend! 
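The hung sockets can also be tallied per remote address rather than eyeballed. A sketch using awk over lines shaped like the netstat output for this box (the sample addresses below are made up; in practice you would pipe `netstat -n` into the awk instead):

```shell
# Count CLOSE_WAIT sockets per remote address.
# The inlined sample mimics the BSD netstat column layout;
# the addresses are illustrative, not from this report.
netstat_sample='tcp 0 0 127.0.0.1.443 210.77.27.10.2284 CLOSE_WAIT
tcp 0 0 127.0.0.1.443 210.77.27.10.2309 CLOSE_WAIT
tcp 0 0 127.0.0.1.443 192.0.2.7.1111 ESTABLISHED'

printf '%s\n' "$netstat_sample" |
awk '$6 == "CLOSE_WAIT" {
         sub(/\.[0-9]+$/, "", $5)   # strip the remote port
         n[$5]++
     }
     END { for (a in n) print n[a], a }'
# prints: 2 210.77.27.10
```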
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 9 18:26:35 2012 From: nginx-forum at nginx.us (mevans336) Date: Fri, 09 Nov 2012 13:26:35 -0500 Subject: Internal 503 Redirect Issues? Message-ID: <3649ec49eef97d1a713617c1445b2518.NginxMailingListEnglish@forum.nginx.org> Hello Everyone, I am attempting to configure an internal redirect for any 502/503 errors in the event our backend servers are down. Here is the relevant part of my configuration: location / { add_header X-Frame-Options SAMEORIGIN; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_next_upstream error timeout invalid_header; proxy_intercept_errors on; error_page 502 503 504 @errors; proxy_pass http://jboss_dev_servers; } location @errors { root /usr/share/nginx/html; try_files $uri /50x.html =503; } } If I disable JBoss on my backend server, the 500 status code is intercepted and the 50x.html page is displayed. However, I am only receiving a partial page render. It's almost like part of my .css file is not being sent to the browser, as I get the text and images, but none of my CSS formatting. Here is my /usr/share/nginx/html directory structure, all users have read access to the directories and files: -rw-r--r-- 1 root root 8035 2012-11-09 13:06 50x.html drwxr-xr-x 2 root root 4096 2012-11-09 12:21 images drwxr-xr-x 2 root root 4096 2012-11-09 12:21 js drwxr-xr-x 2 root root 4096 2012-11-09 12:21 styles Here is a screenshot of how the page renders: https://dl.dropbox.com/u/1540472/oops.png I think my @error location is configured incorrectly, but I've been beating my head against a wall for an hour or so. Any ideas? 
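One thing worth ruling out here (an editor's guess, not a confirmed diagnosis for this report): with the backend down, requests for the error page's CSS/JS still match `location /` and get proxied to the dead upstream. Serving those directories statically would sidestep that — a sketch using the paths from the listing above:

```nginx
# Serve the error page's assets directly so a backend outage
# doesn't also break the page's stylesheets and images.
location ~ ^/(styles|js|images)/ {
    root /usr/share/nginx/html;
}
```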
Thanks, Matt Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232655,232655#msg-232655 From i.hailperin at heinlein-support.de Fri Nov 9 18:33:11 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Fri, 09 Nov 2012 19:33:11 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D2EF9.1040008@nginx.com> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> Message-ID: <509D4C67.8090807@heinlein-support.de> On 11/09/2012 05:27 PM, Maxim Konovalov wrote: > On 11/9/12 5:15 PM, Isaac Hailperin wrote: > [...] >> I also wonder where the 512 worker_connections from the error >> message come from. There is no such number in my config. Is it >> hardcoded somewhere? >> > http://nginx.org/en/docs/ngx_core_module.html#worker_connections > > It's a default number of worker_connections. Yes, but if I specify a different number, like http://www.ruby-forum.com/topic/4407591#1083581 this should be different. Now this could lead to the conclusion that nginx is not reading that file, but nginx -t clearly shows that it is. Also, if I introduce syntactic errors in that file, nginx complains. As Igor Sysoev suggested earlier http://www.ruby-forum.com/topic/4407591#1083572 the worker_connection parameter might not be related, since the cache manager and loader also use connections. If these are hard-coded to a max of 512, this might be the cause: there are exactly 1002 vhosts which each listen on a different port. Now it's not 1024, which would be 512*2, but maybe there is some overhead which makes me come to this limit? If my thinking is correct (?), is there a way to overcome this limit? (other than using just one port for ssl ... it would mean using different ip addresses, which would have the same effect, I guess?) Any thoughts on this are welcome.
Isaac From maxim at nginx.com Fri Nov 9 18:52:15 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 09 Nov 2012 22:52:15 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D4C67.8090807@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> Message-ID: <509D50DF.7000900@nginx.com> What does 'cat /proc/sys/fs/file-nr' say? -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/support.html From i.hailperin at heinlein-support.de Fri Nov 9 18:56:06 2012 From: i.hailperin at heinlein-support.de (Isaac Hailperin) Date: Fri, 09 Nov 2012 19:56:06 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D50DF.7000900@nginx.com> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> <509D50DF.7000900@nginx.com> Message-ID: <509D51C6.3020702@heinlein-support.de> On 11/09/2012 07:52 PM, Maxim Konovalov wrote: > What does 'cat /proc/sys/fs/file-nr' say? > cat /proc/sys/fs/file-nr 1696 0 205028 From cmfileds at gmail.com Fri Nov 9 19:07:02 2012 From: cmfileds at gmail.com (CM Fields) Date: Fri, 9 Nov 2012 14:07:02 -0500 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: Message-ID: A little more information. Here we see netstat in CLOSE_WAIT state to the offending ip.
# netstat | grep tcp tcp 0 0 localhost.https 210.77.27.XX.2284 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2309 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2315 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2332 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2358 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2376 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2386 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2406 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2413 CLOSE_WAIT tcp 0 0 localhost.https 210.77.27.XX.2434 CLOSE_WAIT On Fri, Nov 9, 2012 at 1:08 PM, CM Fields wrote: > We are seeing an issue with Nginx SPDY sockets staying open indefinitely. > I understand that the SPDY patch is still beta and not ready for > production. This server is a test box which is used as a mirror of the > production system accepting public traffic. > > This is the source build we are using: > Nginx 1.3.8 > OpenSSL 1.0.1c > SPDY patch.spdy-52.txt > OpenBSD v5.2 (default install) > > NOTE: If nginx is built without the SPDY patch there are _NO_ issues at > all and the server works like normal with keep alive connections. > > The SPDY problem occurs when the offending client connects and they make a > lot of SPDY error requests. Each of these requests takes a > "worker_connections" slot. If the client makes more requests then the > worker_connections directive allows the web server denies all new > connections. Essentially, this one ip has triggered a denial of service. > > What we are seeing in the logs is a client connecting and triggering a > bunch of "SPDY ERROR while SSL handshaking" error messages in the > error_log. There is no mention of the client ip in the access_log. > According to the Pf logs and the firewall state table the connections from > the offending ip have been closed for hours. This server gets around 2000 > connections per hour and only this one ip triggered this issue in 24 hours > of operation. 
Sadly, I do not have packet dumps of this traffic so I do not > know exactly what the client sent. Perhaps this is a badly written client > or a malicious scan. I do not know. > > The only way to clear the open sockets and allow new connections is to > completely restart the nginx daemon. > > The nginx.conf for this server is very basic. It just serves a few static > resources. We tried adding some timeouts to help clear the open sockets to > no avail. > > ## Timeouts > client_body_timeout 10; > client_header_timeout 10; > keepalive_timeout 180 180; > send_timeout 10; > reset_timedout_connection on; > > > Here is the error_log with the client ip. Server is listening on localhost: > > 2012/11/08 01:42:59 [warn] 25619#0: *5792 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:00 [alert] 25619#0: *5792 spdy inflate() failed: -5 while > SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:00 [warn] 25619#0: *5792 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:10 [warn] 25619#0: *5796 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:11 [alert] 25619#0: *5796 spdy inflate() failed: -5 while > SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:11 [warn] 25619#0: *5796 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:22 [warn] 25619#0: *5803 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:23 [alert] 25619#0: *5803 spdy inflate() failed: -5 while > SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:23 [warn] 25619#0: *5803 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:33 [warn] 25619#0: *5804 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 
01:43:34 [alert] 25619#0: *5804 spdy inflate() failed: -5 while > SSL handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > 2012/11/08 01:43:34 [warn] 25619#0: *5804 SPDY ERROR while SSL > handshaking, client: 210.77.27.XX, server: 127.0.0.1:443 > > > > Here is a fstat of the open sockets. These sockets will never close until > the nginx daemon is restarted. > > # fstat -n | grep inter > > daemon nginx 25619 6* internet stream tcp 0xfffffe821e98fd20 > 127.0.0.1:80 > daemon nginx 25619 7* internet stream tcp 0xfffffe820cb94970 > 127.0.0.1:443 > daemon nginx 25619 108* internet stream tcp 0xfffffe820e7354f0 > 127.0.0.1:443 <-- 210.77.27.XX:2406 > daemon nginx 25619 113* internet stream tcp 0x0 *:0 > daemon nginx 25619 115* internet stream tcp 0x0 *:0 > daemon nginx 25619 117* internet stream tcp 0x0 *:0 > daemon nginx 25619 118* internet stream tcp 0x0 *:0 > daemon nginx 25619 123* internet stream tcp 0x0 *:0 > daemon nginx 25619 124* internet stream tcp 0xfffffe82075ab2d0 > 127.0.0.1:443 <-- 210.77.27.XX:2284 > daemon nginx 25619 125* internet stream tcp 0xfffffe82075ab730 > 127.0.0.1:443 <-- 210.77.27.XX:2332 > daemon nginx 25619 126* internet stream tcp 0xfffffe8211d3dd90 > 127.0.0.1:443 <-- 210.77.27.XX:2358 > daemon nginx 25619 127* internet stream tcp 0xfffffe8217f1c8d8 > 127.0.0.1:443 <-- 210.77.27.XX:2386 > daemon nginx 25619 128* internet stream tcp 0xfffffe820e7352c0 > 127.0.0.1:443 <-- 210.77.27.XX:2376 > daemon nginx 25619 133* internet stream tcp 0xfffffe8217f1c018 > 127.0.0.1:443 <-- 210.77.27.XX:2309 > daemon nginx 25619 141* internet stream tcp 0xfffffe820e735950 > 127.0.0.1:443 <-- 210.77.27.XX:2413 > daemon nginx 25619 142* internet stream tcp 0xfffffe820cb942e0 > 127.0.0.1:443 <-- 210.77.27.XX:2315 > daemon nginx 25619 143* internet stream tcp 0xfffffe8211d3d930 > 127.0.0.1:443 <-- 210.77.27.XX:2434 > > > If it helps, memory usage and CPU time for the daemon is low: > > PID USERNAME PRI NICE SIZE RES STATE WAIT TIME CPU > COMMAND > 5611 daemon 
2 0 17M 6032K sleep/1 kqread 1:52 > 0.10% nginx 7311 root 18 0 13M 1040K idle pause > 0:00 0.00% nginx > > I just wanted to report this issue in case someone else had the same > problem. I wish I had more information, but at this time I am not sure what > the client is sending to cause the hanging open sockets. If there is any > other information that will help or if a new patch needs testing please > tell me. > > Have a great weekend! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.heinlein at heinlein-support.de Fri Nov 9 19:36:28 2012 From: p.heinlein at heinlein-support.de (Peer Heinlein) Date: Fri, 09 Nov 2012 20:36:28 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D4C67.8090807@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> Message-ID: <509D5B3C.4010204@heinlein-support.de> On 09.11.2012 19:33, Isaac Hailperin wrote: I did several hours of testing today with Isaac and there are two problems. PROBLEM/BUG ONE: First of all: The customer has 1,000 SSL hosts on the nginx server, so he wants to have 1,000 listeners on TCP ports. But the cache_manager isn't able to open so many listeners. It crashes after 512 open listeners. It looks very much like the cache_manager doesn't read the worker_connections setting from nginx.conf. We configured: worker_connections 10000; there, but the cache_manager crashes with 2012/11/09 17:53:11 [alert] 9345#0: 512 worker_connections are not enough 2012/11/09 17:53:12 [alert] 9330#0: cache manager process 9344 exited with fatal code 2 and cannot be respawned I did some testing: Having 505 SSL hosts on the server (= 505 listener sockets) everything's working fine, but 515 listener sockets aren't possible.
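The 505-vs-515 test can be scripted; the sketch below writes N per-port ssl server blocks to an include file (every path, port, and hostname here is an assumption for illustration):

```shell
# Generate 515 ssl server{} blocks, one listen port each.
# Ports, hostnames and certificate paths are placeholders.
N=515
BASE_PORT=20000
OUT=vhosts.conf
: > "$OUT"
i=0
while [ "$i" -lt "$N" ]; do
    port=$((BASE_PORT + i))
    cat >> "$OUT" <<EOF
server {
    listen 127.0.0.1:$port ssl;
    server_name host$i.example.test;
    ssl_certificate     /etc/nginx/ssl/host$i.crt;
    ssl_certificate_key /etc/nginx/ssl/host$i.key;
}
EOF
    i=$((i + 1))
done
echo "wrote $N server blocks to $OUT"
```

The generated file can then be pulled in with an `include vhosts.conf;` inside the http block.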
It's easy to reproduce: just define 515 ssl-domains with a different TCP port for every domain. :-) Looks like nobody had the idea before that "somebody" (TM) could run more than two /24 networks' worth of IPs on one single host. In fact, this does not happen in normal life... But for historical reasons (TM) our customer uses ONE IP address and several TCP ports instead, so he doesn't have a problem running so many different SSL hosts on one system -- and this is the special situation where we can see the bug (?), that the cache_manager ignores the worker_connections setting (?) when it tries to open all the listeners and related cache files/sockets. So: Looks like a bug? Who can help? We need help... PROBLEM/BUG TWO: Having 16 workers for 1000 ssl-domains with 1000 listeners, we can see 16 * 1000 open TCP listeners on that system, because every worker opens its own listeners (?). When we reach the magical barrier of 16386 open listeners (lsof -i | grep -c nginx), nginx runs into some kind of file limitation: 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning "worker process" (24: Too many open files) 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning "cache manager process" (24: Too many open files) 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning "cache loader process" (24: Too many open files) It's very easy to see that the limitation is based on 16,386 open files and sockets from nginx. But I can't find the place where this limitation comes from. "ulimit -n" is set to 100,000, everything's looking fine and should work with many more open files than just 16K. Could it be that "nobody" (TM) expected that "somebody" (TM) runs more than 1000 ssl-hosts with different TCP ports on 16 worker instances and that there's some kind of SMALL-INT problem in the nginx code? Could it be that this isn't a limitation of the Linux system, but of some kind of too-small address space in nginx?
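One knob worth checking against the 16K wall (untested against this setup, so only a sketch): nginx has its own descriptor-limit directive, independent of the shell's `ulimit -n`:

```nginx
# main context of nginx.conf: per-worker open-file limit set by
# nginx itself; a shell ulimit does not necessarily reach
# daemonized workers started from an init script.
worker_rlimit_nofile 100000;
```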
So: Looks like a bug? Who can help? We need help... Peer -- Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-42 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz: Berlin From andrew at nginx.com Fri Nov 9 20:06:14 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Sat, 10 Nov 2012 00:06:14 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D5B3C.4010204@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> <509D5B3C.4010204@heinlein-support.de> Message-ID: Hi, On Nov 9, 2012, at 23:36, Peer Heinlein wrote: > On 09.11.2012 19:33, Isaac Hailperin wrote: > > > > I did several hours of testing today with Isaac and there are two problems. > > PROBLEM/BUG ONE: > > First of all: The customer has 1.000 SSL-hosts on the nginx-Server, so > he wants to have 1000 listeners on TCP-Ports. But the cache_manager > isn't able to open so many listeners. He's crashing after 512 open > listeners. It looks very much like the cache_manager doesn't read the > worker_connections setting from nginx.conf. > > We configured: > > worker_connections 10000; > > there, but the cache_manager crashes with > > 2012/11/09 17:53:11 [alert] 9345#0: 512 worker_connections are not enough > 2012/11/09 17:53:12 [alert] 9330#0: cache manager process 9344 exited > with fatal code 2 and cannot be respawned > > > I did some testing: Having 505 SSL-hosts on the Server (=505 listener > sockets) everything's working fine, but 515 listener sockets aren't > possible.
:-) > > Looks like nobody had the idea before, that "somebody" (TM) could run > more then 2 times /24-network-IPs on one single host. In fact, this does > not happen in normal life... > > But for historical reasons (TM) our customer uses ONE ip-address and > several TCP-Ports for that so he doesn't have a problem running so many > differend SSL-hosts on one system -- and this is the special situation > where we can see the bug (?), that the cache_manager ignores the > worker_connection-setting (?), when he tries to open all the listeners > and relating cache-files/sockets. > > So: Looks like a bug? Who can help? We need help... > > > PROBLEM/BUG TWO: > > Having 16 workers for 1000 ssl-domains with 1000 listeners, we can see > 16 * 1000 open TCP-listeners on that system, because every worker open > it's own listeners (?). When we reach the magical barrier of 16386 open > listeners (lsof -i | grep -c nginx), nginx is running into some kind of > file limitations: > > 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning > "worker process" (24: Too many open files) > 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning > "cache manager process" (24: Too many open files) > 2012/11/09 20:32:05 [alert] 9933#0: socketpair() failed while spawning > "cache loader process" (24: Too many open files) > > It's very easy to see, that the limitation is based on 16.386 open files > and sockets from nginx. > > But I can't find the place, where this limitation comes from. "ulimit > -n" is set to 100.000, everything's looking fine and should work with > many more open files then just 16K. > > Could it be, that "nobody" (TM) expected, that "somebody" (TM) runs more > then 1000 ssl-hosts with different TCP-ports on 16 worker-instances and > that there's some kind of SMALL-INT-problem in the nginx code? Could it > be, that this isn't a limitation from the linux system, but from some > kind of too small address-space for that in nginx? 
> > So: Looks like a bug? Who can help? We need help... > Peer > > > -- > Heinlein Support GmbH Are you looking for a commercial support option to back up your customer's contract with an underpinning contract and vendor support? If that's the case, we've got our support options described here: http://nginx.com/support.html Hope this helps > Schwedter Str. 8/9b, 10119 Berlin > > http://www.heinlein-support.de > > Tel: 030 / 405051-42 > Fax: 030 / 405051-19 > > Zwangsangaben lt. §35a GmbHG: > HRB 93818 B / Amtsgericht Berlin-Charlottenburg, > Geschäftsführer: Peer Heinlein -- Sitz: Berlin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From p.heinlein at heinlein-support.de Fri Nov 9 20:15:13 2012 From: p.heinlein at heinlein-support.de (Peer Heinlein) Date: Fri, 09 Nov 2012 21:15:13 +0100 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> <509D5B3C.4010204@heinlein-support.de> Message-ID: <509D6451.5080207@heinlein-support.de> On 09.11.2012 21:06, Andrew Alexeev wrote: > Are you looking for a commercial support option to back up your customer's contract with an underpinning contract and vendor support? First of all, I'm reporting some severe bugs in nginx. nginx should be interested in that, and we *really* spent a lot of time debugging and analyzing this (and much of this time has NOT been paid). And: I've already been on the commercial support page, but there was no "by call" support. I'm not interested in 12-month contracts to solve one single problem. I do ** NOT ** have a problem paying somebody to fix that. I would have been happy the last few days to have somebody else familiar with nginx debugging and fixing that.
Unfortunately there was no "by call" support (or I haven't found it). Feel free to send me off-list an offer about fixing this bug ASAP. I'd appreciate it! Peer -- Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-42 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz: Berlin From andrew at nginx.com Fri Nov 9 20:37:28 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Sat, 10 Nov 2012 00:37:28 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D6451.5080207@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> <509D5B3C.4010204@heinlein-support.de> <509D6451.5080207@heinlein-support.de> Message-ID: <4F8D36A7-122E-4501-9940-7F373AED5421@nginx.com> On Nov 10, 2012, at 0:15, Peer Heinlein wrote: > On 09.11.2012 21:06, Andrew Alexeev wrote: > >> Are you looking for a commercial support option to back up your customer's contract with an underpinning contract and vendor support? > > First of all I'm reporting some severe bugs in nginx. nginx should be > interested in that and we *really* spent a lot of time debugging and > analyzing this (and much of this time has NOT been paid). Thanks much. What about also filling out a bug report in trac, please? We'd definitely look more into that one and fix it during our normal dev cycle for 1.3.x. > And: > > I've already been on the commercial support page but there was no "by > call" support. I'm not interested in 12-month contracts to solve one > single problem. Got it. > I do ** NOT ** have a problem paying somebody to fix that. I would have > been happy the last few days having somebody else familiar with nginx > debugging and fixing that.
> > Unfortunately there was no "by call" support (or I haven't found that). I'm glad you like what you're doing for a living. Appreciate your efforts debugging nginx too. We fix a lot of things and often - check the changelogs. We don't have enough resources to fix everything ASAP though. If you've got certain commercial commitments, so do we. There are different options on http://nginx.com/support.html including an option for a custom inquiry. > Feel free to send me offlist an offer about fixing this bug ASAP. > I'd appreciate this! > > Peer > > -- > Heinlein Support GmbH > Schwedter Str. 8/9b, 10119 Berlin > > http://www.heinlein-support.de > > Tel: 030 / 405051-42 > Fax: 030 / 405051-19 > > Zwangsangaben lt. §35a GmbHG: > HRB 93818 B / Amtsgericht Berlin-Charlottenburg, > Geschäftsführer: Peer Heinlein -- Sitz: Berlin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From julyclyde at gmail.com Sat Nov 10 12:14:38 2012 From: julyclyde at gmail.com (=?GB2312?B?yM7P/sDa?=) Date: Sat, 10 Nov 2012 20:14:38 +0800 Subject: Internal 503 Redirect Issues? In-Reply-To: <3649ec49eef97d1a713617c1445b2518.NginxMailingListEnglish@forum.nginx.org> References: <3649ec49eef97d1a713617c1445b2518.NginxMailingListEnglish@forum.nginx.org> Message-ID: <79FAC8AE-221E-405F-A6EF-1BEB637E05F9@gmail.com> the URL of css -- from iPad2 3G On 2012-11-10, at 2:26 PM, "mevans336" wrote: > Hello Everyone, > > I am attempting to configure an internal redirect for any 502/503 errors in > the event our backend servers are down.
Here is the relevant part of my > configuration: > > location / { > add_header X-Frame-Options SAMEORIGIN; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_next_upstream error timeout invalid_header; > proxy_intercept_errors on; > error_page 502 503 504 @errors; > proxy_pass http://jboss_dev_servers; > > } > > location @errors { > root /usr/share/nginx/html; > try_files $uri /50x.html =503; > } > } > > If I disable JBoss on my backend server, the 500 status code is intercepted > and the 50x.html page is displayed. However, I am only receiving a partial > page render. It's almost like part of my .css file is not being sent to the > browser, as I get the text and images, but none of my CSS formatting. > > Here is my /usr/share/nginx/html directory structure, all users have read > access to the directories and files: > > -rw-r--r-- 1 root root 8035 2012-11-09 13:06 50x.html > drwxr-xr-x 2 root root 4096 2012-11-09 12:21 images > drwxr-xr-x 2 root root 4096 2012-11-09 12:21 js > drwxr-xr-x 2 root root 4096 2012-11-09 12:21 styles > > Here is a screenshot of how the page renders: > https://dl.dropbox.com/u/1540472/oops.png > > I think my @error location is configured incorrectly, but I've been beating > my head against a wall for an hour or so. Any ideas? > > Thanks, > > Matt > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232655,232655#msg-232655 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Nov 11 19:14:21 2012 From: nginx-forum at nginx.us (MarkA) Date: Sun, 11 Nov 2012 14:14:21 -0500 Subject: nginx reload vs restart Message-ID: <8c0feee0a16b201b33c625a5902b347f.NginxMailingListEnglish@forum.nginx.org> Hello all I am new to nginx and wanted to confirm if my understanding is correct. 
To restart nginx like any other service would cause downtime. If I change nginx configurations I can do a reload to cause the new configurations to take effect without any downtime to my site, is that correct? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232706,232706#msg-232706 From david at styleflare.com Sun Nov 11 19:26:32 2012 From: david at styleflare.com (David J) Date: Sun, 11 Nov 2012 14:26:32 -0500 Subject: nginx reload vs restart In-Reply-To: <8c0feee0a16b201b33c625a5902b347f.NginxMailingListEnglish@forum.nginx.org> References: <8c0feee0a16b201b33c625a5902b347f.NginxMailingListEnglish@forum.nginx.org> Message-ID: You can send a kill with a HUP signal to reload the configs; it's documented on the nginx wiki. On Nov 11, 2012 2:14 PM, "MarkA" wrote: > Hello all I am new to nginx and wanted to confirm if my understanding is > correct. To restart nginx like any other service would cause downtime. If I > change nginx configurations I can do a reload to cause the new > configurations to take effect without any downtime to my site, is that > correct? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,232706,232706#msg-232706 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Nov 11 19:47:01 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 11 Nov 2012 23:47:01 +0400 Subject: nginx reload vs restart In-Reply-To: <8c0feee0a16b201b33c625a5902b347f.NginxMailingListEnglish@forum.nginx.org> References: <8c0feee0a16b201b33c625a5902b347f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121111194701.GN40452@mdounin.ru> Hello! On Sun, Nov 11, 2012 at 02:14:21PM -0500, MarkA wrote: > Hello all I am new to nginx and wanted to confirm if my understanding is > correct. To restart nginx like any other service would cause downtime.
If I > change nginx configurations I can do a reload to cause the new > configurations to take effect without any downtime to my site is that > correct? Yes. More details about controlling running nginx may be found here: http://nginx.org/en/docs/control.html This, in addition to configuration reload, includes nginx executable upgrade without downtime. -- Maxim Dounin http://nginx.com/support.html From aribe.hernandez at gmail.com Sun Nov 11 22:57:25 2012 From: aribe.hernandez at gmail.com (Aribe Hernandez) Date: Sun, 11 Nov 2012 23:57:25 +0100 Subject: Nginx, the future of SPDY and End-of-Life for SPDY/2 Message-ID: Hi, The SPDY/2 patch at http://nginx.org/patches/spdy/ hasn't seen any development in the last three months despite a number of TODO and XXX marks in the code. >From nginx-dev@ talk I've understood the patch in considered stable - which is in line with my own experience after having run a number of Nginx servers with the patch for the last few months. It does indeed seem solid. (Great work btw, and big ups to Automattic for sponsoring the implementation!) Back in August there was some talk on the spdy-dev Google Group about when to EOL SPDY/2 and it was suggested that Google would drop SPDY/2 from Chrome 23 in early November. Chrome 23 has since been released and fortunately it still supports SPDY/2. https://groups.google.com/forum/#!msg/spdy-dev/zvA6Ohqs9Ew/8kkBLYMniQoJ https://groups.google.com/forum/?fromgroups=#!topic/spdy-dev/A0sCEnZBEcs The SPDY/3 spec was published as an IETF draft in February, 2012 and since then support for SPDY/3 has shown up in major browsers (July, 2012). Among non-browser software, Jetty (Java app server), HAproxy (load balancing proxy) and the mod_spdy module for Apache all features support for SPDY/3. Work is currently underway on developing the next iteration of the spec, SPDY/4. Nginx is worryingly missing from the list of software supporting SPDY/3 (as well as general SPDY discussions). 
Was the SPDY/2 thing for Nginx just a one shot thing or are there actual plans for the future of SPDY in Nginx? -- A. Hnz From nginx-forum at nginx.us Mon Nov 12 01:57:55 2012 From: nginx-forum at nginx.us (arnoldguo) Date: Sun, 11 Nov 2012 20:57:55 -0500 Subject: How many connections can nginx handle? In-Reply-To: <509CD11E.4050704@nginx.com> References: <509CD11E.4050704@nginx.com> Message-ID: Maxim Konovalov Wrote: ------------------------------------------------------- > On 11/9/12 7:46 AM, arnoldguo wrote: > > I have a server running proxy_pass the http request(mainly > GET,html,jpg > > etc.) to backend server(20+ server running nginx cache). > > this server with 32 core CPU,64G RAM,2X 10G NIC,RHEL 6.3,backend > server with > > 1000M NIC. > > When the load raise to 300k connections,3Gbps network traffic,new > connection > > can not be serviced,cannot connect to nginx's port,but the system > load is > > low(~1.00),and has many free memory, > > no errorlog in /var/log/message,has some nginx errorlog > like"upstream timed > > out to backend server", > > but the backend's service is normal,load is not hight... > > i have tune tcp config,can handle 2000K conn,it seems that nginx > cannot > > handle so much connection? > > > How did you do that and how did you check that the OS can handle 2M > tcp connections? > I use a 3rd nginx PUSH module,and can get 2M nginx connections... > > nginx config: > > worker_processes 32;use worker_cpu_affinity; > > keepalive 32 to backend servers, > > proxy_buffering on or off has the same result... > > > It's almost always about tuning various OS limits not nginx. 
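For illustration only, the nginx-side limits that usually cap concurrent connections look roughly like this (values are placeholders, not tuned recommendations; note that a proxied connection consumes two descriptors, one to the client and one to the upstream):

```nginx
# Sketch: raise nginx's own limits. The per-worker descriptor limit must
# cover worker_connections, and the kernel must allow it (on Linux see
# fs.file-max, fs.nr_open and the per-user "nofile" limit as well).
worker_processes     32;
worker_rlimit_nofile 1048576;

events {
    worker_connections 1048576;
}
```

Whether the kernel, any firewall/conntrack tables, and the ephemeral port range also allow that many sockets has to be checked separately.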
> > As for nginx, this link is a good start: > > http://nginx.org/en/docs/ngx_core_module.html#worker_connections > > -- > Maxim Konovalov > +7 (910) 4293178 > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232623,232712#msg-232712 From vbart at nginx.com Mon Nov 12 03:01:16 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 12 Nov 2012 07:01:16 +0400 Subject: How many connections can nginx handle? In-Reply-To: References: Message-ID: <201211120701.16267.vbart@nginx.com> On Friday 09 November 2012 07:46:50 arnoldguo wrote: > [...] > nginx config: > worker_processes 32;use worker_cpu_affinity; > keepalive 32 to backend servers, > proxy_buffering on or off has the same result... http://nginx.org/r/worker_connections wbr, Valentin V. Bartenev From agentzh at gmail.com Mon Nov 12 04:40:03 2012 From: agentzh at gmail.com (agentzh) Date: Sun, 11 Nov 2012 20:40:03 -0800 Subject: [ANN] ngx_openresty devel version 1.2.4.7 released In-Reply-To: References: Message-ID: Hello! I am happy to announce the new development version of ngx_openresty, 1.2.4.7: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.4.5: * upgraded LuaJIT to 2.0.0rc3. * upgraded LuaNginxModule to 0.7.4. * feature: added new directive lua_check_client_abort (default to "off") for monitoring and processing the event that the client closes the (downstream) connection prematurely. thanks Zhu Dejiang for request this feature. * feature: added new Lua API ngx.on_abort() which is used to register user Lua function callback for the event that the client closes the (downstream) connection prematurely. thanks Matthieu Tourne for suggesting this feature. 
* feature: ngx.exit(N) can now abort pending subrequests when "N = 408" (request time out) or "N = 499" (client closed request) or "N = -1" (error). * bugfix: The TCP/stream cosocket's connect() method could not detect errors like "connection refused" when kqueue was used (on FreeBSD or Mac OS X, for example). thanks smallfish for reporting this issue. * bugfix: reading operations on ngx.req.socket() did not return any errors when the request body got truncated; now we return the "client aborted" error. * upgraded LuaRestyDNSLibrary to 0.09. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. * bugfix: a debugging output might be sent to stdout unexpectedly in some code path. * upgraded LuaRestyMemcachedLibrary to 0.10. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. * docs: fixed typos in README. thanks cyberty for the patch. * upgraded LuaRestyRedisLibrary to 0.15. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. * optimize: avoided using "ipairs()" which is slower than plain "for i=1,N" loops. * upgraded LuaRestyMySQLLibrary to 0.11. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. * feature: now the new() method will return a string describing the error as the second return value in case of failures. * upgraded LuaRestyUploadLibrary to 0.04. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. * upgraded LuaRestyStringLibrary to 0.07. * refactor: avoided using "package.seeall" in Lua module definitions, which improves performance and also prevents subtle bad side-effects. 
* docs: typo-fixes in the code samples from Bearnard Hibbins. * bugfix: Nginx upstream modules could not detect the "connection refused" error in time if kqueue was used; now we apply the upstream_test_connect_kqueue patch for the Nginx core. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002004 OpenResty (aka. ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster and ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Have fun! -agentzh From nginx-forum at nginx.us Mon Nov 12 08:30:49 2012 From: nginx-forum at nginx.us (arnoldguo) Date: Mon, 12 Nov 2012 03:30:49 -0500 Subject: How many connections can nginx handle? In-Reply-To: <201211120701.16267.vbart@nginx.com> References: <201211120701.16267.vbart@nginx.com> Message-ID: worker_connections has been set to 1048576; in a reverse proxy situation, max clients becomes max_clients = worker_processes * worker_connections / 4, which should handle 8 million connections, but my server is limited to 300k connections. What's wrong? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232623,232718#msg-232718 From nginx-forum at nginx.us Mon Nov 12 09:03:49 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 12 Nov 2012 04:03:49 -0500 Subject: valid_referers directive not working correctly Message-ID: <08f897a740a60a52329b8e0aaa272d4c.NginxMailingListEnglish@forum.nginx.org> I am trying to block all requests which do not come from my own server. A quick read of the nginx wiki led me to the valid_referers directive.
I implemented it like: server { listen 80; server_name ~^(?<account>.+)\.my-domain\.io$; root /srv/www/accounts/$account/app; index index.php; access_log /var/log/nginx/accounts/$account/access.log; error_log /var/log/nginx/accounts/error.log; include /etc/nginx/excludes.conf; include /etc/nginx/expires.conf; location / { valid_referers server_names not-my-domain.com; if ($invalid_referer) { return 403; } location ~\.php { try_files $uri =404; fastcgi_index index.php; fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:3001; include /etc/nginx/fastcgi_params; fastcgi_param MY_DOMAIN_ACCOUNT $account; } } I purposefully put not-my-domain.com instead of my-domain.com to make sure a 403 status code was returned. Unfortunately, it is not. I wrote a simple html file with an iframe that grabs a php page from the server from a different domain. This should be returning a 403 code, but it works. Any ideas? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232722,232722#msg-232722 From ne at vbart.ru Mon Nov 12 09:13:38 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Mon, 12 Nov 2012 13:13:38 +0400 Subject: valid_referers directive not working correctly In-Reply-To: <08f897a740a60a52329b8e0aaa272d4c.NginxMailingListEnglish@forum.nginx.org> References: <08f897a740a60a52329b8e0aaa272d4c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211121313.38930.ne@vbart.ru> On Monday 12 November 2012 13:03:49 justin wrote: > I am trying to block all requests which do not come from my own server. A > quick read of the nginx wiki led me to the valid_referers directive.
I > implemented it like: > > server { > listen 80; > > server_name ~^(?<account>.+)\.my-domain\.io$; > > root /srv/www/accounts/$account/app; > > index index.php; > > access_log /var/log/nginx/accounts/$account/access.log; > error_log /var/log/nginx/accounts/error.log; > > include /etc/nginx/excludes.conf; > include /etc/nginx/expires.conf; > > location / { > valid_referers server_names not-my-domain.com; > if ($invalid_referer) { > return 403; > } > > location ~\.php { > try_files $uri =404; > fastcgi_index index.php; > fastcgi_intercept_errors on; > fastcgi_pass 127.0.0.1:3001; > include /etc/nginx/fastcgi_params; > fastcgi_param MY_DOMAIN_ACCOUNT $account; > } > } > > I purposefully put not-my-domain.com instead of my-domain.com to make sure > a 403 status code was returned. Unfortunately, it is not. I wrote a simple > html file with an iframe that grabs a php page from the server from a > different domain. This should be returning a 403 code, but it works. > > Any ideas? Thanks. > Your request to the php page is processed in "location ~\.php", which does not have any referrer constraints. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon Nov 12 09:19:23 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 12 Nov 2012 04:19:23 -0500 Subject: valid_referers directive not working correctly In-Reply-To: <201211121313.38930.ne@vbart.ru> References: <201211121313.38930.ne@vbart.ru> Message-ID: <6c61942b31063d4263d553bf2542e40c.NginxMailingListEnglish@forum.nginx.org> Ahh right, so basically I have to copy: valid_referers server_names not-my-domain.com; if ($invalid_referer) { return 403; } Into the php match block. Is there a way to do this without having the same exact code copied into both location blocks?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232722,232724#msg-232724 From igor at sysoev.ru Mon Nov 12 09:26:17 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 12 Nov 2012 13:26:17 +0400 Subject: valid_referers directive not working correctly In-Reply-To: <6c61942b31063d4263d553bf2542e40c.NginxMailingListEnglish@forum.nginx.org> References: <201211121313.38930.ne@vbart.ru> <6c61942b31063d4263d553bf2542e40c.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Nov 12, 2012, at 13:19 , justin wrote: > Ahh right, so basically I have to copy: > > valid_referers server_names not-my-domain.com; > if ($invalid_referer) { > return 403; > } > > Into the php match block. Is there a way to do this without having the same > exact code copied into both location blocks? You have to copy only if ($invalid_referer) { return 403; } The issue is that while most nginx directives are declarative and can be easily inherited, the "if", "rewrite", "set", and "return" are imperative directives. -- Igor Sysoev http://nginx.com/support.html From vbart at nginx.com Mon Nov 12 09:31:45 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 12 Nov 2012 13:31:45 +0400 Subject: valid_referers directive not working correctly In-Reply-To: <6c61942b31063d4263d553bf2542e40c.NginxMailingListEnglish@forum.nginx.org> References: <201211121313.38930.ne@vbart.ru> <6c61942b31063d4263d553bf2542e40c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211121331.45531.vbart@nginx.com> On Monday 12 November 2012 13:19:23 justin wrote: > Ahh right, so basically I have to copy: > > valid_referers server_names not-my-domain.com; > if ($invalid_referer) { > return 403; > } > > Into the php match block. Is there a way to do this without having the same > exact code copied into both location blocks? > You can put it on the server level.
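A minimal sketch of that server-level arrangement (domain names taken from the thread; listing `none` is an assumption on my part, so that direct requests carrying no Referer header are not rejected):

```nginx
server {
    listen       80;
    server_name  ~^(?<account>.+)\.my-domain\.io$;

    # Checked once during the rewrite phase at server level, so it
    # applies before either "location /" or "location ~\.php" is chosen.
    valid_referers none blocked server_names *.my-domain.io;
    if ($invalid_referer) {
        return 403;
    }

    # ... the existing "location /" and "location ~\.php" blocks go here ...
}
```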
Basically there are two types of rewrite rules: - "server" level - "location" specific For details, see: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Nov 12 09:46:49 2012 From: nginx-forum at nginx.us (justin) Date: Mon, 12 Nov 2012 04:46:49 -0500 Subject: valid_referers directive not working correctly In-Reply-To: <201211121331.45531.vbart@nginx.com> References: <201211121331.45531.vbart@nginx.com> Message-ID: <0c1105d0dce7ce3ba45d9ce7e977e9be.NginxMailingListEnglish@forum.nginx.org> Very strange: moving the directive into the server block now blocks everything, including requests from my own server. valid_referers server_names *.my-domain.io; if ($invalid_referer) { return 403; } Strange that when the code was copied twice, into the / location block and the php match block, it worked. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232722,232727#msg-232727 From nginx-forum at nginx.us Mon Nov 12 11:39:05 2012 From: nginx-forum at nginx.us (piyushbj) Date: Mon, 12 Nov 2012 06:39:05 -0500 Subject: nginx not showing client ip In-Reply-To: References: Message-ID: <7b4a33f3809d090a32e5081b333ef537.NginxMailingListEnglish@forum.nginx.org> Dear Jonathan, Thanks for your reply. Now my problem has been resolved with the help of the URL below: http://blogs.iis.net/anilr/archive/2009/03/03/client-ip-not-logged-on-content-server-when-using-arr.aspx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232641,232730#msg-232730 From vbart at nginx.com Mon Nov 12 14:47:46 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 12 Nov 2012 18:47:46 +0400 Subject: Nginx, the future of SPDY and End-of-Life for SPDY/2 In-Reply-To: References: Message-ID: <201211121847.46152.vbart@nginx.com> On Monday 12 November 2012 02:57:25 Aribe Hernandez wrote: [...]
> Back in August there was some talk on the spdy-dev Google Group about > when to EOL SPDY/2 and it was suggested that Google would drop SPDY/2 > from Chrome 23 in early November. Chrome 23 has since been released > and fortunately it still supports SPDY/2. > https://groups.google.com/forum/#!msg/spdy-dev/zvA6Ohqs9Ew/8kkBLYMniQoJ > https://groups.google.com/forum/?fromgroups=#!topic/spdy-dev/A0sCEnZBEcs Don't worry. If you read carefully the discussion referenced by the last link, you will find that "End-of-Life for SPDY/2" postponed to SPDY/4 plus some time. > The SPDY/3 spec was published as an IETF draft in February, 2012 and > since then support for SPDY/3 has shown up in major browsers (July, > 2012). Among non-browser software, Jetty (Java app server), HAproxy > (load balancing proxy) and the mod_spdy module for Apache all features > support for SPDY/3. Yes. And SPDY/3 has no noticeable improvements compared to SPDY/2, but instead it has some problems. See: http://japhr.blogspot.ru/2012/05/spdy3-flow-control-comparisons.html https://groups.google.com/d/msg/spdy-dev/JB_aQPNI7rw/10UFCLfeCxgJ > Work is currently underway on developing the next iteration of the spec, > SPDY/4. > > Nginx is worryingly missing from the list of software supporting > SPDY/3 (as well as general SPDY discussions). Was the SPDY/2 thing for > Nginx just a one shot thing or are there actual plans for the future > of SPDY in Nginx? The current plan is to integrate spdy implementation into the nginx code base as painless as possible in terms of code reuse and follow-up support. Work is going on, but that is not related to the patch. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From howachen at gmail.com Mon Nov 12 15:29:28 2012 From: howachen at gmail.com (howard chen) Date: Mon, 12 Nov 2012 23:29:28 +0800 Subject: having lot of waiting connection will cause high CPU usage? 
Message-ID: Hi, In one of my server, the nginx status return the following Active connections: 3595 server accepts handled requests 95329528 95329528 118118629 Reading: 334 Writing: 49 Waiting: 3212 There are a lot of "Waiting" connection, and the CPU load average is around 2, I am using the default Keep alive timeout of 65s. I am thinking what should be done in order to reduce the load average. (I am using SSL for the connection) Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Nov 12 18:26:43 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 12 Nov 2012 22:26:43 +0400 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: Message-ID: <201211122226.43165.vbart@nginx.com> On Friday 09 November 2012 22:08:47 CM Fields wrote: > [...] > I just wanted to report this issue in case someone else had the same > problem. I wish I had more information, but at this time I am not sure what > the client is sending to cause the hanging open sockets. If there is any > other information that will help or if a new patch needs testing please > tell me. > > Have a great weekend! Hello, thank you for the report. Could you please test the new revision of spdy patch: http://nginx.org/patches/spdy/patch.spdy-53.txt ? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nginx-forum at nginx.us Mon Nov 12 20:02:55 2012 From: nginx-forum at nginx.us (Infinitnet) Date: Mon, 12 Nov 2012 15:02:55 -0500 Subject: GeoIP country blocking - whitelist specific IPs Message-ID: <7f941fd35022613ad65470560a69b250.NginxMailingListEnglish@forum.nginx.org> Hello NGINX users, I'm facing a little issue with country bans over GeoIP. I'm using the following code within my server directive: if ($geoip_country_code ~ (BR|CN|KR|RU) ) { return 123; } 123 returns an error page informing the visitor that his country is blocked. 
Now let's say I've got some visitors from Russia who should still be able to access my website. How would I achieve this? Of course something like "allow 1.2.3.4;" doesn't work with the code above. Any suggestions? Thanks in advance! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232745,232745#msg-232745 From nginx-forum at nginx.us Tue Nov 13 05:55:31 2012 From: nginx-forum at nginx.us (Wouter van der Schagt) Date: Tue, 13 Nov 2012 00:55:31 -0500 Subject: Adding cachekey to log_format directive Message-ID: <16655e3cf88aca050a05ec8f407c421d.NginxMailingListEnglish@forum.nginx.org> Good morning, I'm trying to add the generated cache key of a proxied request to a log_format directive. From what I can tell, this variable is not normally available when logging requests, so I have to modify the proxy module in ngx_http_proxy_module.c. So far I've added "cachekey" to the typedef struct ngx_http_proxy_vars_t so that ngx_http_proxy_set_var can set it. I've added the declaration: static ngx_int_t ngx_http_proxy_cachekey_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); and in the corresponding function (which is basically a copy of ngx_http_proxy_host_variable), I do: v->len = ctx->vars.cachekey.len; v->data = ctx->vars.cachekey.data; I've also added: { ngx_string("proxy_cachekey"), NULL, ngx_http_proxy_cachekey_variable, 0, NGX_HTTP_VAR_CHANGEABLE|NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, to ngx_http_proxy_vars[] so that proxy_cachekey is available for use in the log_format directive. Finally, in ngx_http_proxy_set_var(ngx_url_t *u, ngx_http_proxy_vars_t *v) I'm guessing I have to set the key somewhere here: v->cachekey.len = ??; v->cachekey.data = ??; The question I have is: how do I get the generated cache key into the v.cachekey.data field? If I populate these variables with arbitrary data I can see it's being logged correctly, so I am confident I'm in the right place.
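A hedged side note on the cache-key question: if proxy_cache_key is left at its default of $scheme$proxy_host$request_uri, the pre-hash key string can be logged without touching C code by expanding the same variables in log_format. This is a sketch only; it logs the key text, not the MD5 digest nginx stores on disk, and it has to be kept in sync by hand with any customized proxy_cache_key:

```nginx
# Sketch: reproduce the default proxy cache key in the access log.
log_format cachekey '$remote_addr - [$time_local] "$request" '
                    'key="$scheme$proxy_host$request_uri" '
                    'status=$status cache=$upstream_cache_status';

access_log /var/log/nginx/cachekey.log cachekey;
```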
Any suggestion would be appreciated, Sincerely, - WS Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232747,232747#msg-232747 From nilshar at gmail.com Tue Nov 13 08:32:54 2012 From: nilshar at gmail.com (Nilshar) Date: Tue, 13 Nov 2012 09:32:54 +0100 Subject: Problem with filename encoding Message-ID: Hello list, I got an issue with a filename containing "strange" characters. It seems that nginx is not able to url_decode it correctly, and then get the right file. Yes, the filename is ugly : "Capture d’écran 2010-09-25 à 08.30.07.png" but apache is able to read it, and nginx is not : nginx strace : open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) apache strace : open("//images/Capture d\342\200\231\303\251cran 2010-09-25 \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 So it seems that nginx is using the url_encoded version of the filename, while apache does its own thing with it. On both apache and nginx, the access log says : "GET /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" both servers have the same locale settings, and I tried different charset configurations in nginx, but no luck.. Any idea how I can fix that without changing the filename (sadly, it's not possible :/) ? Thanks Nilshar -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Nov 13 08:38:33 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 13 Nov 2012 12:38:33 +0400 Subject: Problem with filename encoding In-Reply-To: References: Message-ID: On Nov 13, 2012, at 12:32 , Nilshar wrote: > Hello list, > > I got an issue with a filename containing "strange" characters. > It seems that nginx is not able to url_decode it correctly, and then get the right file. > > Yes, the filename is ugly : "Capture d’écran 2010-09-25 à
08.30.07.png" > but apache is able to read it, and nginx is not : > > nginx strace : > open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) > > apache strace : > open("//images/Capture d\342\200\231\303\251cran 2010-09-25 \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 > > So it seems that nginx is using the url_encoded version of the filename, while apache do it's own thing on it. > > On both apache and nginx, the access log says : "GET /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" > > both server have the same locales settings, and I tried different charset configuration into nginx, but no luck.. > > Any idea how I can fix that without changing the filename (sadly, it's not possible :/) ? The most probably there is a rewrite in configuration which changes URI to $request_uri. nginx escapes URI if no one interferes. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nilshar at gmail.com Tue Nov 13 08:41:35 2012 From: nilshar at gmail.com (Nilshar) Date: Tue, 13 Nov 2012 09:41:35 +0100 Subject: Problem with filename encoding In-Reply-To: References: Message-ID: On 13 November 2012 09:38, Igor Sysoev wrote: > On Nov 13, 2012, at 12:32 , Nilshar wrote: > > Hello list, > > I got an issue with a filename containing "strange" characters. > It seems that nginx is not able to url_decode correctly, and then get the > right file. > > Yes, the filename is ugly : "Capture d??cran 2010-09-25 ? 
08.30.07.png" > but apache is able to read it, and nginx is not : > > nginx strace : > open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", > O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) > > apache strace : > open("//images/Capture d\342\200\231\303\251cran 2010-09-25 \303\240 > 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 > > So it seems that nginx is using the url_encoded version of the filename, > while apache do it's own thing on it. > > On both apache and nginx, the access log says : "GET > /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" > > both server have the same locales settings, and I tried different charset > configuration into nginx, but no luck.. > > Any idea how I can fix that without changing the filename (sadly, it's not > possible :/) ? > > > The most probably there is a rewrite in configuration which changes URI to > $request_uri. > nginx escapes URI if no one interferes. > > > -- > Igor Sysoev > http://nginx.com/support.html > > Yes indeed, there is a rewrite ! got a tip on how to fix that ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Nov 13 09:01:03 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 13 Nov 2012 13:01:03 +0400 Subject: Problem with filename encoding In-Reply-To: References: Message-ID: <7052C03E-6549-486C-9ABF-1B9A041D5A0E@sysoev.ru> On Nov 13, 2012, at 12:41 , Nilshar wrote: > On 13 November 2012 09:38, Igor Sysoev wrote: > On Nov 13, 2012, at 12:32 , Nilshar wrote: > >> Hello list, >> >> I got an issue with a filename containing "strange" characters. >> It seems that nginx is not able to url_decode correctly, and then get the right file. >> >> Yes, the filename is ugly : "Capture d??cran 2010-09-25 ? 
08.30.07.png" >> but apache is able to read it, and nginx is not : >> >> nginx strace : >> open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) >> >> apache strace : >> open("//images/Capture d\342\200\231\303\251cran 2010-09-25 \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 >> >> So it seems that nginx is using the url_encoded version of the filename, while apache do it's own thing on it. >> >> On both apache and nginx, the access log says : "GET /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" >> >> both server have the same locales settings, and I tried different charset configuration into nginx, but no luck.. >> >> Any idea how I can fix that without changing the filename (sadly, it's not possible :/) ? > > > The most probably there is a rewrite in configuration which changes URI to $request_uri. > nginx escapes URI if no one interferes. > > > Yes indeed, there is a rewrite ! > got a tip on how to fix that ? The general rule is to not use rewrites at all: they make configuration to a mess. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From nilshar at gmail.com Tue Nov 13 09:22:53 2012 From: nilshar at gmail.com (Nilshar) Date: Tue, 13 Nov 2012 10:22:53 +0100 Subject: Problem with filename encoding In-Reply-To: <7052C03E-6549-486C-9ABF-1B9A041D5A0E@sysoev.ru> References: <7052C03E-6549-486C-9ABF-1B9A041D5A0E@sysoev.ru> Message-ID: On 13 November 2012 10:01, Igor Sysoev wrote: > On Nov 13, 2012, at 12:41 , Nilshar wrote: > > On 13 November 2012 09:38, Igor Sysoev wrote: > >> On Nov 13, 2012, at 12:32 , Nilshar wrote: >> >> Hello list, >> >> I got an issue with a filename containing "strange" characters. >> It seems that nginx is not able to url_decode correctly, and then get the >> right file. >> >> Yes, the filename is ugly : "Capture d??cran 2010-09-25 ? 
08.30.07.png" >> but apache is able to read it, and nginx is not : >> >> nginx strace : >> open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", >> O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) >> >> apache strace : >> open("//images/Capture d\342\200\231\303\251cran 2010-09-25 >> \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 >> >> So it seems that nginx is using the url_encoded version of the filename, >> while apache do it's own thing on it. >> >> On both apache and nginx, the access log says : "GET >> /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" >> >> both server have the same locales settings, and I tried different charset >> configuration into nginx, but no luck.. >> >> Any idea how I can fix that without changing the filename (sadly, it's >> not possible :/) ? >> >> >> The most probably there is a rewrite in configuration which changes URI >> to $request_uri. >> nginx escapes URI if no one interferes. >> >> > Yes indeed, there is a rewrite ! > got a tip on how to fix that ? > > > The general rule is to not use rewrites at all: they make configuration to > a mess. > > > > -- > Igor Sysoev > http://nginx.com/support.html > Hum... well ok, if I read it right, it is recommanded to use try_files instead right ? Problem is that it seems that try_files is not allowed inside a "if"... Well.. maybe someone will be able to point me a better conf, here what I'm trying to do : location ~ ^/(media|files|list|album|images)/ { root /; if ($host ~* "^(.)(.)(.*)\.example.com$") { set $dir1 $1; set $dir2 $2; set $dir3 $3; rewrite ^ // www.example.com/$dir1/$dir2/$dir1$dir2$dir3$request_uri? break; } } So I should be able to remove the if so try_files might be happy, but I do not know how to set the 3 dir w/o the if statement.. I tried several things, so far no luck. Nilshar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor at sysoev.ru Tue Nov 13 09:41:37 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 13 Nov 2012 13:41:37 +0400 Subject: Problem with filename encoding In-Reply-To: References: <7052C03E-6549-486C-9ABF-1B9A041D5A0E@sysoev.ru> Message-ID: <20121113094137.GA15702@nginx.com> On Tue, Nov 13, 2012 at 10:22:53AM +0100, Nilshar wrote: > On 13 November 2012 10:01, Igor Sysoev wrote: > > > On Nov 13, 2012, at 12:41 , Nilshar wrote: > > > > On 13 November 2012 09:38, Igor Sysoev wrote: > > > >> On Nov 13, 2012, at 12:32 , Nilshar wrote: > >> > >> Hello list, > >> > >> I got an issue with a filename containing "strange" characters. > >> It seems that nginx is not able to url_decode correctly, and then get the > >> right file. > >> > >> Yes, the filename is ugly : "Capture d??cran 2010-09-25 ? 08.30.07.png" > >> but apache is able to read it, and nginx is not : > >> > >> nginx strace : > >> open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", > >> O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) > >> > >> apache strace : > >> open("//images/Capture d\342\200\231\303\251cran 2010-09-25 > >> \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 > >> > >> So it seems that nginx is using the url_encoded version of the filename, > >> while apache do it's own thing on it. > >> > >> On both apache and nginx, the access log says : "GET > >> /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" > >> > >> both server have the same locales settings, and I tried different charset > >> configuration into nginx, but no luck.. > >> > >> Any idea how I can fix that without changing the filename (sadly, it's > >> not possible :/) ? > >> > >> > >> The most probably there is a rewrite in configuration which changes URI > >> to $request_uri. > >> nginx escapes URI if no one interferes. > >> > >> > > Yes indeed, there is a rewrite ! > > got a tip on how to fix that ? 
> > > > > > The general rule is to not use rewrites at all: they make configuration to > > a mess. > > Hum... well ok, if I read it right, it is recommended to use try_files > instead right ? > Problem is that it seems that try_files is not allowed inside a "if"... > > Well.. maybe someone will be able to point me a better conf, here is what I'm > trying to do : > > location ~ ^/(media|files|list|album|images)/ { > root /; > if ($host ~* "^(.)(.)(.*)\.example.com$") { > set $dir1 $1; > set $dir2 $2; > set $dir3 $3; > rewrite ^ // > www.example.com/$dir1/$dir2/$dir1$dir2$dir3$request_uri? break; > } > } > > So I should be able to remove the if so try_files might be happy, but I do > not know how to set the 3 dir w/o the if statement.. > I tried several things, so far no luck. server { server_name ~^(?<dir1>.)(?<dir2>.)(?<dir3>.*)\.example\.com$; root /path/www.example.com/$dir1/$dir2/$dir1$dir2$dir3; location /media/ { } location /files/ { } location /list/ { } location /album/ { } location /images/ { } location / { return 404; } } -- Igor Sysoev http://nginx.com/support.html From nilshar at gmail.com Tue Nov 13 09:48:37 2012 From: nilshar at gmail.com (Nilshar) Date: Tue, 13 Nov 2012 10:48:37 +0100 Subject: Problem with filename encoding In-Reply-To: <20121113094137.GA15702@nginx.com> References: <7052C03E-6549-486C-9ABF-1B9A041D5A0E@sysoev.ru> <20121113094137.GA15702@nginx.com> Message-ID: On 13 November 2012 10:41, Igor Sysoev wrote: > On Tue, Nov 13, 2012 at 10:22:53AM +0100, Nilshar wrote: > > On 13 November 2012 10:01, Igor Sysoev wrote: > > > > > On Nov 13, 2012, at 12:41 , Nilshar wrote: > > > > > > On 13 November 2012 09:38, Igor Sysoev wrote: > > > > > >> On Nov 13, 2012, at 12:32 , Nilshar wrote: > > >> > > >> Hello list, > > >> > > >> I got an issue with a filename containing "strange" characters. > > >> It seems that nginx is not able to url_decode correctly, and then get > the > > >> right file. > > >> > > >> Yes, the filename is ugly : "Capture d’écran 2010-09-25 à 
> 08.30.07.png" > > >> but apache is able to read it, and nginx is not : > > >> > > >> nginx strace : > > >> > open("//images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png", > > >> O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) > > >> > > >> apache strace : > > >> open("//images/Capture d\342\200\231\303\251cran 2010-09-25 > > >> \303\240 08.30.07.png", O_RDONLY|O_CLOEXEC) = 100 > > >> > > >> So it seems that nginx is using the url_encoded version of the > filename, > > >> while apache do it's own thing on it. > > >> > > >> On both apache and nginx, the access log says : "GET > > >> > /images/Capture%20d%E2%80%99%C3%A9cran%202010-09-25%20%C3%A0%2008.30.07.png" > > >> > > >> both server have the same locales settings, and I tried different > charset > > >> configuration into nginx, but no luck.. > > >> > > >> Any idea how I can fix that without changing the filename (sadly, it's > > >> not possible :/) ? > > >> > > >> > > >> The most probably there is a rewrite in configuration which changes > URI > > >> to $request_uri. > > >> nginx escapes URI if no one interferes. > > >> > > >> > > > Yes indeed, there is a rewrite ! > > > got a tip on how to fix that ? > > > > > > > > > The general rule is to not use rewrites at all: they make > configuration to > > > a mess. > > > > Hum... well ok, if I read it right, it is recommanded to use try_files > > instead right ? > > Problem is that it seems that try_files is not allowed inside a "if"... > > > > Well.. maybe someone will be able to point me a better conf, here what > I'm > > trying to do : > > > > location ~ ^/(media|files|list|album|images)/ { > > root /; > > if ($host ~* "^(.)(.)(.*)\.example.com$") { > > set $dir1 $1; > > set $dir2 $2; > > set $dir3 $3; > > rewrite ^ // > > www.example.com/$dir1/$dir2/$dir1$dir2$dir3$request_uri? 
break; > > } > > } > > > > So I should be able to remove the if so try_files might be happy, but I > do > > not know how to set the 3 dir w/o the if statement.. > > I tried several things, so far no luck. > > server { > server_name ~^(?<dir1>.)(?<dir2>.)(?<dir3>.*)\.example\.com$; > > root /path/www.example.com/$dir1/$dir2/$dir1$dir2$dir3; > > location /media/ { } > location /files/ { } > location /list/ { } > location /album/ { } > location /images/ { } > > location / { return 404; } > } > > > -- > Igor Sysoev > http://nginx.com/support.html > Well... simple, clear and... works perfectly ! very hard to switch from an apache logic to a nginx logic, but I'll improve :p Thanks a lot Igor ! Nilshar. -------------- next part -------------- An HTML attachment was scrubbed... URL: From laursen at oxygen.net Tue Nov 13 11:02:36 2012 From: laursen at oxygen.net (Lasse Laursen) Date: Tue, 13 Nov 2012 12:02:36 +0100 Subject: GeoIP country blocking - whitelist specific IPs In-Reply-To: <7f941fd35022613ad65470560a69b250.NginxMailingListEnglish@forum.nginx.org> References: <7f941fd35022613ad65470560a69b250.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8041017E-390B-4DB3-BF22-702270AE242E@oxygen.net> Hi We use something similar to this: geoip_country /path/to/GeoIP.dat; geo $allowed_ranges { default 0; 1.2.3.0/24 1; 10.0.0.0/8 1; 127.0.0.1 1; } map $geoip_country_code $blocked_country { default 1; A2 0; # Satellite Provider O1 0; # Other Country AD 1; # Andorra AP 1; # Asia/Pacific Region AQ 1; # Antarctica } if ($blocked_country) { set $deny_request 1; } if ($allowed_ranges) { set $deny_request 0; } if ($deny_request) { # Do whatever you want to do here ... } Hope that it makes sense? :) L. On Nov 12, 2012, at 9:02 PM, Infinitnet wrote: > Hello NGINX users, > > I'm facing a little issue with country bans over GeoIP. 
I'm using the > following code within my server directive: > > if ($geoip_country_code ~ (BR|CN|KR|RU) ) { > return 123; > } > > 123 returns an error page informing the visitor that his country is blocked. > Now let's say I've got some visitors from Russia, who should still be able > to access my website. How would I achieve this? Of course something like > "allow 1.2.3.4;" doesn't work with the code above. Any suggestions? > > Thanks in advance! > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232745,232745#msg-232745 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Nov 13 11:21:10 2012 From: nginx-forum at nginx.us (Infinitnet) Date: Tue, 13 Nov 2012 06:21:10 -0500 Subject: GeoIP country blocking - whitelist specific IPs In-Reply-To: <8041017E-390B-4DB3-BF22-702270AE242E@oxygen.net> References: <8041017E-390B-4DB3-BF22-702270AE242E@oxygen.net> Message-ID: Hello, thanks for your reply! Your solution does indeed make sense and I've been using something similar before. Just thought there might be something that wouldn't require rewriting my current syntax, such as: if ($geoip_country_code ~ (BR|CN|KR|RU) ) { if ($remote_addr = (1.2.3.4|1.2.3.5|1.2.3.6) ) { break; } return 123; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232745,232762#msg-232762 From nginx-forum at nginx.us Tue Nov 13 11:23:50 2012 From: nginx-forum at nginx.us (Infinitnet) Date: Tue, 13 Nov 2012 06:23:50 -0500 Subject: GeoIP country blocking - whitelist specific IPs In-Reply-To: References: <8041017E-390B-4DB3-BF22-702270AE242E@oxygen.net> Message-ID: <6a336608a5c8c84114812ac32298614c.NginxMailingListEnglish@forum.nginx.org> ...or some elif function, but you get the idea. 
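A note on the nested "if" sketched above: nginx configuration does not allow one "if" inside another, so that form cannot work as written. The usual workaround is to fold both conditions into a single variable, for example with geo plus map in the spirit of the earlier reply (a sketch only; the addresses and the 123 return code simply follow the examples in this thread):

```nginx
geo $whitelisted {
    default  0;
    1.2.3.4  1;     # illustrative whitelist entries
    1.2.3.5  1;
    1.2.3.6  1;
}

# Deny only when the country matches AND the client is not whitelisted
map "$geoip_country_code:$whitelisted" $deny {
    default             0;
    ~^(BR|CN|KR|RU):0$  1;
}

server {
    if ($deny) { return 123; }
}
```

The geo and map blocks belong at http level; only the final "if" sits inside the server block.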
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232745,232764#msg-232764 From mdounin at mdounin.ru Tue Nov 13 13:57:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Nov 2012 17:57:35 +0400 Subject: nginx-1.2.5 Message-ID: <20121113135735.GH40452@mdounin.ru> Changes with nginx 1.2.5 13 Nov 2012 *) Feature: the "optional_no_ca" parameter of the "ssl_verify_client" directive. Thanks to Mike Kazantsev and Eric O'Connor. *) Feature: the $bytes_sent, $connection, and $connection_requests variables can now be used not only in the "log_format" directive. Thanks to Benjamin Gr?ssing. *) Feature: resolver now randomly rotates addresses returned from cache. Thanks to Anton Jouline. *) Feature: the "auto" parameter of the "worker_processes" directive. *) Bugfix: "cache file ... has md5 collision" alert. *) Bugfix: OpenSSL 0.9.7 compatibility. -- Maxim Dounin http://nginx.com/support.html From cmfileds at gmail.com Tue Nov 13 16:40:37 2012 From: cmfileds at gmail.com (CM Fields) Date: Tue, 13 Nov 2012 11:40:37 -0500 Subject: SPDY sockets staying open indefinitely In-Reply-To: <201211122226.43165.vbart@nginx.com> References: <201211122226.43165.vbart@nginx.com> Message-ID: Valentin, Thanks for the patch. I put the new code in place this morning. The server will need to run for a few days up to a week before I might see the possibility of a lingering socket from a bad client. I will report what I find. Thank you very much. On Mon, Nov 12, 2012 at 1:26 PM, Valentin V. Bartenev wrote: > On Friday 09 November 2012 22:08:47 CM Fields wrote: > > [...] > > I just wanted to report this issue in case someone else had the same > > problem. I wish I had more information, but at this time I am not sure > what > > the client is sending to cause the hanging open sockets. If there is any > > other information that will help or if a new patch needs testing please > > tell me. > > > > Have a great weekend! > > Hello, thank you for the report. 
> > Could you please test the new revision of spdy patch: > http://nginx.org/patches/spdy/patch.spdy-53.txt ? > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Samad at yieldbroker.com Tue Nov 13 19:34:03 2012 From: Alex.Samad at yieldbroker.com (Alex Samad - Yieldbroker) Date: Tue, 13 Nov 2012 19:34:03 +0000 Subject: nginx-1.2.5 In-Reply-To: <20121113135735.GH40452@mdounin.ru> References: <20121113135735.GH40452@mdounin.ru> Message-ID: Hi Have all the ssl features from the development branch been brought down ? Specifically the option of having a ca hint for client certs and a different chain of ca for verifying ? Alex > -----Original Message----- > From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On > Behalf Of Maxim Dounin > Sent: Wednesday, 14 November 2012 12:58 AM > To: nginx at nginx.org > Subject: nginx-1.2.5 > > Changes with nginx 1.2.5 13 Nov 2012 > > *) Feature: the "optional_no_ca" parameter of the "ssl_verify_client" > directive. > Thanks to Mike Kazantsev and Eric O'Connor. > > *) Feature: the $bytes_sent, $connection, and $connection_requests > variables can now be used not only in the "log_format" directive. > Thanks to Benjamin Gr?ssing. > > *) Feature: resolver now randomly rotates addresses returned from cache. > Thanks to Anton Jouline. > > *) Feature: the "auto" parameter of the "worker_processes" directive. > > *) Bugfix: "cache file ... has md5 collision" alert. > > *) Bugfix: OpenSSL 0.9.7 compatibility. 
> > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kworthington at gmail.com Tue Nov 13 19:46:43 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 13 Nov 2012 14:46:43 -0500 Subject: nginx-1.2.5 In-Reply-To: <20121113135735.GH40452@mdounin.ru> References: <20121113135735.GH40452@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.2.5 For Windows http://goo.gl/8POBa (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Nov 13, 2012 at 8:57 AM, Maxim Dounin wrote: > Changes with nginx 1.2.5 13 Nov > 2012 > > *) Feature: the "optional_no_ca" parameter of the "ssl_verify_client" > directive. > Thanks to Mike Kazantsev and Eric O'Connor. > > *) Feature: the $bytes_sent, $connection, and $connection_requests > variables can now be used not only in the "log_format" directive. > Thanks to Benjamin Gr?ssing. > > *) Feature: resolver now randomly rotates addresses returned from > cache. > Thanks to Anton Jouline. > > *) Feature: the "auto" parameter of the "worker_processes" directive. > > *) Bugfix: "cache file ... has md5 collision" alert. > > *) Bugfix: OpenSSL 0.9.7 compatibility. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
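The features listed in the 1.2.5 announcement above combine into a configuration roughly like the following sketch (directive names are taken from the change log; the paths, port, and log format are illustrative only):

```nginx
worker_processes auto;   # new: sizes the worker pool to the CPU count

http {
    # $bytes_sent, $connection and $connection_requests are no longer
    # restricted to log_format contexts; a custom access log is still
    # the typical use:
    log_format conn '$remote_addr $connection/$connection_requests $bytes_sent';

    server {
        listen 443 ssl;
        access_log /var/log/nginx/conn.log conn;

        ssl_certificate        /etc/nginx/cert.pem;   # illustrative paths
        ssl_certificate_key    /etc/nginx/cert.key;
        ssl_client_certificate /etc/nginx/ca.pem;

        # new: ask for a client certificate but do not fail the handshake
        # when it cannot be verified against the configured CA
        ssl_verify_client optional_no_ca;
    }
}
```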
URL: From mdounin at mdounin.ru Wed Nov 14 02:44:32 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Nov 2012 06:44:32 +0400 Subject: nginx-1.2.5 In-Reply-To: References: <20121113135735.GH40452@mdounin.ru> Message-ID: <20121114024432.GP40452@mdounin.ru> Hello! On Tue, Nov 13, 2012 at 07:34:03PM +0000, Alex Samad - Yieldbroker wrote: > Have all the ssl features from the development branch been > brought down ? > > Specifically the option of having a ca hint for client certs and > a different chain of ca for verifying ? No, it's still only in 1.3.x branch. -- Maxim Dounin http://nginx.com/support.html From koalay at gmail.com Wed Nov 14 04:21:00 2012 From: koalay at gmail.com (Shu Hung (Koala)) Date: Wed, 14 Nov 2012 12:21:00 +0800 Subject: Need help on WebDAV + GIT server setup Message-ID: Hi all, I wish to install nginx as a WebDAV server to serve my GIT repos. But I failed. I've installed Nginx 1.2.5 to my server with both http_dav_module and nginx-dav-ext-module . Then I configure the server like this: server { listen 80; server_name foobar.com; charset utf-8; dav_methods PUT DELETE MKCOL COPY MOVE; dav_ext_methods PROPFIND OPTIONS; # turn on auth_basic auth_basic "Private area"; auth_basic_user_file "/var/gitrepos/.htpasswd"; create_full_put_path on; dav_access group:rw all:r; location / { root /var/gitrepos; index index.html index.htm; autoindex on; } location ~ /\.ht { deny all; } } The server start successfully. And I've created some repo on the server side. Then when I tried to push something to the server, git reports an error: "error: no DAV locking support on http://user at foobar.com/test-repo/" And when I try to clone a repo that I created and pushed earlier on server, I get this error: "warning: You appear to have cloned an empty repository" I'm confused. Please give me some suggestion. Thanks! Koala -------------- next part -------------- An HTML attachment was scrubbed... 
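On the WebDAV/git question above: git's dumb-HTTP transport reads a repository as plain static files (it needs an up-to-date `info/refs`, which `git update-server-info` regenerates, typically from a `post-update` hook), but pushing over dumb HTTP additionally requires the DAV LOCK/UNLOCK methods, which ngx_http_dav_module does not implement; that is what the "no DAV locking support" error is reporting. A read-only sketch (paths follow the original post; pushes would have to go over SSH or git-http-backend instead):

```nginx
server {
    listen 80;
    server_name foobar.com;

    # Clones and fetches work as plain file downloads, provided each
    # repository keeps info/refs current via "git update-server-info".
    location / {
        root /var/gitrepos;
        autoindex on;
    }

    # Pushing cannot work here: git's dumb-HTTP push needs DAV
    # LOCK/UNLOCK, which nginx's dav module does not provide.
}
```

The "cloned an empty repository" warning in the original post is consistent with this: the earlier pushes never landed, so the repository really was empty.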
URL: From nginx-forum at nginx.us Wed Nov 14 11:17:37 2012 From: nginx-forum at nginx.us (kalasnjikov) Date: Wed, 14 Nov 2012 06:17:37 -0500 Subject: http_flv_module not working, any idea please In-Reply-To: <790499b9e77ed3d8b16dd5e6cd2e099b.NginxMailingListEnglish@forum.nginx.org> References: <790499b9e77ed3d8b16dd5e6cd2e099b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <12ad4f2f499d0be81baf854f3d290136.NginxMailingListEnglish@forum.nginx.org> Same issue with me on Apache+Nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205650,232792#msg-232792 From nginx-forum at nginx.us Wed Nov 14 14:31:28 2012 From: nginx-forum at nginx.us (gt420hp) Date: Wed, 14 Nov 2012 09:31:28 -0500 Subject: Can NGinx replace Varnish Message-ID: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> We are using Varnish in front of 3 load balanced web servers running apache. We had migrated from one hosting platform where we had 1 app server and 1 database server using Varnish (Drupal 6.x) and had no issues. Now that we are running in a load balanced environment (3 load balanced apache web servers, a Varnish server, and 1 database server) we are seeing multiple examples of caching issues. (Pages not displaying correctly ...style issues, data input staying cached and used on another page, etc). We think we can just replace the Varnish server and use an NGinx server. I don't want to necessarily remove all the apache servers, but we have to get this caching issue corrected.... any thoughts...? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232796,232796#msg-232796 From liulantao at gmail.com Wed Nov 14 15:06:05 2012 From: liulantao at gmail.com (Liu Lantao) Date: Wed, 14 Nov 2012 23:06:05 +0800 Subject: Caucho Resin: faster than nginx? In-Reply-To: References: Message-ID: We are running an nginx benchmark on a 10GbE network. For an empty page, we get about 700k rps from nginx, compared with about 100k rps from Resin Pro. 
In caucho's test, they use i7 4 core / 8 HT, 2.8 GHZ, 8Meg Cache, 8 GB RAM, and I use duo intel e5645. I think the result can be improved through some tuning. We tuned server configuration and nginx configuration, but didn't tune much on resin. We didn't find any configuration of caucho's testing, neither nginx nor resin. so i wonder how to make the rps of resin go above 100k? On Sat, Aug 18, 2012 at 3:26 PM, Mike Dupont wrote: > Resin Pro 4.0.29, so whats the point? We are talking about open source > software here, no? > mike > > On Sat, Aug 18, 2012 at 6:39 AM, Adam Zell wrote: > > More details: > > > http://blog.caucho.com/2012/07/05/nginx-120-versus-resin-4029-performance-tests/ > > . > > > > On Fri, Aug 17, 2012 at 10:14 PM, Mike Dupont > > wrote: > >> > >> which version of resin did they use, the open source or pro version? > >> mike > >> > >> On Fri, Aug 17, 2012 at 11:18 PM, Adam Zell wrote: > >> > FYI: > >> > > >> > > http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/ > >> > > >> > " Using industry standard tool and methodology, Resin Pro web server > was > >> > put > >> > to the test versus Nginx, a popular web server with a reputation for > >> > efficiency and performance. Nginx is known to be faster and more > >> > reliable > >> > under load than the popular Apache HTTPD. Benchmark tests between > Resin > >> > and > >> > Nginx yielded competitive figures, with Resin leading with fewer > errors > >> > and > >> > faster response times. In numerous and varying tests, Resin handled > 20% > >> > to > >> > 25% more load while still outperforming Nginx. In particular, Resin > was > >> > able > >> > to sustain fast response times under extremely heavy load while Nginx > >> > performance degraded. 
" > >> > > >> > -- > >> > Adam > >> > zellster at gmail.com > >> > > >> > _______________________________________________ > >> > nginx mailing list > >> > nginx at nginx.org > >> > http://mailman.nginx.org/mailman/listinfo/nginx > >> > >> > >> > >> -- > >> James Michael DuPont > >> Member of Free Libre Open Source Software Kosova http://flossk.org > >> Saving wikipedia(tm) articles from deletion > >> http://SpeedyDeletion.wikia.com > >> Contributor FOSM, the CC-BY-SA map of the world http://fosm.org > >> Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > -- > > Adam > > zellster at gmail.com > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > James Michael DuPont > Member of Free Libre Open Source Software Kosova http://flossk.org > Saving wikipedia(tm) articles from deletion > http://SpeedyDeletion.wikia.com > Contributor FOSM, the CC-BY-SA map of the world http://fosm.org > Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Liu Lantao EMAIL: liulantao ( at ) gmail ( dot ) com ; WEBSITE: http://www.liulantao.com/portal . -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From piotr.bartosiewicz at firma.gg.pl Wed Nov 14 16:42:49 2012 From: piotr.bartosiewicz at firma.gg.pl (Piotr Bartosiewicz) Date: Wed, 14 Nov 2012 17:42:49 +0100 Subject: Chunked transfer encoding problem Message-ID: <50A3CA09.1080709@firma.gg.pl> Hi, My nginx (1.2.4) config looks like this (relevant part): server { listen 8888; location / { proxy_http_version 1.1; proxy_pass http://localhost:8080; } } The backend server handles GET requests and responds with a large body. The response is generated and sent on the fly, so the content length is not known at the beginning. In the normal case everything works fine. But sometimes the server catches an exception after the response headers were sent. I've found that there is a commonly used solution to inform a client about an incomplete response: use Transfer-Encoding chunked and close the socket without sending the last (0-length) chunk. Unfortunately nginx appends the termination chunk even when the backend server does not (both the nginx and backend connections are http/1.1 and use chunked encoding). Is this expected behavior, a bug, or is there maybe some option to turn this off? Regards Piotr Bartosiewicz From mdounin at mdounin.ru Wed Nov 14 17:07:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Nov 2012 21:07:35 +0400 Subject: Chunked transfer encoding problem In-Reply-To: <50A3CA09.1080709@firma.gg.pl> References: <50A3CA09.1080709@firma.gg.pl> Message-ID: <20121114170735.GZ40452@mdounin.ru> Hello! On Wed, Nov 14, 2012 at 05:42:49PM +0100, Piotr Bartosiewicz wrote: > Hi, > > My nginx (1.2.4) config looks like this (relevant part): > > server { > listen 8888; > > location / { > proxy_http_version 1.1; > proxy_pass http://localhost:8080; > } > } > > The backend server handles GET requests and responds with a large body. > The response is generated and sent on the fly, so the content length is not > known at the beginning. > In the normal case everything works fine. > > But sometimes the server catches an exception after the response headers > were sent. 
> I've found that there is a commonly used solution to inform a client > about an incomplete response: > use Transfer-Encoding chunked and close the socket without sending the > last (0-length) chunk. > Unfortunately nginx appends the termination chunk even when the backend > server does not > (both the nginx and backend connections are http/1.1 and use chunked encoding). > > Is this expected behavior, a bug, or is there maybe some option to turn > this off? This is sort of a known bug. Fixing it would require a relatively large cleanup of the upstream module. -- Maxim Dounin http://nginx.com/support.html From appa at perusio.net Wed Nov 14 17:39:41 2012 From: appa at perusio.net (António P. P. Almeida) Date: Wed, 14 Nov 2012 18:39:41 +0100 Subject: Can NGinx replace Varnish In-Reply-To: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> References: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87bof0f5ia.wl%appa@perusio.net> On 14 Nov 2012 15h31 CET, nginx-forum at nginx.us wrote: > We are using Varnish in front of 3 load balanced web servers running > apache. We had migrated from one hosting platform where we had 1 > app server and 1 database server using Varnish (Drupal 6.x) and had > no issues. Now that we are running in a load balanced environment > (3 load balanced apache web servers, a Varnish server, and 1 > database server) we are seeing multiple examples of caching > issues. (Pages not displaying correctly ...style issues, data input > staying cached and used on another page, etc). You can drop Varnish from the picture if something like microcaching suits you or you use ngx_cache_purge with the purge module. It depends on whether you have an active invalidation strategy or not. Either way Nginx can replace Varnish and also work as a load balancer. So you'll have a simpler stack. > We think we can just replace the Varnish server and use an NGinx > server. 
I don't want to necessarily remove all the apache servers, > but we have to get this cacheing issue corrected.... > > any thoughts...? Yep. See above. For Drupal related Nginx issues there's a GDO group: http://groups.drupal.org/nginx if want to delve deeper into the issue. --- appa From francis at daoine.org Wed Nov 14 18:32:53 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 14 Nov 2012 18:32:53 +0000 Subject: Can NGinx replace Varnish In-Reply-To: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> References: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121114183253.GI24351@craic.sysops.org> On Wed, Nov 14, 2012 at 09:31:28AM -0500, gt420hp wrote: Hi there, > we are seeing mulitple > examples of cacheing issues. (Pages not displaying correctly ...style > issues, data input staying cached and used on another page, etc). > > We think we can just replace the Varnish server and use a NGinx server. I > don't want to necessarily remove all the apache servers, but we have to get > this cacheing issue corrected.... If the caching issues are because your backend servers are configured incorrectly, merely replacing Varnish with nginx is unlikely to fix everything. If they are because your Varnish is configured incorrectly, then replacing an incorrectly-configured Varnish with a correctly-configured nginx probably will help. But replacing it with a correctly-configured Varnish would probably also help. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Nov 14 21:09:44 2012 From: nginx-forum at nginx.us (shmapty) Date: Wed, 14 Nov 2012 16:09:44 -0500 Subject: proxy_cache_valid for zero seconds Message-ID: Greetings, I am trying to configure nginx proxy_cache so that it stores a cached copy of a HTTP response, but serves from cache *only* under the conditions defined by proxy_cache_use_stale. 
I have tried something like this without success: proxy_cache_valid 200 204 301 302 0s; proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_504; "0s" appears to avoid caching completely. "1s" stores a cached copy, but presumably serves from cache for one second. I am trying to serve from cache only when the upstream errs. Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232815,232815#msg-232815 From nginx-forum at nginx.us Wed Nov 14 23:26:55 2012 From: nginx-forum at nginx.us (Cancer) Date: Wed, 14 Nov 2012 18:26:55 -0500 Subject: Connection reset by peer on first request Message-ID: Hi, I'm using Nginx with php-cgi. A problem arose recently where if you have not used my site for a few minutes and then go to it, the first request is always 'connection reset by peer'. If you refresh, everything functions normally until you leave for a few minutes and go to another link. It happens 100% of the time. Does anyone know what could be the problem? All I get in error log with debug on are these messages: 2012/11/14 17:25:46 [info] 3454#0: *2516 client prematurely closed connection while reading client request line, client: *, server: domain.com Also, these coincide with the 400 bad request errors in access.log. I have tried restart dns server, nginx, php-cgi, etc, but to no avail. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232817,232817#msg-232817 From tony at secondrise.com Thu Nov 15 00:00:35 2012 From: tony at secondrise.com (Tony Curwen) Date: Wed, 14 Nov 2012 20:00:35 -0400 Subject: unsubscribe In-Reply-To: References: Message-ID: <119DA831-1CF1-45F5-B3D2-9354138316FE@secondrise.com> unsubscribe
http://SpeedyDeletion.wikia.com >> Contributor FOSM, the CC-BY-SA map of the world http://fosm.org >> Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Liu Lantao > EMAIL: liulantao ( at ) gmail ( dot ) com ; > WEBSITE: http://www.liulantao.com/portal . > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 3 > Date: Wed, 14 Nov 2012 17:42:49 +0100 > From: Piotr Bartosiewicz > To: nginx at nginx.org > Subject: Chunked transfer encoding problem > Message-ID: <50A3CA09.1080709 at firma.gg.pl> > Content-Type: text/plain; charset=UTF-8; format=flowed > > Hi, > > My nginx (1.2.4) config looks like this (relevant part): > > server { > listen 8888; > > location / { > proxy_http_version 1.1; > proxy_pass http://localhost:8080; > } > } > > The backend server handles GET requests and responds with a large body. > The response is generated and sent on the fly, so the content-length is not > known at the beginning. > In the normal case everything works fine. > > But sometimes the server catches an exception after the response headers were > sent. > I've found that there is a commonly used solution to inform a client > about an incomplete response: > use Transfer-Encoding chunked and close the socket without sending the last > (0-length) chunk. > Unfortunately nginx appends the termination chunk even when the backend > server does not > (both the nginx and backend connections are http/1.1 and use chunked encoding). > > Is this expected behavior, a bug, or maybe there is some option to turn > this off?
> > Regards > Piotr Bartosiewicz > > > > ------------------------------ > > Message: 4 > Date: Wed, 14 Nov 2012 21:07:35 +0400 > From: Maxim Dounin > To: nginx at nginx.org > Subject: Re: Chunked transfer encoding problem > Message-ID: <20121114170735.GZ40452 at mdounin.ru> > Content-Type: text/plain; charset=us-ascii > > Hello! > > On Wed, Nov 14, 2012 at 05:42:49PM +0100, Piotr Bartosiewicz wrote: > >> Hi, >> >> My nginx (1.2.4) config looks like this (relevant part): >> >> server { >> listen 8888; >> >> location / { >> proxy_http_version 1.1; >> proxy_pass http://localhost:8080; >> } >> } >> >> The backend server handles GET requests and responds with a large body. >> The response is generated and sent on the fly, so the content-length is not >> known at the beginning. >> In the normal case everything works fine. >> >> But sometimes the server catches an exception after the response headers >> were sent. >> I've found that there is a commonly used solution to inform a client >> about an incomplete response: >> use Transfer-Encoding chunked and close the socket without sending the >> last (0-length) chunk. >> Unfortunately nginx appends the termination chunk even when the backend >> server does not >> (both the nginx and backend connections are http/1.1 and use chunked encoding). >> >> Is this expected behavior, a bug, or maybe there is some option to turn >> this off? > > This is sort of a known bug. Fixing it would require a relatively > large cleanup of the upstream module. > > -- > Maxim Dounin > http://nginx.com/support.html > > > > ------------------------------ > > Message: 5 > Date: Wed, 14 Nov 2012 18:39:41 +0100 > From: Ant?nio P. P. Almeida > To: nginx at nginx.org > Subject: Re: Can NGinx replace Varnish > Message-ID: <87bof0f5ia.wl%appa at perusio.net> > Content-Type: text/plain; charset=US-ASCII > > On 14 Nov 2012 15h31 CET, nginx-forum at nginx.us wrote: > >> We are using Varnish in front of 3 load balanced web servers running >> apache.
We had migrated from one hosting platform where we had 1 >> app server and 1 database server using Varnish (Drupal 6.x) and had >> no issues. Now that we are running in a load balanced environment >> (3 load balanced apache web servers, a Varnish server, and 1 >> database server) we are seeing multiple examples of caching >> issues. (Pages not displaying correctly ...style issues, data input >> staying cached and used on another page, etc). > > You can drop Varnish from the picture if something like microcaching > suits you or you use ngx_cache_purge with the purge module. It depends > on whether you have an active invalidation strategy or not. Either way Nginx > can replace Varnish and also work as a load balancer. So you'll have a > simpler stack. > >> We think we can just replace the Varnish server and use an NGinx >> server. I don't want to necessarily remove all the apache servers, >> but we have to get this caching issue corrected.... >> >> any thoughts...? > > Yep. See above. For Drupal-related Nginx issues there's a GDO group: > > http://groups.drupal.org/nginx > > if you want to delve deeper into the issue. > > --- appa > > > > ------------------------------ > > Message: 6 > Date: Wed, 14 Nov 2012 18:32:53 +0000 > From: Francis Daly > To: nginx at nginx.org > Subject: Re: Can NGinx replace Varnish > Message-ID: <20121114183253.GI24351 at craic.sysops.org> > Content-Type: text/plain; charset=us-ascii > > On Wed, Nov 14, 2012 at 09:31:28AM -0500, gt420hp wrote: > > Hi there, > >> we are seeing multiple >> examples of caching issues. (Pages not displaying correctly ...style >> issues, data input staying cached and used on another page, etc). >> >> We think we can just replace the Varnish server and use an NGinx server. I >> don't want to necessarily remove all the apache servers, but we have to get >> this caching issue corrected....
> > If the caching issues are because your backend servers are configured > incorrectly, merely replacing Varnish with nginx is unlikely to fix > everything. > > If they are because your Varnish is configured incorrectly, then > replacing an incorrectly-configured Varnish with a correctly-configured > nginx probably will help. But replacing it with a correctly-configured > Varnish would probably also help. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > > > ------------------------------ > > Message: 7 > Date: Wed, 14 Nov 2012 16:09:44 -0500 > From: "shmapty" > To: nginx at nginx.org > Subject: proxy_cache_valid for zero seconds > Message-ID: > > > Content-Type: text/plain; charset=UTF-8 > > Greetings, > > I am trying to configure nginx proxy_cache so that it stores a cached copy > of an HTTP response, but serves from cache *only* under the conditions > defined by proxy_cache_use_stale. > > I have tried something like this without success: > > proxy_cache_valid 200 204 301 302 0s; > proxy_cache_use_stale error timeout updating invalid_header > http_500 http_502 http_504; > > "0s" appears to avoid caching completely. "1s" stores a cached copy, but > presumably serves from cache for one second. I am trying to serve from > cache only when the upstream errs. > > Thank you > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232815,232815#msg-232815 > > > > ------------------------------ > > Message: 8 > Date: Wed, 14 Nov 2012 18:26:55 -0500 > From: "Cancer" > To: nginx at nginx.org > Subject: Connection reset by peer on first request > Message-ID: > > > Content-Type: text/plain; charset=UTF-8 > > Hi, > > I'm using Nginx with php-cgi. A problem arose recently where if you have > not used my site for a few minutes and then go to it, the first request is > always 'connection reset by peer'. If you refresh, everything functions > normally until you leave for a few minutes and go to another link. It > happens 100% of the time.
Does anyone know what could be the problem? > > All I get in the error log with debug on are these messages: > 2012/11/14 17:25:46 [info] 3454#0: *2516 client prematurely closed > connection while reading client request line, client: *, server: domain.com > > Also, these coincide with the 400 bad request errors in access.log. I have > tried restarting the DNS server, nginx, php-cgi, etc., but to no avail. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232817,232817#msg-232817 > > > > ------------------------------ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > End of nginx Digest, Vol 37, Issue 28 > ************************************* -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3634 bytes Desc: not available URL: From stef at scaleengine.com Thu Nov 15 00:37:07 2012 From: stef at scaleengine.com (Stefan Caunter) Date: Wed, 14 Nov 2012 19:37:07 -0500 Subject: Can NGinx replace Varnish In-Reply-To: <87bof0f5ia.wl%appa@perusio.net> References: <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org> <87bof0f5ia.wl%appa@perusio.net> Message-ID: Sorry, but what does this have to do with your choice of caching solution? I've used nginx for 8 years, and varnish for 4 years. The solution does not matter. Implementation is everything. If the reverse proxy is not told to stick to a backend based on client IP, you will see this behaviour regardless of the solution. You need to sort out your Varnish configuration. Replacing it with nginx without a complete understanding of your webapp and client sessions isn't going to do anything. This kind of post, implying that software implementations of RFCs are somehow the issue in misconfigurations, needs to be called out immediately. Both nginx and varnish implement the HTTP RFCs, and they do it very well.
Learn the interfaces to configurations, and use them. It's ridiculous to imply that nginx or varnish is a better implementation without objective supporting evidence that can be openly discussed. ---- Stefan Caunter CEO, ScaleEngine Inc. "Streaming, CDN, and Internet Logistics" E: stef at scaleengine.com Toronto: +1 647 459 9475 +1 800 224 0192 On Wed, Nov 14, 2012 at 12:39 PM, Ant?nio P. P. Almeida wrote: > On 14 Nov 2012 15h31 CET, nginx-forum at nginx.us wrote: > >> We are using Varnish in front of 3 load balanced web servers running >> apache. We had migrated from one hosting platform where we had 1 >> app server and 1 database server using Varnish (Drupal 6.x) and had >> no issues. Now that we are running in a load balanced environment >> (3 load balanced apache web servers, a Varnish server, and 1 >> database server) we are seeing multiple examples of caching >> issues. (Pages not displaying correctly ...style issues, data input >> staying cached and used on another page, etc). > > You can drop Varnish from the picture if something like microcaching > suits you or you use ngx_cache_purge with the purge module. It depends > on whether you have an active invalidation strategy or not. Either way Nginx > can replace Varnish and also work as a load balancer. So you'll have a > simpler stack. > >> We think we can just replace the Varnish server and use an NGinx >> server. I don't want to necessarily remove all the apache servers, >> but we have to get this caching issue corrected.... >> >> any thoughts...? > > Yep. See above. For Drupal-related Nginx issues there's a GDO group: > > http://groups.drupal.org/nginx > > if you want to delve deeper into the issue.
> > --- appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Nov 15 09:13:29 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Nov 2012 13:13:29 +0400 Subject: http_flv_module not working, any idea please In-Reply-To: <12ad4f2f499d0be81baf854f3d290136.NginxMailingListEnglish@forum.nginx.org> References: <790499b9e77ed3d8b16dd5e6cd2e099b.NginxMailingListEnglish@forum.nginx.org> <12ad4f2f499d0be81baf854f3d290136.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121115091329.GD40452@mdounin.ru> Hello! On Wed, Nov 14, 2012 at 06:17:37AM -0500, kalasnjikov wrote: > Same issue with me on Apache+Nginx. If you see flv pseudo streaming module not working - most likely you didn't switch it on. As you mention Apache, you likely use proxy_pass instead. Note that flv and proxy_pass are mutually exclusive. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Nov 15 09:24:31 2012 From: nginx-forum at nginx.us (kalasnjikov) Date: Thu, 15 Nov 2012 04:24:31 -0500 Subject: http_flv_module not working, any idea please In-Reply-To: <20121115091329.GD40452@mdounin.ru> References: <20121115091329.GD40452@mdounin.ru> Message-ID: <253fcdffe1340268cf6450248fda3526.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Nov 14, 2012 at 06:17:37AM -0500, kalasnjikov wrote: > > > Same issue with me on Apache+Nginx. > > If you see flv pseudo streaming module not working - most likely > you didn't switch it on. As you mention Apache, you likely use > proxy_pass instead. Note that flv and proxy_pass are mutually > exclusive. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx At last! 
I'm using proxy_pass and bumping my head against the wall for two days! Thanks a LOT! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205650,232830#msg-232830 From piotr.bartosiewicz at firma.gg.pl Thu Nov 15 10:13:25 2012 From: piotr.bartosiewicz at firma.gg.pl (Piotr Bartosiewicz) Date: Thu, 15 Nov 2012 11:13:25 +0100 Subject: Chunked transfer encoding problem In-Reply-To: <20121114170735.GZ40452@mdounin.ru> References: <50A3CA09.1080709@firma.gg.pl> <20121114170735.GZ40452@mdounin.ru> Message-ID: <50A4C045.8030202@firma.gg.pl> W dniu 14.11.2012 18:07, Maxim Dounin pisze: > Hello! > > On Wed, Nov 14, 2012 at 05:42:49PM +0100, Piotr Bartosiewicz wrote: > >> Hi, >> >> My nginx (1.2.4) config looks like this (relevant part): >> >> server { >> listen 8888; >> >> location / { >> proxy_http_version 1.1; >> proxy_pass http://localhost:8080; >> } >> } >> >> Backend server handles GET requests and responds with a large body. >> Response is generated and sent on the fly, so content-length is not >> known at the beginning. >> In normal case everything works fine. >> >> But sometimes server catches an exception after a response headers >> were sent. >> I've found that there is a commonly used solution to inform a client >> about incomplite response: >> use Transfer-Encoding chunked and close socket without sending the >> last (0 length) chunk. >> Unfortunately nginx appends termination chunk even when the backend >> server does not >> (both nginx and backed connections are http/1.1 and use chunked encoding). >> >> Is this expected behavior, bug or maybe there is some option to turn >> this off? > This is sort of known bug. Fixing it would require relatively > large cleanup of upstream module. > Thanks for the answer! Is this expected to be fixed in 1.3 version? I've found 'Upstream code cleanup' entry in the roadmap, but no ticket in trac for this. 
-- Piotr Bartosiewicz From nginx-forum at nginx.us Thu Nov 15 15:55:51 2012 From: nginx-forum at nginx.us (tatroc) Date: Thu, 15 Nov 2012 10:55:51 -0500 Subject: client request body is buffered to a temporary file Message-ID: I am using nginx as a load balancer and I keep seeing the messages below. I keep upping the client_body_buffer_size, proxy_buffer_size and proxy_buffers but it doesn't seem to make a difference. What could be happening here? 2012/11/15 09:52:30 [warn] 6559#0: *46179 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000002573, client: 10.196.3.134, server: loadbalancer.domain.net, request: "POST /Order/OrderSubmission/OrderSubmission.svc HTTP/1.1", host: "loadbalancer.domain.net" upstream secure { server node1.domain.net:443 weight=10 max_fails=3 fail_timeout=3s; server node2.domain.net:443 weight=10 max_fails=3 fail_timeout=3s; } server { listen 10.192.145.60:443; ssl on; ssl_certificate /etc/ssl/certs/cert.pem; ssl_certificate_key /etc/ssl/certs/cert_priv_key_nopass.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH; ssl_prefer_server_ciphers on; server_name loadbalancer.hsz.eastbay.net; access_log /var/log/nginx/lb.access.log main; error_log /var/log/nginx/lb.error.log debug; location / { index index.html; proxy_buffering on; client_body_buffer_size 10m; proxy_buffer_size 64k; proxy_buffers 2048 64k; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; proxy_pass https://secure; } #end location } #end server Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232847,232847#msg-232847 From nginx-forum at nginx.us Thu Nov 15 16:43:11 2012 From: nginx-forum at nginx.us (mahhy) Date: Thu, 15 Nov 2012 11:43:11 -0500 Subject: proxy_pass, dynamic ports upstream Message-ID: I have a requirement to be able to map a
portion of a request URI to a port on a set of upstream servers. I'm hoping nginx will be able to solve this for me, but so far no luck. Request: http://example.com/2201/reg/106903/0?something=here&somemore=stuff Needs to be proxied to: http://10.11.12.13:2201/reg/106903/0?something=here&somemore=stuff So the 1st portion of the URI is used as the upstream's port. However, I'm having difficulty with this when I attempt to combine it with a set of upstream servers. The below configuration results in the error "no resolver defined to resolve engines". If I specify the upstream as " proxy_pass http://10.11.12.13:$1/$2/?$args; " (and change nothing else) the below configuration works... however it's not being load balanced, obviously. Basically, can I load balance and use dynamic ports? Configuration: upstream engines { server 10.11.12.13; server 10.11.12.14; } server { listen *:80; access_log /var/log/nginx/engines.access.log main; error_log /var/log/nginx/engines.error.log debug; location ~/([0-9]*)/ { rewrite ^/([0-9]*)/(.*)$ $2 break; proxy_pass http://engines:$1/$2/?$args; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering on; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232849,232849#msg-232849 From mdounin at mdounin.ru Thu Nov 15 16:59:28 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Nov 2012 20:59:28 +0400 Subject: client request body is buffered to a temporary file In-Reply-To: References: Message-ID: <20121115165928.GJ40452@mdounin.ru> Hello! On Thu, Nov 15, 2012 at 10:55:51AM -0500, tatroc wrote: > I am using nginx as a load balancer and I keep seeing the messages below. I keep > upping the client_body_buffer_size, proxy_buffer_size and proxy_buffers but > it doesn't seem to make a difference. What could be happening here?
> > > 2012/11/15 09:52:30 [warn] 6559#0: *46179 a client request body is buffered > to a temporary file /var/cache/nginx/client_temp/0000002573, client: > 10.196.3.134, server: loadbalancer.domain.net, request: "POST > /Order/OrderSubmission/OrderSubmission.svc HTTP/1.1", host: > "loadbalancer.domain.net" As long as the request body size is greater than client_body_buffer_size, it will be buffered to disk and the above message will be shown. If you think you see the message on requests with body size less than client_body_buffer_size, you may want to provide more details as specified here: http://wiki.nginx.org/Debugging#Asking_for_help [...] -- Maxim Dounin http://nginx.com/support.html From contact at jpluscplusm.com Thu Nov 15 17:25:15 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 15 Nov 2012 17:25:15 +0000 Subject: proxy_pass, dynamic ports upstream In-Reply-To: References: Message-ID: On 15 November 2012 16:43, mahhy wrote: > I have a requirement to be able to map a portion of a request URI to a port > on a set of upstream servers. I'm hoping nginx will be able to solve this > for me, but so far no luck. Use a map: http://nginx.org/r/map You'll probably want to use regular expressions, and run the map based off $request_uri or $uri. Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From nginx-forum at nginx.us Thu Nov 15 19:33:18 2012 From: nginx-forum at nginx.us (mahhy) Date: Thu, 15 Nov 2012 14:33:18 -0500 Subject: proxy_pass, dynamic ports upstream In-Reply-To: References: Message-ID: While I can easily build a map as such: map $uri $engineport { ~^/2201/ 2201; ~^/2202/ 2202; } This still doesn't seem to work with a set of load balanced upstream servers. Trying to append a port onto the proxy_pass directive when used with an upstream server group fails with the resolver-related message in my original post.
With a single server upstream this can be done, however I want to round robin/loadbalance over 2 or more... - mahhy Jonathan Matthews Wrote: ------------------------------------------------------- > On 15 November 2012 16:43, mahhy wrote: > > I have a requirement to be able to map a portion of a request URI to > a port > > on a set of upstream servers. I'm hoping nginx will be able to > solve this > > for me, but so far no luck. > > Use a map: http://nginx.org/r/map > > You'll probably want to use regular expressions, and run the map based > off $request_uri or $uri. > > Jonathan > -- > Jonathan Matthews // Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232849,232858#msg-232858 From 82404310 at qq.com Fri Nov 16 03:20:51 2012 From: 82404310 at qq.com (=?utf-8?B?TGlMaQ==?=) Date: Fri, 16 Nov 2012 11:20:51 +0800 Subject: proxy pass ajax problem Message-ID: An HTML attachment was scrubbed... URL: From vatipa at gmail.com Fri Nov 16 06:28:45 2012 From: vatipa at gmail.com (Landon Loucel) Date: Fri, 16 Nov 2012 00:28:45 -0600 Subject: rewrite or internal redirection cycle while internally redirecting to "/error/403.html" Message-ID: When accessing a vhost on my server I receive a 500 internal server error and then when viewing the log file I find the error rewrite or internal redirection cycle while internally redirecting to "/error/403.html". I have a site working just fine with the exact same vhost configuration. All paths in the config are valid and all permissions are valid and correct. Below is the configuration for the vhost in question. Any assistance would be greatly appreciated. 
server { listen *:80; server_name somedomain.tld *somedomain.tld; root /var/www/somedomain.tld/web; index index.html index.htm index.php index.cgi index.pl index.xhtml; error_page 400 /error/400.html; error_page 401 /error/401.html; error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 405 /error/405.html; error_page 500 /error/500.html; error_page 502 /error/502.html; error_page 503 /error/503.html; recursive_error_pages on; location = /error/400.html { internal; } location = /error/401.html { internal; } location = /error/403.html { internal; } location = /error/404.html { internal; } location = /error/405.html { internal; } location = /error/500.html { internal; } location = /error/502.html { internal; } location = /error/503.html { internal; } error_log /var/log/ispconfig/httpd/somedomain.tld/error.log; access_log /var/log/ispconfig/httpd/somedomain.tld/access.log combined; ## Disable .htaccess and other hidden files location ~ /\. { deny all; access_log off; log_not_found off; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location /stats { index index.html index.php; auth_basic "Members Only"; auth_basic_user_file /var/www/clients/client3/web3/.htpasswd_stats; } location ^~ /awstats-icon { alias /usr/share/awstats/icon; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web3.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; } } Thank You, Landon L. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From contact at jpluscplusm.com Fri Nov 16 08:22:46 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 16 Nov 2012 08:22:46 +0000 Subject: proxy pass ajax problem In-Reply-To: References: Message-ID: Why don't you show us how far you've got, what problems you ran into, and what problems you're still having? Jonathan -- Jonathan Matthews // Oxford, London, UK http://www.jpluscplusm.com/contact.html From mdounin at mdounin.ru Fri Nov 16 08:27:41 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2012 12:27:41 +0400 Subject: rewrite or internal redirection cycle while internally redirecting to "/error/403.html" In-Reply-To: References: Message-ID: <20121116082741.GM40452@mdounin.ru> Hello! On Fri, Nov 16, 2012 at 12:28:45AM -0600, Landon Loucel wrote: > When accessing a vhost on my server I receive a 500 internal server error > and then when viewing the log file I find the error rewrite or internal > redirection cycle while internally redirecting to "/error/403.html". I > have a site working just fine with the exact same vhost configuration. All > paths in the config are valid and all permissions are valid and correct. > Below is the configuration for the vhost in question. Any assistance would > be greatly appreciated. > > server { > listen *:80; [...] > error_page 400 /error/400.html; > error_page 401 /error/401.html; > error_page 403 /error/403.html; > error_page 404 /error/404.html; > error_page 405 /error/405.html; > error_page 500 /error/500.html; > error_page 502 /error/502.html; > error_page 503 /error/503.html; > recursive_error_pages on; You activated "recursive_error_pages", hence any error which in turn results in the same error again will cause an infinite loop. With 403 it's trivial to cause this effect, e.g. by configuring insufficient permissions on the document root (any path component of it).
That's, in particular, one of the reasons why recursive_error_pages is off by default, and it's not recommended to change this unless you understand what you are doing. [...] -- Maxim Dounin http://nginx.com/support.html From r at roze.lv Fri Nov 16 08:36:29 2012 From: r at roze.lv (Reinis Rozitis) Date: Fri, 16 Nov 2012 10:36:29 +0200 Subject: proxy pass ajax problem In-Reply-To: References: Message-ID: <84F197B85CC348A89A6D434927520E0E@NeiRoze> > hi all > I am using nginx as a reverse proxy for neo4j (you can treat it as an > HTTP server). The problem is that the web page returned by neo4j has ajax requests > which are sent to the neo4j server. > e.g. the neo4j server is localhost:7474 and nginx is bound to 8000. > http://domain:8000/webadmin/ is proxied to http://domain:7474/webadmin/, > but the returned page will send ajax requests to > http://domain:7474/ajax-path > I want the ajax requests to also be sent to > http://domain:8000/ajax-path. How to do this? > Thanks. The problem most likely is that the neo4j server is using its own server name/port for URL creation in the HTML. Depending on the situation there are a few ways you could fix/work around it: 1. Change the neo4j/application source to use relative paths, e.g. instead of http://domain:7474/ajax-path just /ajax-path 2. nginx can override location headers by using proxy_redirect ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect ) 3. If you are unable to change the generated output from the neo4j backend you could try to use the Sub module ( http://wiki.nginx.org/HttpSubModule ) in the proxy_pass location with something like: sub_filter http://domain:7474 http://domain:8000; .. and let nginx alter the source on the fly (though this isn't the best solution from a performance aspect / also there are some caveats if the response from the backend is compressed, etc.).
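[Editor's note: a minimal sketch combining options 2 and 3 above. The hostnames and ports follow the thread's example ("domain", 7474, 8000); sub_filter assumes nginx was built with the Sub module, and the Accept-Encoding override is one way to keep the backend response uncompressed so sub_filter can rewrite it.]

```nginx
# Sketch only -- "domain" is the placeholder hostname from the thread;
# assumes nginx was compiled with the Sub module.
server {
    listen 8000;

    location / {
        proxy_pass http://localhost:7474;

        # Option 2: rewrite Location/Refresh headers in backend redirects.
        proxy_redirect http://domain:7474/ http://domain:8000/;

        # Option 3: rewrite absolute URLs embedded in the HTML body.
        # Ask the backend for an uncompressed response so the filter
        # can see the text it is supposed to rewrite.
        proxy_set_header Accept-Encoding "";
        sub_filter http://domain:7474 http://domain:8000;
        sub_filter_once off;
    }
}
```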
rr From mdounin at mdounin.ru Fri Nov 16 11:47:11 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2012 15:47:11 +0400 Subject: Chunked Transfer In-Reply-To: References: <508AF986.2040402@comcast.net> <20121029100821.GI40452@mdounin.ru> Message-ID: <20121116114711.GP40452@mdounin.ru> Hello! On Mon, Oct 29, 2012 at 10:23:45AM +0000, Jonathan Matthews wrote: > On 29 October 2012 10:08, Maxim Dounin wrote: > > Hello! > > > > On Fri, Oct 26, 2012 at 04:58:46PM -0400, AJ Weber wrote: > > > >> Asking, because the documentation looks like it's a little outdated > >> on this... > >> > >> Is Chunked Transfer still not enabled OOTB? This would seem like > >> almost a mandatory feature of HTTP 1.1 to implement, and the only > >> reference I could find is to separate source code/module/patch that > >> I would have to download and recompile all of nginx for? > >> > >> Has it been implemented or added to the default, pre-compiled > >> packages and I just can't see it in the nginx -V output? > >> > >> I need the ability to upload large content, and this would appear to > >> be the proper way to do that. > > > > I'm working on it, and it's expected to be available later this > > month. > > > > BTW, if you know examples of real-world use of chunked transfer > > encoding by clients - please let me know. AFAIK no browsers use > > it, and most widespread example I'm aware of is the webdav client > > in Mac OS X. > > I wanted to use it in this report-generating pipeline: > > bash$ mysql -e 'generate lots of data' | perl 'do some munging' | > csvtool 'make a proper CSV' | gzip | curl --upload-file - > http://my.webdav.endpoint.for.customers/foo/bar.gz > > The fact that the chunkin module wouldn't work properly with webdav > due to curl choosing chunking when PUTting stdin, meant I had to break > this into multiple parts and ensure that curl could upload a complete > file from disk. 
I mentioned the issue here: > http://mailman.nginx.org/pipermail/nginx/2012-April/033141.html > > This increased the complexity of what I was doing, as I now had local > state on disk that had to be managed/cleaned-up/etc. JFYI, chunked request body patches are available here: http://mailman.nginx.org/pipermail/nginx-devel/2012-November/002961.html Review and testing appreciated. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 16 14:15:01 2012 From: nginx-forum at nginx.us (pliljenberg) Date: Fri, 16 Nov 2012 09:15:01 -0500 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout Message-ID: We're using nginx as a loadbalancer and we're seeing some strange behaviour when one of our backend servers takes a long time to respond to a request. We have a configuration like this:

upstream handlehttp {
    ip_hash;
    server XXX max_fails=3 fail_timeout=30s;
    server YYY max_fails=3 fail_timeout=30s;
}

server {
    location / {
        try_files $uri @backend;
    }

    location @backend {
        proxy_pass http://handlehttp;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        proxy_read_timeout 300;
    }
}

What we thought we had configured was: If one backend server fails more than 3 times within 30 seconds it would be considered disabled and all requests sent to the other backend server (the original server getting requests again after 30 seconds). What we're actually seeing is that if a request takes 300+ seconds, the backend is immediately set as disabled and all further requests are sent to the other backend... Are we missing something or is this the correct behaviour for nginx?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232912,232912#msg-232912 From mdounin at mdounin.ru Fri Nov 16 15:30:30 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2012 19:30:30 +0400 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: References: Message-ID: <20121116153030.GX40452@mdounin.ru> Hello! On Fri, Nov 16, 2012 at 09:15:01AM -0500, pliljenberg wrote: > We're using nginx as a loadbalancer and we're seeing some strange behaviour > when one of our backend servers takes a long time to respond to a request. > We have a configuration like this: > > upstream handlehttp { > ip_hash; > server XXX max_fails=3 fail_timeout=30s; > server YYY max_fails=3 fail_timeout=30s; > } > > server { > location / { > try_files $uri @backend; > } > > location @backend { > proxy_pass http://handlehttp; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_next_upstream error timeout invalid_header http_500 http_502 > http_503; > proxy_read_timeout 300; > } > } > > What we thought we had configured was: > If one backend server fails more than 3 times within 30 seconds it would > be considered disabled and all requests sent to the other backend server > (the original server getting request after 30 seconds again). This is what's expected. Note though, that after the problem was detected things are handled a bit differently, see below. > What we're actually seeing is that if a a request takes 300+ seconds, the > backend is immediately set as disabled and all further requests are send to > the other backend... > Are we missing something or is this the correct behaviour for nginx? Are you looking at the normally working backend server, or a server which was already considered down? 
Note that after nginx 1.1.6 at least one request per worker has to succeed before "3 times within 30 seconds" will start to apply again: *) Change: if a server in an upstream failed, only one request will be sent to it after fail_timeout; the server will be considered alive if it will successfully respond to the request. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 16 15:54:51 2012 From: nginx-forum at nginx.us (pliljenberg) Date: Fri, 16 Nov 2012 10:54:51 -0500 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: <20121116153030.GX40452@mdounin.ru> References: <20121116153030.GX40452@mdounin.ru> Message-ID: <31c75517fa2605752f965f2752c065be.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply. >> What we're actually seeing is that if a a request takes 300+ seconds, the >> backend is immediately set as disabled and all further requests are send to >> the other backend... >> Are we missing something or is this the correct behaviour for nginx? >Are you looking at the normally working backend server, or a >server which was already considered down? One server X receives a request which takes 300+ seconds to complete. That request gets dropped by nginx due to the read timeout (as expected). When this happens the server X is disabled and all upcoming requests are sent to server Y instead. My interpretation of the configuration was that the server X would still get requests since it only had 1 failure (and not 3 as configured) during the last 30 seconds?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232912,232917#msg-232917 From mdounin at mdounin.ru Fri Nov 16 16:32:44 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2012 20:32:44 +0400 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: <31c75517fa2605752f965f2752c065be.NginxMailingListEnglish@forum.nginx.org> References: <20121116153030.GX40452@mdounin.ru> <31c75517fa2605752f965f2752c065be.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121116163244.GA40452@mdounin.ru> Hello! On Fri, Nov 16, 2012 at 10:54:51AM -0500, pliljenberg wrote: > Thanks for the reply. > > >> What we're actually seeing is that if a a request takes 300+ seconds, > the > >> backend is immediately set as disabled and all further requests are send > to > >> the other backend... > >> Are we missing something or is this the correct behaviour for nginx? > > >Are you looking at the normally working backend server, or a > >server which was already considered down? > > One server X receives a request which takes 300+ seconds to complete . That > request gets dropped by nginx due to the read timeout (as expected). > When this happens the server X is disabled and all upcoming request are sent > to server Y instead. > My interpretation of the configuration was that the server X would still get > requests since it only had 1 failure (and it 3 as configured) during the > last 30 seconds? The interesting part is what happens _before_ "one server X receives a request...". Is it working normally and handling other requests? Or was it already considered dead and the request in question is the one to check if it's alive?
To illustrate, here is what happens with a normally working server (one server on port 9999 is dead, and one at 8080 is responding normally, fail_timeout=30s, max_fails=3, ip_hash, just started nginx):

2012/11/16 20:23:29 [debug] 35083#0: *1 connect to 127.0.0.1:9999, fd:17 #2
2012/11/16 20:23:29 [debug] 35083#0: *1 connect to 127.0.0.1:8080, fd:17 #3
2012/11/16 20:23:29 [debug] 35083#0: *5 connect to 127.0.0.1:9999, fd:17 #6
2012/11/16 20:23:29 [debug] 35083#0: *5 connect to 127.0.0.1:8080, fd:17 #7
2012/11/16 20:23:30 [debug] 35083#0: *9 connect to 127.0.0.1:9999, fd:17 #10
2012/11/16 20:23:30 [debug] 35083#0: *9 connect to 127.0.0.1:8080, fd:17 #11
2012/11/16 20:23:31 [debug] 35083#0: *13 connect to 127.0.0.1:8080, fd:17 #14
2012/11/16 20:23:31 [debug] 35083#0: *16 connect to 127.0.0.1:8080, fd:17 #17
2012/11/16 20:23:32 [debug] 35083#0: *19 connect to 127.0.0.1:8080, fd:17 #20
2012/11/16 20:23:33 [debug] 35083#0: *22 connect to 127.0.0.1:8080, fd:17 #23
2012/11/16 20:23:34 [debug] 35083#0: *25 connect to 127.0.0.1:8080, fd:17 #26
2012/11/16 20:23:34 [debug] 35083#0: *28 connect to 127.0.0.1:8080, fd:17 #29
2012/11/16 20:23:35 [debug] 35083#0: *31 connect to 127.0.0.1:8080, fd:17 #32

As you can see, the first 3 requests try to reach port 9999 - because of max_fails=3. On the other hand, once fail_timeout=30s passes, only one request tries to reach 9999:

2012/11/16 20:24:37 [debug] 35083#0: *34 connect to 127.0.0.1:9999, fd:16 #35
2012/11/16 20:24:37 [debug] 35083#0: *34 connect to 127.0.0.1:8080, fd:16 #36
2012/11/16 20:24:38 [debug] 35083#0: *38 connect to 127.0.0.1:8080, fd:16 #39
2012/11/16 20:24:39 [debug] 35083#0: *41 connect to 127.0.0.1:8080, fd:16 #42

That's because the situations of "normally working server" and "dead server we are trying to use again" are a bit different.
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 16 16:42:54 2012 From: nginx-forum at nginx.us (pliljenberg) Date: Fri, 16 Nov 2012 11:42:54 -0500 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: <20121116163244.GA40452@mdounin.ru> References: <20121116163244.GA40452@mdounin.ru> Message-ID: <7aac262a0a56761c32a9c9d2d67c60a5.NginxMailingListEnglish@forum.nginx.org> The requests before (for more than 30sec) to the server X are ok, this is the diet request generating a 500 response (from the timeout). So up till this point all looks good - which is why I don't understand why nginx considers the server inactive after the first fail :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232912,232919#msg-232919 From mdounin at mdounin.ru Fri Nov 16 17:11:47 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Nov 2012 21:11:47 +0400 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: <7aac262a0a56761c32a9c9d2d67c60a5.NginxMailingListEnglish@forum.nginx.org> References: <20121116163244.GA40452@mdounin.ru> <7aac262a0a56761c32a9c9d2d67c60a5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121116171146.GB40452@mdounin.ru> Hello! On Fri, Nov 16, 2012 at 11:42:54AM -0500, pliljenberg wrote: > The requests before (for more than 30sec) to the server X are ok, this is > the diet request generating a 500 response (from the timeout). > So up till this point all looks good - which is why I don't understand why > nginx considers the server inactive after the first fail :) 500 response? Normally timeouts result in a 504, and if you see a 500 this might indicate that in fact the request failed not due to a timeout, but e.g. due to a loop being detected. This in turn might mean that there was more than one request to the server X which failed. Try looking into the error_log to see what's going on.
-- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Fri Nov 16 17:23:42 2012 From: nginx-forum at nginx.us (gmccullough) Date: Fri, 16 Nov 2012 12:23:42 -0500 Subject: Adding cachekey to log_format directive In-Reply-To: <16655e3cf88aca050a05ec8f407c421d.NginxMailingListEnglish@forum.nginx.org> References: <16655e3cf88aca050a05ec8f407c421d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4417a5b6fd1cb6882a93dea456dd3179.NginxMailingListEnglish@forum.nginx.org> That depends on what you want... If you want the raw cache key, it is stored as an array of strings (r->cache->keys) that must be concatenated together to populate the variable to log. If you want the md5 hash of the raw cache key, it is available in two places: The binary md5 value in r->cache->key, which would require using ngx_hex_dump to make it into a hexadecimal string to log. A bit of a kludge, but the filename in the string r->cache->file.name contains it, but you'd have to backwards search it for the last '/' to get just the hash value. Graham McCullough Senior Software Engineer Internap Network Services Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232747,232921#msg-232921 From nginx-forum at nginx.us Fri Nov 16 18:51:26 2012 From: nginx-forum at nginx.us (pliljenberg) Date: Fri, 16 Nov 2012 13:51:26 -0500 Subject: Upstream max_fails, fail_timeout and proxy_read_timeout In-Reply-To: <20121116171146.GB40452@mdounin.ru> References: <20121116171146.GB40452@mdounin.ru> Message-ID: > Normally timeouts results in 504, and if you see 500 this might > indicate that in fact request failed not due to a timeout, but > e.g. due too loop detected. This in turn might mean that there > were more than one request to the server X which failed. > > Try looking into error_log to see what's going on. You're correct - its a 504. 
[16/Nov/2012:12:40:48 +0100] "POST /url HTTP/1.1" 403 454 Time: 300.030 Upstream-time: 300.004, 0.003 Upstream: XXX, YYY Upstream-status: 504, 403 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232912,232930#msg-232930 From nginx-forum at nginx.us Fri Nov 16 19:32:23 2012 From: nginx-forum at nginx.us (kalasnjikov) Date: Fri, 16 Nov 2012 14:32:23 -0500 Subject: http_flv_module not working, any idea please In-Reply-To: <20121115091329.GD40452@mdounin.ru> References: <20121115091329.GD40452@mdounin.ru> Message-ID: <7222ea6d60410a21474d17379b0e44eb.NginxMailingListEnglish@forum.nginx.org> Flv streaming is still not working. What am I doing wrong?

server {
    listen xxx.xxx.xxx.xxx:80;
    error_log /var/log/nginx/hp8el1;
    access_log /var/log/nginx/hp8al1;
    server_name domain.com www.domain.com;
    client_max_body_size 1024m;
    root /home/domain/public_html;

    location / {
        location ~ \.flv$ { flv; }
        error_page 404 = @apache;
        error_page 403 = @apache;
    }

    location @apache {
        proxy_pass http://xxx.xxx.xxx.xxx:8081;
    }
}

Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,205650,232931#msg-232931 From joe at joeshaw.org Fri Nov 16 20:58:51 2012 From: joe at joeshaw.org (Joe Shaw) Date: Fri, 16 Nov 2012 15:58:51 -0500 Subject: client_max_body_size and 100 Continue/413 Request Entity Too Large Message-ID: Hi, I have a web app that accepts uploads from clients. We have a relatively high client_max_body_size set, and I'd like for clients to be able to be quickly rejected if they intend to upload files larger than the max body size. The "Expect: 100-continue" header seems ideally suited for this.
However, when I try to upload a large file with curl (which uses the Expect header), nginx responds with "100 Continue" instead of "413 Request Entity Too Large":

> POST /test HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: example.com
> Accept: */*
> Content-Length: 454327718
> Content-Type: application/octet-stream
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 413 Request Entity Too Large
< Server: nginx/1.2.0
< Content-Type: text/html; charset=utf-8
< Date: Fri, 16 Nov 2012 20:40:24 GMT
< Connection: Keep-Alive
< Content-Length: 198

I would have expected nginx to return the 413 error instead of the 100 status code. As it is now, the client will continue to upload its data because it got the go-ahead via the 100 status code. Is this a bug (or unimplemented feature) in nginx? Is there any way around this? As you can see from the response, I'm using nginx 1.2.0, which I realize isn't the latest, but I couldn't find anything related to this in the CHANGES file. Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From wing_pn at 163.com Sat Nov 17 03:58:49 2012 From: wing_pn at 163.com (azure peng) Date: Sat, 17 Nov 2012 11:58:49 +0800 (CST) Subject: How to turn on chunked_transfer_encoding for static files Message-ID: <11017326.33cd.13b0c84d3b5.Coremail.wing_pn@163.com> Hi, All, I use nginx 1.2.4. I want to use chunked_transfer_encoding when sending static files to clients, but I found that "chunked_transfer_encoding" does not work for static files (found the code in ngx_http_chunked_header_filter()). I changed the code to call ngx_http_clear_content_length(), and the chunked function works. BUT nginx sends one very big chunk (equal to the file size). I want to set the chunk size to 1MB; how do I set it? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Sat Nov 17 05:34:57 2012 From: nginx-forum at nginx.us (jcaleb) Date: Sat, 17 Nov 2012 00:34:57 -0500 Subject: Wordpress permalink and cache help Message-ID: Hello, I have a wordpress website and want to enable caching. My configuration below works if no pretty URL is used, e.g. http://domain.com/?page_id=2 But when I use a pretty URL, the cache doesn't work: e.g. http://domain.com/sample-page The pages display correctly in both cases, though. Thank you
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:5m
max_size=1000m;

server {
    listen 80;
    server_name domain.com www.domain.com;

    access_log /var/log/nginx/website.access_log;
    error_log /var/log/nginx/website.error_log;

    root /home/jon/temp/php/domain.com;
    index index.php index.htm index.html;

    location ~ \.php$ {
        set $no_cache "";
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        if ($http_cookie ~* "_mcnc") {
                    set $no_cache "1";
        }
        fastcgi_no_cache $no_cache;
        fastcgi_cache_bypass $no_cache;
        fastcgi_cache microcache;
        fastcgi_cache_key $server_name|$request_uri;
        fastcgi_cache_valid 404 30m;
        fastcgi_cache_valid 200 10s;
        fastcgi_max_temp_file_size 1M;
        fastcgi_cache_use_stale updating;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass_header Set-Cookie;
        fastcgi_pass_header Cookie;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param  PATH_INFO          $fastcgi_path_info;
        fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;
        include fastcgi_params;
    }

    location ~ \.(js|css|ico|png|jpg|jpeg|gif|swf|xml|txt)$ {
	access_log off;
        expires 30d;
    }

    location ~*
\.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_
{
	return 444;
    }

    location ~ /\. {
	return 444;
	access_log off;
	log_not_found off;
    }

    location ~* \.(pl|cgi|py|sh|lua)$ {
	return 444;
    }

    location ~* (roundcube|webdav|smtp|http\:|soap|w00tw00t) {
	return 444;
    }

    location / {
	try_files $uri $uri/ /index.php;
    }
}
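One way to verify whether requests are actually being served from the cache (an editor's suggestion, not from the thread): expose the built-in $upstream_cache_status variable in a response header inside the PHP location above. The header name X-Cache-Status is arbitrary.

```nginx
location ~ \.php$ {
    # ... existing fastcgi_cache* directives ...

    # Reports HIT, MISS, BYPASS, EXPIRED, STALE or UPDATING per response,
    # so `curl -I http://domain.com/sample-page` shows whether a given
    # pretty URL was served from the cache or not.
    add_header X-Cache-Status $upstream_cache_status;
}
```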
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232940,232940#msg-232940 From ne at vbart.ru Sat Nov 17 14:31:21 2012 From: ne at vbart.ru (Valentin V. Bartenev) Date: Sat, 17 Nov 2012 18:31:21 +0400 Subject: How to turn on chunked_transfer_encoding for static files In-Reply-To: <11017326.33cd.13b0c84d3b5.Coremail.wing_pn@163.com> References: <11017326.33cd.13b0c84d3b5.Coremail.wing_pn@163.com> Message-ID: <201211171831.22031.ne@vbart.ru> On Saturday 17 November 2012 07:58:49 azure peng wrote: > Hi, All, > I use nginx 1.2.4 > I want to use chunked_transfer_encoding sending to client for > static files , but I found the "chunked_transfer_encoding" is not work > for static files ( found code in ngx_http_chunked_header_filter() ) , I > change the code for call ngx_http_clear_content_length() ) , and the > chunked function works . > [...] I'm really curious, why do you want this? wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Sat Nov 17 21:43:29 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 18 Nov 2012 01:43:29 +0400 Subject: client_max_body_size and 100 Continue/413 Request Entity Too Large In-Reply-To: References: Message-ID: <20121117214329.GJ40452@mdounin.ru> Hello! On Fri, Nov 16, 2012 at 03:58:51PM -0500, Joe Shaw wrote: > Hi, > > I have a web app that accepts uploads from clients. We have a relatively > high client_max_body_size set, and I'd like for clients to be able to be > quickly rejected if they intend to upload files larger than the max body > size. The "Expect: 100-continue" header seems ideally suited for this. 
> > However, when I try to upload a large file with curl (which uses the Expect > header), nginx responds with "100 Continue" instead of "413 Request Entity > Too Large": > > > POST /test HTTP/1.1 > > User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 > OpenSSL/0.9.8r zlib/1.2.5 > > Host: example.com > > Accept: */* > > Content-Length: 454327718 > > Content-Type: application/octet-stream > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > < HTTP/1.1 413 Request Entity Too Large > < Server: nginx/1.2.0 > < Content-Type: text/html; charset=utf-8 > < Date: Fri, 16 Nov 2012 20:40:24 GMT > < Connection: Keep-Alive > < Content-Length: 198 > > I would have expected nginx to return the 413 error instead of the 100 > status code. As it is now, the client will continue to upload its data > because it got the go ahead via the 100 status code. > > Is this a bug (or unimplemented feature) in nginx? Is there any way around > this? As you can see from the response, I'm using nginx 1.2.0, which I > realize isn't the latest, but I couldn't find anything related to this in > the CHANGES file. As of now nginx is only capable of recognizing "Expect: 100-continue" and returning "100 Continue" response to avoid upload delays. It's not able to recognize it is going to return fatal error and isn't able to avoid sending "100 Continue" in this case. -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Sun Nov 18 07:26:44 2012 From: nginx-forum at nginx.us (eiji-gravion) Date: Sun, 18 Nov 2012 02:26:44 -0500 Subject: Advertise NPN without SPDY Message-ID: <5f545eebd059e00e1634f224a097668a.NginxMailingListEnglish@forum.nginx.org> Hello, Is there a way for nginx to advertise the NPN extension without the use of SPDY? I'm asking because Chrome disables SSL False Start by default, unless the NPN extension is advertised, and I don't want to use SPDY right now. 
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232960,232960#msg-232960 From wing_pn at 163.com Sun Nov 18 09:09:18 2012 From: wing_pn at 163.com (azure peng) Date: Sun, 18 Nov 2012 17:09:18 +0800 (CST) Subject: How to turn on chunked_transfer_encoding for static files In-Reply-To: <201211171831.22031.ne@vbart.ru> References: <11017326.33cd.13b0c84d3b5.Coremail.wing_pn@163.com> <201211171831.22031.ne@vbart.ru> Message-ID: <1c6f6b55.a7b7.13b12c77125.Coremail.wing_pn@163.com> Just for follow the customer's technique standard :< At 2012-11-17 22:31:21,"Valentin V. Bartenev" wrote: >On Saturday 17 November 2012 07:58:49 azure peng wrote: >> Hi, All, >> I use nginx 1.2.4 >> I want to use chunked_transfer_encoding sending to client for >> static files , but I found the "chunked_transfer_encoding" is not work >> for static files ( found code in ngx_http_chunked_header_filter() ) , I >> change the code for call ngx_http_clear_content_length() ) , and the >> chunked function works . >> [...] > >I'm really curious, why do you want this? > > wbr, Valentin V. Bartenev > >-- >http://nginx.com/support.html >http://nginx.org/en/donation.html > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sun Nov 18 10:03:19 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 18 Nov 2012 14:03:19 +0400 Subject: client_max_body_size and 100 Continue/413 Request Entity Too Large In-Reply-To: References: Message-ID: <32F593BD-8E8B-4822-8E7C-BA80BB3038B2@sysoev.ru> On Nov 17, 2012, at 0:58 , Joe Shaw wrote: > Hi, > > I have a web app that accepts uploads from clients. We have a relatively high client_max_body_size set, and I'd like for clients to be able to be quickly rejected if they intend to upload files larger than the max body size. 
The "Expect: 100-continue" header seems ideally suited for this. > > However, when I try to upload a large file with curl (which uses the Expect header), nginx responds with "100 Continue" instead of "413 Request Entity Too Large": > > > POST /test HTTP/1.1 > > User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5 > > Host: example.com > > Accept: */* > > Content-Length: 454327718 > > Content-Type: application/octet-stream > > Expect: 100-continue > > > < HTTP/1.1 100 Continue > < HTTP/1.1 413 Request Entity Too Large > < Server: nginx/1.2.0 > < Content-Type: text/html; charset=utf-8 > < Date: Fri, 16 Nov 2012 20:40:24 GMT > < Connection: Keep-Alive > < Content-Length: 198 > > I would have expected nginx to return the 413 error instead of the 100 status code. As it is now, the client will continue to upload its data because it got the go ahead via the 100 status code. > > Is this a bug (or unimplemented feature) in nginx? Is there any way around this? As you can see from the response, I'm using nginx 1.2.0, which I realize isn't the latest, but I couldn't find anything related to this in the CHANGES file. The attached patch should fix the bug. -- Igor Sysoev http://nginx.com/support.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: patch.expect.txt URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nielson.rolim at gmail.com Sun Nov 18 11:20:09 2012 From: nielson.rolim at gmail.com (Nielson Rolim) Date: Sun, 18 Nov 2012 08:20:09 -0300 Subject: Help do translate Apache Rewrite Rules Message-ID: Hi, I'd like to ask for some help to translate these Apache Rules to Nginx: RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteCond %{QUERY_STRING} (.*) RewriteRule ^(.*)$ index.php?url=$1&%1 These rules redirects all the requests to index.php. For example: http://www.example.com/users/edit/id/1?profile=1 Redirects to: http://www.example.com/index.php?url=users/edit/id/1&profile=1 To be honest, I know what these rules do, but I don't know exactly how they work, so any help will be very appreciated. Thanks in advance, -- Nielson Rolim nielson.rolim at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Sun Nov 18 12:00:43 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 18 Nov 2012 19:00:43 +0700 Subject: Help do translate Apache Rewrite Rules In-Reply-To: References: Message-ID: On Sun, Nov 18, 2012 at 6:20 PM, Nielson Rolim wrote: > Hi, > > I'd like to ask for some help to translate these Apache Rules to Nginx: > > RewriteCond %{SCRIPT_FILENAME} !-f > RewriteCond %{SCRIPT_FILENAME} !-d > RewriteCond %{QUERY_STRING} (.*) > RewriteRule ^(.*)$ index.php?url=$1&%1 > > > These rules redirects all the requests to index.php. For example: > > http://www.example.com/users/edit/id/1?profile=1 > > Redirects to: > > http://www.example.com/index.php?url=users/edit/id/1&profile=1 > > To be honest, I know what these rules do, but I don't know exactly how they > work, so any help will be very appreciated. 
> > try_files $uri $uri/ /index.php?url=$uri&$args; From nginx-forum at nginx.us Sun Nov 18 13:25:15 2012 From: nginx-forum at nginx.us (amodpandey) Date: Sun, 18 Nov 2012 08:25:15 -0500 Subject: trailing slash in location Message-ID: <2dc5f7e946354a630b5aed17f63bd32d.NginxMailingListEnglish@forum.nginx.org> Please help me understand The below works location /stats/ { proxy_pass http://example.com; } or location /stats { proxy_pass http://example.com; } or location /stats { proxy_pass http://example.com/stats; } or location /stats { proxy_pass http://example.com/stats/; } or location /stats/ { proxy_pass http://example.com/stats/; } But this does not work location /stats/ { proxy_pass http://example.com/stats; } Similarly when stats is an upstream This works location /stats { proxy_pass http://stats; } but this does not location /stats { proxy_pass http://stats/; } What difference does it make when we have a uri in the proxy_pass? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232966,232966#msg-232966 From nielson.rolim at gmail.com Sun Nov 18 15:04:05 2012 From: nielson.rolim at gmail.com (Nielson Rolim) Date: Sun, 18 Nov 2012 12:04:05 -0300 Subject: Help do translate Apache Rewrite Rules In-Reply-To: References: Message-ID: Thank you Edho! On Sun, Nov 18, 2012 at 9:00 AM, Edho Arief wrote: > On Sun, Nov 18, 2012 at 6:20 PM, Nielson Rolim > wrote: > > Hi, > > > > I'd like to ask for some help to translate these Apache Rules to Nginx: > > > > RewriteCond %{SCRIPT_FILENAME} !-f > > RewriteCond %{SCRIPT_FILENAME} !-d > > RewriteCond %{QUERY_STRING} (.*) > > RewriteRule ^(.*)$ index.php?url=$1&%1 > > > > > > These rules redirects all the requests to index.php. For example: > > > > http://www.example.com/users/edit/id/1?profile=1 > > > > Redirects to: > > > > http://www.example.com/index.php?url=users/edit/id/1&profile=1 > > > > To be honest, I know what these rules do, but I don't know exactly how > they > > work, so any help will be very appreciated.
> > > > > > try_files $uri $uri/ /index.php?url=$uri&$args; > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Nielson Rolim nielson.rolim at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Sun Nov 18 15:09:22 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 18 Nov 2012 19:09:22 +0400 Subject: trailing slash in location In-Reply-To: <2dc5f7e946354a630b5aed17f63bd32d.NginxMailingListEnglish@forum.nginx.org> References: <2dc5f7e946354a630b5aed17f63bd32d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <302CB2F8-EF55-463D-9352-EF78325CD8F5@sysoev.ru> On Nov 18, 2012, at 17:25 , amodpandey wrote: > Please help me understand > > The below works > > location /stats/ { > proxy_pass http://example.com; > } > > or > > location /stats { > proxy_pass http://example.com; > } > > or > > location /stats { > proxy_pass http://example.com/stats; > } > > or > > location /stats { > proxy_pass http://example.com/stats/; > } > > or > > location /stats/ { > proxy_pass http://example.com/stats/; > } > > But this does not work > > location /stats/ { > proxy_pass http://example.com/stats; > } > > Smlly when stats is an upstream > > This works > > location /stats { > proxy_pass http://stats; > } > > but this does not > > location /stats { > proxy_pass http://stats/; > } It should work. Probably "/stats/" > "/stats" does not work. > What difference it makes when we have uri in the proxy_pass? It does not work because nginx changes /stats/SOME/PAGE to /statsSOME/PAGE. 
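To make the replacement rule concrete, here is a sketch (example.com and the /metrics path are illustrative; the three locations are shown side by side for comparison and would not coexist in one server block):

```nginx
# proxy_pass WITHOUT a URI part: the request URI is passed unchanged
location /stats/ {
    # /stats/SOME/PAGE  ->  /stats/SOME/PAGE
    proxy_pass http://example.com;
}

# proxy_pass WITH a URI part: the portion of the request URI matching
# the location prefix is replaced by that URI
location /stats/ {
    # /stats/SOME/PAGE  ->  /metrics/SOME/PAGE
    proxy_pass http://example.com/metrics/;
}

# Mismatched trailing slashes produce joined paths, which is why the
# "location /stats/" + "proxy_pass .../stats" combination fails
location /stats/ {
    # /stats/SOME/PAGE  ->  /statsSOME/PAGE  (note the missing slash)
    proxy_pass http://example.com/stats;
}
```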
Please read for details: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass -- Igor Sysoev http://nginx.com/support.html From andrew at nginx.com Mon Nov 19 07:14:31 2012 From: andrew at nginx.com (Andrew Alexeev) Date: Mon, 19 Nov 2012 11:14:31 +0400 Subject: http_flv_module not working, any idea please In-Reply-To: <7222ea6d60410a21474d17379b0e44eb.NginxMailingListEnglish@forum.nginx.org> References: <20121115091329.GD40452@mdounin.ru> <7222ea6d60410a21474d17379b0e44eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <52BBD43E-DB08-4BE2-8695-416BCC454BDA@nginx.com> On Nov 16, 2012, at 11:32 PM, kalasnjikov wrote: > Flv streaming still does not working > What I doing wrong? > server { > listen xxx.xxx.xxx.xxx:80; > error_log /var/log/nginx/hp8el1; > access_log /var/log/nginx/hp8al1; > server_name domain.com www.domain.com; > client_max_body_size 1024m; > root /home/domain/public_html; > > > location / { > location ~ \.flv$ { flv; } > error_page 404 = @apache; > error_page 403 = @apache; > } > > location @apache { > proxy_pass http://xxx.xxx.xxx.xxx:8081; > } > > } What exactly is not working? :) The configuration seems to be valid; flv videos existing on the local storage should be pseudo-streamed by nginx just fine. If you mean pseudo-streaming does not work for proxied requests, then apparently something's wrong with the Apache configuration. Do you have flv pseudo-streaming in Apache as well? Cheers From nginx-forum at nginx.us Mon Nov 19 08:24:10 2012 From: nginx-forum at nginx.us (goelvivek) Date: Mon, 19 Nov 2012 03:24:10 -0500 Subject: When nginx will send Connection timed out error ? Message-ID: I am using nginx as a proxy server. I got the following error once: upstream timed out (110: Connection timed out) while connecting to upstream, In what situation will I get this error? Example: 1. Upstream didn't accept the connection within x time. Is there any other case when I can get this error, or is it the only case when I will get this error?
Example: 1. Saving or reading to tmp file buffer was slow. 2. Nginx was running out of connection? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232970,232970#msg-232970 From pinakee at vvidiacom.com Mon Nov 19 08:52:55 2012 From: pinakee at vvidiacom.com (Pinakee Biswas) Date: Mon, 19 Nov 2012 14:22:55 +0530 Subject: As Proxy Message-ID: <003901cdc633$46e44850$d4acd8f0$@vvidiacom.com> Hi, We are trying to use nginx as a proxy. We have a pylons framework for our application which uses paster to deliver the resources. PFA the configuration we have for nginx. Somehow the css files and images are not getting delivered. We have tried the following mechanisms: 1. Configuring Nginx as a pure proxy where in all the resources would be delivered by paster. 2. Configuring Nginx such that nginx delivers the static resources whereas the rest are delivered by paste. Both are not working for us. Somehow the static resources (like css, images) are not getting delivered. We were using Apache earlier where the option 1 (as mentioned above was working fine). We are new to nginx. We would really appreciate if you could please let us know what we are doing wrong and what the reason for the above could be. Looking forward to your response and help. Thanks, Pinakee Biswas -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 3888 bytes Desc: not available URL: From nginx-forum at nginx.us Mon Nov 19 09:35:36 2012 From: nginx-forum at nginx.us (arnoldguo) Date: Mon, 19 Nov 2012 04:35:36 -0500 Subject: Caucho Resin: faster than nginx? In-Reply-To: References: Message-ID: I use Xeon E5 32core CPU with 10G NIC, for empty page,get nearly 400-500k rps on nginx 1.2.4, How to get 700k rps or more(1000k rps)? 
Arnold Liu Lantao Wrote: ------------------------------------------------------- > We are making a nginx benchmark under 10Gbe network. For an empty > page, we > get about 700k rps of nginx, in compare with about 100k rps of resin > pro. > > In caucho's test, they use i7 4 core / 8 HT, 2.8 GHZ, 8Meg Cache, 8 GB > RAM, > and I use duo intel e5645. I think the result can be improved through > some > tuning. > > We tuned server configuration and nginx configuration, but didn't tune > much > on resin. We didn't find any configuration of caucho's testing, > neither > nginx nor resin. so i wonder how to make the rps of resin go above > 100k? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,229872,232972#msg-232972 From nginx-forum at nginx.us Mon Nov 19 09:37:30 2012 From: nginx-forum at nginx.us (goelvivek) Date: Mon, 19 Nov 2012 04:37:30 -0500 Subject: Does log phase counts as active connection? Message-ID: <2e36acae1581f02579f358f47b8f3b06.NginxMailingListEnglish@forum.nginx.org> Hi, Assume my server datastorage is not responding where I am logging access.log/error.log and some analytics logs using lua. If I am running nginx as single worker and I have set a limit of 1024 max connections in nginx and 1000 connections are waiting in log phase. How many connection nginx can accept at that time? 24 more or 1024 more? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232973,232973#msg-232973 From vbart at nginx.com Mon Nov 19 10:50:36 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 19 Nov 2012 14:50:36 +0400 Subject: Advertise NPN without SPDY In-Reply-To: <5f545eebd059e00e1634f224a097668a.NginxMailingListEnglish@forum.nginx.org> References: <5f545eebd059e00e1634f224a097668a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211191450.36759.vbart@nginx.com> On Sunday 18 November 2012 11:26:44 eiji-gravion wrote: > Hello, > > Is there a way for nginx to advertise the NPN extension without the use of > SPDY? 
Currently no, there is not. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From ian.hobson at ntlworld.com Mon Nov 19 16:02:56 2012 From: ian.hobson at ntlworld.com (Ian Hobson) Date: Mon, 19 Nov 2012 16:02:56 +0000 Subject: Configuring nginx for white label Message-ID: <50AA5830.6020802@ntlworld.com> Hi all, I am trying to configure a "white-label" set up where the reseller supplies only the files he needs to over-ride the basic installation - a few images, perhaps a css file and an occasional .php file. All files he does NOT supply should be served from a master location. Say master is hosted in /var/www/master/htdocs and reseller is hosted in /var/www/reseller/htdocs I have tried try_files $uri, $uri/ ../../master/htdocs/$uri ../../master/htdocs/$uri/ =404; This was attractive because I only named the reseller once in the server line. Sadly it doesn't pick up the master files even though all directories have read permissions. I have tried jumping locations..... root /var/www/reseller/htdocs; try_files $uri $uri/ @master location @master { root /var/www/master/htdocs; try_files $uri $uri/ @reseller; } location @reseller { root /var/www/reseller_X/htdocs; try_files /index.php?$args; } Oh so near. This served static master files OK, but not master php files! I never got index.php called. Missing files and master php files produced "No input file specified." What is the correct approach for what I want to do? Thanks Ian -- Ian Hobson 31 Sheerwater, Northampton NN3 5HU, Tel: 01604 513875 Preparing eBooks for Kindle and ePub formats to give the best reader experience. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthieu.tourne at gmail.com Mon Nov 19 17:49:34 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Mon, 19 Nov 2012 09:49:34 -0800 Subject: Advertise NPN without SPDY In-Reply-To: <201211191450.36759.vbart@nginx.com> References: <5f545eebd059e00e1634f224a097668a.NginxMailingListEnglish@forum.nginx.org> <201211191450.36759.vbart@nginx.com> Message-ID: Hello, On Mon, Nov 19, 2012 at 2:50 AM, Valentin V. Bartenev wrote: > On Sunday 18 November 2012 11:26:44 eiji-gravion wrote: > > Hello, > > > > Is there a way for nginx to advertise the NPN extension without the use > of > > SPDY? > I think you can do this fairly easily, mostly using nginx conf and eventually 3rd party modules. You could use headers_more[1] to always return the Alternate Protocol headers. Or this Lua snippet [2] (see ngx_lua module [3]), which will only adverstise SPDY if a client is not already using it. Happy hacking, Matthieu. [1] http://wiki.nginx.org/HttpHeadersMoreModule [2] https://gist.github.com/4112241 [3] http://wiki.nginx.org/HttpLuaModule -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.tourne at gmail.com Mon Nov 19 17:55:37 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Mon, 19 Nov 2012 09:55:37 -0800 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: <201211122226.43165.vbart@nginx.com> Message-ID: Hello, On Tue, Nov 13, 2012 at 8:40 AM, CM Fields wrote: > Valentin, > > Thanks for the patch. I put the new code in place this morning. The server > will need to run for a few days up to a week before I might see > the possibility of a lingering socket from a bad client. I will report what > I find. > > Thank you very much. > > > On Mon, Nov 12, 2012 at 1:26 PM, Valentin V. Bartenev wrote: > >> On Friday 09 November 2012 22:08:47 CM Fields wrote: >> > [...] >> > I just wanted to report this issue in case someone else had the same >> > problem. 
I wish I had more information, but at this time I am not sure >> what >> > the client is sending to cause the hanging open sockets. If there is any >> > other information that will help or if a new patch needs testing please >> > tell me. >> > >> > Have a great weekend! >> >> Hello, thank you for the report. >> >> Could you please test the new revision of spdy patch: >> http://nginx.org/patches/spdy/patch.spdy-53.txt ? >> >> Any feedback from the latest spdy patch ? Valentin, could you drop a line on what changed since patch 52 ? Thank you, Matthieu. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmfileds at gmail.com Mon Nov 19 18:22:43 2012 From: cmfileds at gmail.com (CM Fields) Date: Mon, 19 Nov 2012 13:22:43 -0500 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: <201211122226.43165.vbart@nginx.com> Message-ID: Matthieu, The SPDY nginx patch has been in place for almost a week now with over 310,000 public connections. I have not seen any issues with CLOSE_WAIT states at all and the server has been perfectly stable. The patch works great. For reference, this is the source build we are using: Nginx 1.3.8 OpenSSL 1.0.1c SPDY patch.spdy-53.txt OpenBSD v5.2 (default install) and also FreeBSD 9.1-RC3 (default install) On Mon, Nov 19, 2012 at 12:55 PM, Matthieu Tourne wrote: > Hello, > > > On Tue, Nov 13, 2012 at 8:40 AM, CM Fields wrote: > >> Valentin, >> >> Thanks for the patch. I put the new code in place this morning. The >> server will need to run for a few days up to a week before I might see >> the possibility of a lingering socket from a bad client. I will report what >> I find. >> >> Thank you very much. >> >> >> On Mon, Nov 12, 2012 at 1:26 PM, Valentin V. Bartenev wrote: >> >>> On Friday 09 November 2012 22:08:47 CM Fields wrote: >>> > [...] >>> > I just wanted to report this issue in case someone else had the same >>> > problem. 
I wish I had more information, but at this time I am not sure >>> what >>> > the client is sending to cause the hanging open sockets. If there is >>> any >>> > other information that will help or if a new patch needs testing please >>> > tell me. >>> > >>> > Have a great weekend! >>> >>> Hello, thank you for the report. >>> >>> Could you please test the new revision of spdy patch: >>> http://nginx.org/patches/spdy/patch.spdy-53.txt ? >>> >>> > Any feedback from the latest spdy patch ? > > Valentin, could you drop a line on what changed since patch 52 ? > > Thank you, > Matthieu. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Nov 19 18:43:46 2012 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 19 Nov 2012 19:43:46 +0100 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: , <201211122226.43165.vbart@nginx.com>, , Message-ID: > Any feedback from the latest spdy patch ? > > Valentin, could you drop a line on what changed since patch 52 ? >From http://nginx.org/patches/spdy/CHANGES.txt 2012-11-12 Version 53 - The headers compression is switched off by default (to avoid the possibility of the CRIME attack) - Fixed support for little-endian ARM (as well as all little-endian platforms with strict alignment requirements) - Fixed possible memory and socket leak Regards, Lukas From matthieu.tourne at gmail.com Mon Nov 19 19:11:20 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Mon, 19 Nov 2012 11:11:20 -0800 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: <201211122226.43165.vbart@nginx.com> Message-ID: On Mon, Nov 19, 2012 at 10:43 AM, Lukas Tribus wrote: > > > Any feedback from the latest spdy patch ? > > > > Valentin, could you drop a line on what changed since patch 52 ? 
> > > From http://nginx.org/patches/spdy/CHANGES.txt > 2012-11-12 Version 53 > - The headers compression is switched off by default (to avoid the > possibility > of the CRIME attack) > - Fixed support for little-endian ARM (as well as all little-endian > platforms > with strict alignment requirements) > - Fixed possible memory and socket leak > > Perfect, thank you! I will apply the patch as well. So far we haven't seen much issues using patch 52, doing mostly a SPDY / SSL termination as early as possible and not that much more logic. Matthieu. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Nov 19 19:24:36 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 19 Nov 2012 23:24:36 +0400 Subject: SPDY sockets staying open indefinitely In-Reply-To: References: Message-ID: <201211192324.36129.vbart@nginx.com> On Monday 19 November 2012 23:11:20 Matthieu Tourne wrote: [...] > > Perfect, thank you! > > I will apply the patch as well. > So far we haven't seen much issues using patch 52, > doing mostly a SPDY / SSL termination as early as possible and not that > much more logic. > Feel free to try it. It's just a slightly fixed version of spdy 52. The goal is to provide stable version while I'm working on a better implementation. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From piotr.sikora at frickle.com Mon Nov 19 19:47:53 2012 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Mon, 19 Nov 2012 20:47:53 +0100 Subject: Advertise NPN without SPDY In-Reply-To: References: <5f545eebd059e00e1634f224a097668a.NginxMailingListEnglish@forum.nginx.org> <201211191450.36759.vbart@nginx.com> Message-ID: Hello, > I think you can do this fairly easily, mostly using nginx conf and > eventually 3rd party modules. You can't - NPN is advertised during SSL handshake. 
Best regards, Piotr Sikora < piotr.sikora at frickle.com > From matthieu.tourne at gmail.com Mon Nov 19 20:43:26 2012 From: matthieu.tourne at gmail.com (Matthieu Tourne) Date: Mon, 19 Nov 2012 12:43:26 -0800 Subject: SPDY sockets staying open indefinitely In-Reply-To: <201211192324.36129.vbart@nginx.com> References: <201211192324.36129.vbart@nginx.com> Message-ID: On Mon, Nov 19, 2012 at 11:24 AM, Valentin V. Bartenev wrote: > On Monday 19 November 2012 23:11:20 Matthieu Tourne wrote: > [...] > > > > Perfect, thank you! > > > > I will apply the patch as well. > > So far we haven't seen much issues using patch 52, > > doing mostly a SPDY / SSL termination as early as possible and not that > > much more logic. > > > > Feel free to try it. It's just a slightly fixed version of spdy 52. > > The goal is to provide stable version while I'm working on a better > implementation. > > Thank you for the heads up Valentin, For those who were wondering, here is the delta between spdy-52 and spdy-53, if my git-foo worked correctly : https://gist.github.com/4113700 Regards, Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 20 02:54:17 2012 From: nginx-forum at nginx.us (bach) Date: Mon, 19 Nov 2012 21:54:17 -0500 Subject: json files download In-Reply-To: <6e6372aec1d04a75a563db22d7507220.NginxMailingListEnglish@forum.nginx.org> References: <6e6372aec1d04a75a563db22d7507220.NginxMailingListEnglish@forum.nginx.org> Message-ID: I am having that exact same problem. I've update mime.types config file added application/json json; and restart Nginx a zillion times but still nothing changed! is there any extra step I should do? does that change take time, because of Nginx caching? 
thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227766,232991#msg-232991 From jdorfman at netdna.com Tue Nov 20 03:30:26 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 19 Nov 2012 19:30:26 -0800 Subject: json files download In-Reply-To: References: <6e6372aec1d04a75a563db22d7507220.NginxMailingListEnglish@forum.nginx.org> Message-ID: @bach it could be a content-disposition header (attachment) being added. Can you paste your nginx.conf/vhost file? Regards, Justin Dorfman NetDNA ? The Science of Acceleration? On Mon, Nov 19, 2012 at 6:54 PM, bach wrote: > I am having that exact same problem. > > I've update mime.types config file > added > application/json json; > > and restart Nginx a zillion times but still nothing changed! > > is there any extra step I should do? > does that change take time, because of Nginx caching? > > thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227766,232991#msg-232991 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 20 03:32:37 2012 From: nginx-forum at nginx.us (jansegre) Date: Mon, 19 Nov 2012 22:32:37 -0500 Subject: Proxying when a cookie is present Message-ID: <071436da7eee0a2ea7ee4e5eb158bcc0.NginxMailingListEnglish@forum.nginx.org> I'm trying to configure nginx to proxy_pass to a hot-test server when a certain cookie is present. 
It's expected to work like this: - accessing /hot will set a cookie hot=1 and redirect to / - accessing /prod will set the cookie hot=0 and redirect to / - when the cookie hot=1 is present, requests should be proxied to the test server - otherwise regular static files and other proxies will take place What I couldn't do so far: - an if that takes priority over a location, so static files won't be served before the proxy to the test server takes place I managed to make this work by doing an internal proxy to isolate my configurations in different server directives, more or less like this: [code] server { listen 80; server_name myserver.com _; location = /hot { add_header Set-Cookie "hot=1;Max-Age=3600"; rewrite ^ / redirect; } location = /prod { add_header Set-Cookie "hot=0;Max-Age=3600"; rewrite ^ / redirect; } location ^~ /somepage/ { if ($http_cookie ~* "hot=1") { proxy_pass http://mytestserver.com; break; } # this is where the hacking begins proxy_set_header X-Real-Host $host; proxy_set_header Host somepage; proxy_pass http://localhost:666; } } server { listen 666; server_name somepage; allow 127.0.0.1; deny all; location ^~ /somepage/ { # here I can peacefully write what I needed before # but when I try to do so it seems to have priority over the if directive location ~* /somepage/(.+)\.(css|js)$ { expires 1h; alias /path/to/files/$1.$2; } location ~* /somepage/(.+)\.(jpg|jpeg|png|gif|bmp|ico|pdf|flv|swf|woff)$ { expires 7d; alias /path/to/files/$1.$2; } proxy_set_header Host $http_x_real_host; proxy_pass http://myjavaloadbalancer.com:8080; } } [/code] That's basically it; there are other locations and details like access_log, but I think the above is the relevant part. I'd really appreciate a better way to do this.
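A hedged alternative to the double-server workaround above, assuming the host names from the posted config resolve: select the upstream with a map on the cookie, so ordinary location matching (including the static-file locations) still applies before proxying:

```nginx
# Sketch only: "map" runs at http level and avoids "if" inside locations.
map $cookie_hot $backend {
    default "http://myjavaloadbalancer.com:8080";
    "1"     "http://mytestserver.com";
}

server {
    listen 80;
    server_name myserver.com;

    # Regex locations are checked before a plain prefix location is used,
    # so static files are served here no matter what the cookie says.
    location ~* ^/somepage/(.+\.(?:css|js))$ {
        expires 1h;
        alias /path/to/files/$1;
    }

    location /somepage/ {
        proxy_set_header Host $host;
        # proxy_pass with a variable needs a "resolver" directive
        # (or upstream{} blocks named in the map values) to resolve hosts.
        proxy_pass $backend;
    }
}
```

Note the prefix location is deliberately written without "^~": with "^~" the prefix match would short-circuit the regex locations and the static files would be proxied too.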
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232993,232993#msg-232993 From nginx-forum at nginx.us Tue Nov 20 03:48:44 2012 From: nginx-forum at nginx.us (bach) Date: Mon, 19 Nov 2012 22:48:44 -0500 Subject: json files download In-Reply-To: References: Message-ID: <0736a899072f91e6865439881dac268e.NginxMailingListEnglish@forum.nginx.org> Thanks Justin, here's how my config looks like. Please note the 2 Pyramid apps at the bottom, from which I am trying to server the json file. Anything obviously wrong with that setup that could be affecting the header? thanks ======================== user www www; worker_processes 1; #error_log /var/log/nginx/error.log; #error_log /var/log/nginx/error.log notice; #error_log /var/log/nginx/error.log info; #pid /var/db/nginx/nginx.pid; events { # After increasing this value You probably should increase limit # of file descriptors (for example in start_precmd in startup script) worker_connections 1024; } http { include /opt/local/etc/nginx/mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/host.access.log main; location / { root share/examples/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root share/examples/nginx/html; } } #first pyramid App - App1 server { listen 80; server_name app1.com www.app1.com; access_log /home/app1.com/web/env/access.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header 
X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://127.0.0.1:5000; proxy_redirect off; } } # my second pyramid App - App2 server { listen 80; server_name app2.com www.app2.com; access_log /home/app2.com/logs/access.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 60s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffering off; proxy_temp_file_write_size 64k; proxy_pass http://127.0.0.1:5003; proxy_redirect off; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227766,232994#msg-232994 From jdorfman at netdna.com Tue Nov 20 04:08:46 2012 From: jdorfman at netdna.com (Justin Dorfman) Date: Mon, 19 Nov 2012 20:08:46 -0800 Subject: json files download In-Reply-To: <0736a899072f91e6865439881dac268e.NginxMailingListEnglish@forum.nginx.org> References: <0736a899072f91e6865439881dac268e.NginxMailingListEnglish@forum.nginx.org> Message-ID: @bach who is the owner & what permissions are on this file?: /opt/local/etc/nginx/mime.types; please run: ls -alh /opt/local/etc/nginx/mime.types Also can you post your mime.types on pastebin or gist? Regards, Justin Dorfman NetDNA ? The Science of Acceleration? Email / gtalk: jdorfman at netdna.com M: 818.485.1458 Skype: netdna-justind Twitter: @jdorfman www.NetDNA.com | www.MaxCDN.com @NetDNA | @MaxCDN On Mon, Nov 19, 2012 at 7:48 PM, bach wrote: > Thanks Justin, > > here's how my config looks like. Please note the 2 Pyramid apps at the > bottom, from which I am trying to server the json file. > Anything obviously wrong with that setup that could be affecting the > header? 
> > thanks > > ======================== > user www www; > worker_processes 1; > > #error_log /var/log/nginx/error.log; > #error_log /var/log/nginx/error.log notice; > #error_log /var/log/nginx/error.log info; > > #pid /var/db/nginx/nginx.pid; > > > events { > # After increasing this value You probably should increase limit > # of file descriptors (for example in start_precmd in startup script) > worker_connections 1024; > } > > > http { > include /opt/local/etc/nginx/mime.types; > default_type application/octet-stream; > > #log_format main '$remote_addr - $remote_user [$time_local] > "$request" > ' > # '$status $body_bytes_sent "$http_referer" ' > # '"$http_user_agent" "$http_x_forwarded_for"'; > > #access_log /var/log/nginx/access.log main; > > sendfile on; > #tcp_nopush on; > > #keepalive_timeout 0; > keepalive_timeout 65; > > #gzip on; > > server { > listen 80; > server_name localhost; > > #charset koi8-r; > > #access_log /var/log/nginx/host.access.log main; > > location / { > root share/examples/nginx/html; > index index.html index.htm; > } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html > # > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root share/examples/nginx/html; > } > } > > #first pyramid App - App1 > server { > listen 80; > server_name app1.com www.app1.com; > > access_log /home/app1.com/web/env/access.log; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > > client_max_body_size 10m; > client_body_buffer_size 128k; > proxy_connect_timeout 60s; > proxy_send_timeout 90s; > proxy_read_timeout 90s; > proxy_buffering off; > proxy_temp_file_write_size 64k; > proxy_pass http://127.0.0.1:5000; > proxy_redirect off; > } > } > > # my second pyramid App - App2 > server { > listen 80; > server_name app2.com www.app2.com; > > access_log 
/home/app2.com/logs/access.log; > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > > client_max_body_size 10m; > client_body_buffer_size 128k; > proxy_connect_timeout 60s; > proxy_send_timeout 90s; > proxy_read_timeout 90s; > proxy_buffering off; > proxy_temp_file_write_size 64k; > proxy_pass http://127.0.0.1:5003; > proxy_redirect off; > } > } > } > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,227766,232994#msg-232994 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 20 04:16:12 2012 From: nginx-forum at nginx.us (bach) Date: Mon, 19 Nov 2012 23:16:12 -0500 Subject: json files download In-Reply-To: References: Message-ID: <19f6fb69d50e11459a9a54670cee4042.NginxMailingListEnglish@forum.nginx.org> Sure, this is the mime.types file https://gist.github.com/4115943 and the permissions: # ls -alh /opt/local/etc/nginx/mime.types -rw-r--r-- 1 root root 3.2K Nov 20 02:43 /opt/local/etc/nginx/mime.types thanks mate for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227766,232996#msg-232996 From edigarov at qarea.com Tue Nov 20 11:00:29 2012 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 20 Nov 2012 13:00:29 +0200 Subject: server_name Message-ID: <50AB62CD.1050803@qarea.com> Good day, here is my config: server { listen 80; server_name ~^(.*).site.com$; location / { add_before_body /Header.html; add_after_body /Footer.html; autoindex on; autoindex_exact_size off; try_files /subdoms/$1 @fallback; } location @fallback { root /site.com/; } } I don't know exactly why, but for some reason the capture in server_name does not seem to work: whatever subdomain is requested, everything only goes to the fallback.
Please advise how to write this correctly so that it works. Thanks. -- With best regards, Gregory Edigarov From citrin at citrin.ru Tue Nov 20 11:09:32 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 20 Nov 2012 15:09:32 +0400 Subject: server_name In-Reply-To: <50AB62CD.1050803@qarea.com> References: <50AB62CD.1050803@qarea.com> Message-ID: <50AB64EC.3010200@citrin.ru> On 11/20/12 15:00, Gregory Edigarov wrote: > > > server_name ~^(.*).site.com$; The dot here needs to be escaped: server_name ~^(.*)\.site\.com$; Better still, use named captures: server_name ~^(?<subdom>.*)\.site\.com$; ... try_files /subdoms/$subdom @fallback; -- Anton Yuzhaninov From edigarov at qarea.com Tue Nov 20 11:31:51 2012 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 20 Nov 2012 13:31:51 +0200 Subject: server_name In-Reply-To: <50AB64EC.3010200@citrin.ru> References: <50AB62CD.1050803@qarea.com> <50AB64EC.3010200@citrin.ru> Message-ID: <50AB6A27.40403@qarea.com> On 11/20/2012 01:09 PM, Anton Yuzhaninov wrote: > On 11/20/12 15:00, Gregory Edigarov wrote: >> >> server_name ~^(.*).site.com$; > > The dot here needs to be escaped: > > server_name ~^(.*)\.site\.com$; > > Better still, use named captures: > > server_name ~^(?<subdom>.*)\.site\.com$; > ... > try_files /subdoms/$subdom @fallback; > Thanks, but for some reason it doesn't work. Everything still goes to the fallback. The path /subdoms/$subdom does exist on this server; how else should it be checked? -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Tue Nov 20 11:35:52 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 20 Nov 2012 15:35:52 +0400 Subject: server_name In-Reply-To: <50AB6A27.40403@qarea.com> References: <50AB62CD.1050803@qarea.com> <50AB64EC.3010200@citrin.ru> <50AB6A27.40403@qarea.com> Message-ID: <50AB6B18.7010004@citrin.ru> On 11/20/12 15:31, Gregory Edigarov wrote: >> >> try_files /subdoms/$subdom @fallback; >> > Thanks, but for some reason it doesn't work. > Everything still goes to the fallback. > The path /subdoms/$subdom does exist on this server; how else should it be checked? For try_files to work here, try it like this instead: root /subdoms/$subdom; ... try_files $uri $uri/ @fallback; -- Anton Yuzhaninov From edigarov at qarea.com Tue Nov 20 11:46:19 2012 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 20 Nov 2012 13:46:19 +0200 Subject: server_name In-Reply-To: <50AB6B18.7010004@citrin.ru> References: <50AB62CD.1050803@qarea.com> <50AB64EC.3010200@citrin.ru> <50AB6A27.40403@qarea.com> <50AB6B18.7010004@citrin.ru> Message-ID: <50AB6D8B.2090200@qarea.com> On 11/20/2012 01:35 PM, Anton Yuzhaninov wrote: > On 11/20/12 15:31, Gregory Edigarov wrote: >>> >>> try_files /subdoms/$subdom @fallback; >>> >> Thanks, but for some reason it doesn't work. >> Everything still goes to the fallback. >> The path /subdoms/$subdom does exist on this server; how else >> should it be checked? > > For try_files to work here, try it like this instead: > > root /subdoms/$subdom; > ... > try_files $uri $uri/ @fallback; > Still no luck. It looks as if $subdom comes out empty. It didn't help. From edigarov at qarea.com Tue Nov 20 11:48:59 2012 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 20 Nov 2012 13:48:59 +0200 Subject: server_name In-Reply-To: <50AB6D8B.2090200@qarea.com> References: <50AB62CD.1050803@qarea.com> <50AB64EC.3010200@citrin.ru> <50AB6A27.40403@qarea.com> <50AB6B18.7010004@citrin.ru> <50AB6D8B.2090200@qarea.com> Message-ID: <50AB6E2B.8000408@qarea.com> On 11/20/2012 01:46 PM, Gregory Edigarov wrote: > On 11/20/2012 01:35 PM, Anton Yuzhaninov wrote: >> On 11/20/12 15:31, Gregory Edigarov wrote: >>>> >>>> try_files /subdoms/$subdom @fallback; >>>> >>> Thanks, but for some reason it doesn't work. >>> Everything still goes to the fallback. >>> The path /subdoms/$subdom does exist on this server; how else >>> should it be checked? >> >> For try_files to work here, try it like this instead: >> >> root /subdoms/$subdom; >> ... >> try_files $uri $uri/ @fallback; >> > Still no luck. It looks as if $subdom comes out empty. It didn't help. P.S. The directory /subdoms/$subdom definitely (100%) exists and really is a directory. From nginx-forum at nginx.us Tue Nov 20 11:49:53 2012 From: nginx-forum at nginx.us (nackgr) Date: Tue, 20 Nov 2012 06:49:53 -0500 Subject: index.php dont run ?! Message-ID: <4f1467ff73ccbbb9ffcb495bf0e90762.NginxMailingListEnglish@forum.nginx.org> Hello, I use CentOS 6.3 with nginx, php-FPM and FastCGI. I have a website, dafuq.gr. When I open dafuq.gr it gives an error, but dafuq.gr/index.php opens fine! Any idea? I checked the vhost file in /etc/nginx/sites-available; it looks like this: server { listen *:80; server_name dafuq.gr www.dafuq.gr; root /var/www/dafuq.gr/web; index index.php; error_page 400 /error/400.html; error_page 401 /error/401.html; error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 405 /error/405.html; error_page 500 /error/500.html; error_page 502 /error/502.html; error_page 503 /error/503.html; recursive_error_pages on; location = /error/400.html { internal; } location = /error/401.html { internal; } location = /error/403.html { internal; } location = /error/404.html { internal; } location = /error/405.html { internal; } location = /error/500.html { internal; } location = /error/502.html { internal; } location = /error/503.html { internal; } error_log /var/log/ispconfig/httpd/dafuq.gr/error.log; access_log /var/log/ispconfig/httpd/dafuq.gr/access.log combined; ## Disable .htaccess and other hidden files location ~ /\.
{ deny all; access_log off; log_not_found off; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location /stats { index index.html index.php; auth_basic "Members Only"; auth_basic_user_file /var/www/clients/client1/web1/.htpasswd_stats; } location ^~ /awstats-icon { alias /usr/share/awstats/icon; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9010; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; } location /cgi-bin/ { try_files $uri =404; include /etc/nginx/fastcgi_params; root /var/www/clients/client1/web1; gzip off; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_index index.cgi; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } rewrite ^/rss /rss.php last; rewrite ^/submit_video /index.php?task=submit&type=videos last; rewrite ^/submit_image /index.php?task=submit&type=images last; rewrite ^/error /index.php?task=error last; rewrite ^/page-([0-9]+) /index.php?page=$1 last; rewrite ^/page/([0-9]+) /index.php?task=view_page&id=$1 last; rewrite ^/page/([^/.]+) /index.php?task=view_page&name=$1 last; rewrite ^/top-views /index.php?task=top-views last; rewrite ^/top-rated /index.php?task=top-rated last; rewrite ^/category/([a-zA-Z0-9-]+)/page/([0-9]+) /index.php?task=category&category=$1&page=$2 last; rewrite ^/category/([a-zA-Z0-9-]+) /index.php?task=category&category=$1 last; rewrite ^/connect/([a-zA-Z0-9-]+) /index.php?task=connect&type=$1 last; rewrite ^/login /index.php?task=login last; rewrite ^/callback/([a-zA-Z0-9-]+) /index.php?task=callback&type=$1 last; rewrite ^/logout /index.php?task=logout last; rewrite ^/user/([a-zA-Z0-9_-]+)/([a-zA-Z0-9-]+) /index.php?task=user&action=$1&code=$2 last; rewrite ^/user/([a-zA-Z0-9-]+) /index.php?task=user&action=$1 
last; rewrite ^/user /index.php?task=user last; rewrite ^/search/([a-zA-Z0-9-]+)/page/([0-9]+) /index.php?task=search&s=$1&page=$2 last; rewrite ^/search/([a-zA-Z0-9-]+) /index.php?task=search&s=$1&page=1 last; rewrite ^/search /index.php?task=search last; rewrite ^/view/([0-9]+) /index.php?task=view&id=$1 last; rewrite ^/view/([0-9]+).([^/.]+) /index.php?task=view&id=$1 last; if (!-f $request_filename){ set $rule_23 1$rule_23; } if (!-d $request_filename){ set $rule_23 2$rule_23; } if ($rule_23 = "21"){ rewrite ^/([a-zA-Z0-9-]+).([a-zA-Z0-9]+) /index.php?task=view&name=$1 last; } if (!-f $request_filename){ set $rule_24 1$rule_24; } if (!-d $request_filename){ set $rule_24 2$rule_24; } if ($rule_24 = "21"){ rewrite ^/([a-zA-Z0-9-]+)/([0-9]+) /index.php?task=view&name=$1&user=$2 last; } if (!-f $request_filename){ set $rule_25 1$rule_25; } if (!-d $request_filename){ set $rule_25 2$rule_25; } if ($rule_25 = "21"){ rewrite ^/([a-zA-Z0-9-]+) /index.php?task=view&name=$1 last; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233008,233008#msg-233008 From ru at nginx.com Tue Nov 20 13:44:34 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 20 Nov 2012 17:44:34 +0400 Subject: server_name In-Reply-To: <50AB6E2B.8000408@qarea.com> References: <50AB62CD.1050803@qarea.com> <50AB64EC.3010200@citrin.ru> <50AB6A27.40403@qarea.com> <50AB6B18.7010004@citrin.ru> <50AB6D8B.2090200@qarea.com> <50AB6E2B.8000408@qarea.com> Message-ID: <20121120134434.GC43746@lo0.su> On Tue, Nov 20, 2012 at 01:48:59PM +0200, Gregory Edigarov wrote: > On 11/20/2012 01:46 PM, Gregory Edigarov wrote: > > On 11/20/2012 01:35 PM, Anton Yuzhaninov wrote: > >> On 11/20/12 15:31, Gregory Edigarov wrote: > >>>> > >>>> try_files /subdoms/$subdom @fallback; > >>>> > >>> ???????, ?? ??????-?? ?? ????????. > >>> ??? ?????? ?? fallback. > >>> /subdoms/$subdom - ???? ?? ???? ???????? ?????????????, ????? ??? > >>> ???? ????????? > >> > >> ??? ???????? try_file ??????? ??? ?? 
???: > >> > >> root /subdoms/$subdom; > >> ... > >> try_file $uri $uri/ @fallback; > >> > > ???-????? ?? ?????. ?????? $subdom ???? ??????. ?? ?????????? > ? ????????? /subdoms/$subdom - 100% ?????????? ? ???????? ???????????? ? ???????? root /subdoms/$subdom ??????? /subdoms/$subdom ?????? ???????????? ?? ????? ???????? ???????. ?? ????? ? ??? ??? ????? ???????????? "try_files /subdoms/$subdom" ??? ?? ????????, ??? ???, ?.?. try_files ???? ???????????? root, ??????? ? ??? ?? ??????, ? ?? ???? ??? ????? /. ???? ??????? ?????????, ?? ?????? ????????: server { server_name ~^(.*).example.com$; location / { root /tmp/foo/subdoms/$1; try_files $uri $uri/ =404; } $ grep ^ /tmp/foo/subdoms/test*/* /tmp/foo/subdoms/test1/foo:foo /tmp/foo/subdoms/test1/index.html:this is test1 /tmp/foo/subdoms/test2/index.html:this is test2 $ curl http://test1.example.com:8000/ this is test1 $ curl http://test2.example.com:8000/ this is test2 $ curl http://test1.example.com:8000/foo foo (?????????? ? ????? ??????????? ?????, ????? ????? ?????? ?? ??? ???????? ?? ????????? ??? ??????? ?????.) From gautier.difolco at gmail.com Tue Nov 20 13:44:57 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 20 Nov 2012 14:44:57 +0100 Subject: Routing verbose logging Message-ID: Hi all, I have to have a precise trace of which rules are used to proccess my request and which headers are set. Is their a way to print that? I tried the debug-mode but it doesn't help me. For your help, Thanks by advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Nov 20 14:08:39 2012 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 20 Nov 2012 18:08:39 +0400 Subject: Routing verbose logging In-Reply-To: References: Message-ID: <201211201808.39201.vbart@nginx.com> On Tuesday 20 November 2012 17:44:57 Gautier DI FOLCO wrote: > Hi all, > > I have to have a precise trace of which rules are used to proccess my > request > and which headers are set. > Is their a way to print that? I tried the debug-mode but it doesn't help > me. > http://nginx.org/r/rewrite_log http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From gautier.difolco at gmail.com Tue Nov 20 14:13:24 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 20 Nov 2012 15:13:24 +0100 Subject: Routing verbose logging In-Reply-To: <201211201808.39201.vbart@nginx.com> References: <201211201808.39201.vbart@nginx.com> Message-ID: 2012/11/20 Valentin V. Bartenev > On Tuesday 20 November 2012 17:44:57 Gautier DI FOLCO wrote: > > Hi all, > > > > I have to have a precise trace of which rules are used to proccess my > > request > > and which headers are set. > > Is their a way to print that? I tried the debug-mode but it doesn't help > > me. > > > > http://nginx.org/r/rewrite_log > http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 20 16:44:45 2012 From: nginx-forum at nginx.us (crirus) Date: Tue, 20 Nov 2012 11:44:45 -0500 Subject: MP4 pseudostreaming - seek delay In-Reply-To: References: Message-ID: Hello Did you find any solution for large moov files? 
Regards Cris Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223375,232998#msg-232998 From nginx-forum at nginx.us Tue Nov 20 18:00:30 2012 From: nginx-forum at nginx.us (dagr) Date: Tue, 20 Nov 2012 13:00:30 -0500 Subject: MP4 pseudostreaming - seek delay In-Reply-To: References: Message-ID: <341359c1823666ef3817f4cae9e2f40d.NginxMailingListEnglish@forum.nginx.org> No, I gave up and still use FLV. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,223375,233023#msg-233023 From nginx-forum at nginx.us Tue Nov 20 20:11:51 2012 From: nginx-forum at nginx.us (shmapty) Date: Tue, 20 Nov 2012 15:11:51 -0500 Subject: proxy_cache_valid for zero seconds In-Reply-To: References: Message-ID: Put another way -- can I store/cache all content from the proxied upstream (up to the limits defined in proxy_cache_path), but serve from the cache only when the proxied upstream fails (e.g. timeout, error)? We have content that should be dynamic, and hence every request should be transparently proxied. However, I want to protect against the situation when the upstream is down or having trouble. Serving a stale response (though normally undesirable) is better than returning an error. Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232815,233027#msg-233027 From monthadar at gmail.com Tue Nov 20 21:29:04 2012 From: monthadar at gmail.com (Monthadar Al Jaberi) Date: Tue, 20 Nov 2012 22:29:04 +0100 Subject: nginx + fossil configuration problem Message-ID: Hi, I am new to nginx. My issue is with an nginx configuration that should proxy connections to a fossil server I am running in the background. This fossil server is serving a folder of fossils. I found an example configuration on the net and ended up with this: ....
server { listen 80; server_name locahost; location / { proxy_pass http://localhost:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } .... I started from the original nginx.conf file in /etc/nginx. I am running nginx on Arch Linux with minimum packages installed (no X). I am starting nginx with systemd. And for now I am starting fossil manually: fossil server /path/to/fossils/ From my host PC I seem to be able to visit my different fossil projects, 192.168.0.101/aaa and 192.168.0.101/bbb. But this seems to be accidental, because if I move this server block below the default server block it stops working. If I have it above, I can't seem to access the php location block that I added in the default server block; 192.168.0.101/index.php doesn't work. Testing from within the Arch Linux machine running nginx: localhost/ localhost/index.html localhost/index.php All of these work. But localhost/aaa doesn't work. If I run the following, it works: lynx localhost:8080/aaa It seems I am missing some last touch. I want to be able to do something like 192.168.0.101/fossil/aaa. Thank you for any advice!
attaching the whole nginx.conf: #user html; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name locahost; location / { proxy_pass http://localhost:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; #location /fossil/ { # proxy_pass http://localhost:8080/; # proxy_redirect off; # proxy_set_header Host $host; # proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #} location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; root /usr/share/nginx/html; include fastcgi.conf; } # deny access to .htaccess files, if Apache's document root # concurs with 
nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } -- Monthadar Al Jaberi From francis at daoine.org Tue Nov 20 23:24:38 2012 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Nov 2012 23:24:38 +0000 Subject: nginx + fossil configuration problem In-Reply-To: References: Message-ID: <20121120232438.GC18139@craic.sysops.org> On Tue, Nov 20, 2012 at 10:29:04PM +0100, Monthadar Al Jaberi wrote: Hi there, This isn't a full answer, but hopefully will point you in the right direction. > server { > listen 80; > server_name locahost; That is "locahost", not "localhost". That is the reason that the order of server{} blocks matters. > location / { > proxy_pass http://localhost:8080/; ... > From my host PC I seem to be able to visit my different fossil > projects 192.168.0.101/aaa and 192.168.0.101/bbb. If that much works, then you've got a good start. > But this seems to be accidental, because if I move this server block > under the default server blocks it stops working. Not quite: because you have the same "listen" directive in each block, whichever is first in the file *is* the default. (http://nginx.org/en/docs/http/server_names.html probably includes more than you want to know.) So: when this is the default server block, your fossil access works; when it isn't, it doesn't. That is down to how nginx chooses which one server block to use for this request. 
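If you would rather not depend on the order of the blocks in the file, you can mark the default explicitly (untested sketch; "fossil.example.com" is just a placeholder name):

```nginx
# Explicit default: used when no server_name matches the Host header.
server {
    listen 80 default_server;
    server_name localhost;
    # ... static files, php, etc. ...
}

# Placeholder name for the fossil proxy; pick any name you control.
server {
    listen 80;
    server_name fossil.example.com;
    location / {
        proxy_pass http://localhost:8080/;
    }
}
```

With default_server on the listen line, moving the blocks around no longer changes which one handles unmatched requests.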
> If I have it above I > cant seems to access the php location block in the default server > block that I added, 192.169.0.101/index.php don't work. One request is handled in one server block (usually chosen by comparing the Host: header with the server_name value), and then in one location within that server. Your configuration either uses too many server blocks, or ones with incorrect server_names. > Testing from withing the archlinux running nginx: > localhost/ > localhost/index.html > localhost/index.php Those will all use the one server block that has "server_name localhost" which, below, says "php goes to php-fpm.sock, all else goes to the filesystem". > All of these works. But localhost/aaa don't work. That will also use that same server block. So it will serve files from /usr/local/nginx/html/aaa. > If I run the > following it works: > > lynx localhost:8080/aaa That will use the fossil service directly, avoiding nginx. > It seems I am missing some last touch. I want to be able to do > something like 192.168.0.101/fossil/aaa. Decide exactly what url hierarchy you want to use to access nginx to reverse proxy to fossil. That means: which hostname and which /location prefix or prefixes. Then in the correct server{} block, add the location{} block with the proxy_pass stuff that you have that already works. If you want to use *different* hostnames to access fossil and not-fossil, then you will need to configure location{} blocks in different server{} blocks. If you want to use the *same* hostname to access fossil and not-fossil, then you will need to configure different location{} blocks in the same server{} block to tell nginx which urls should go to fossil and which ones should not. Briefly: move your (working) "location /" block into the "server_name localhost" server block, and change it to be (perhaps) "location /aaa". That might show whether you are moving in the right direction. 
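For the *same* hostname option, the combined server block might look something like this (untested sketch -- the /fossil/ prefix and paths are examples, adjust to taste):

```nginx
server {
    listen 80;
    server_name localhost;

    # Requests under /fossil/ are handed to the fossil server.
    # The trailing slash on proxy_pass replaces the matched
    # /fossil/ prefix with / before the request is passed upstream.
    location /fossil/ {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else is served from the filesystem.
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
```

Note that the backend then sees /aaa instead of /fossil/aaa, so any absolute links or redirects it generates may need adjusting.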
Good luck, f -- Francis Daly francis at daoine.org From agentzh at gmail.com Tue Nov 20 23:47:31 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 20 Nov 2012 15:47:31 -0800 Subject: [ANN] ngx_openresty devel version 1.2.4.9 released In-Reply-To: References: Message-ID: Hello, folks! I am happy to announce the new development version of ngx_openresty, 1.2.4.9: http://openresty.org/#Download Special thanks go to all our contributors and users for helping make this happen! Below is the complete change log for this release, as compared to the last (devel) release, 1.2.4.7: * upgraded LuaJIT to 2.0.0 final. * change logs: * upgraded LuaNginxModule to 0.7.5. * bugfix: ngx.req.clear_header() would result in memory invalid reads when removing the 21st, 41st, 61st (and etc) request headers. thanks Umesh Sirsiwal for reporting this issue. * bugfix: ngx.log() would truncate the log messages which have null characters ("\0") in it. thanks Wang Xi for reporting this issue. * docs: eliminated the use of "package.seeall" in code samples and also explicitly discouraged the use of it. * docs: documented the special case that client closes the connection before ngx.req.socket() finishes reading the whole body. * upgraded HeadersMoreNginxModule to 0.19. * bugfix: more_clear_input_headers would result in memory invalid reads when removing the 21st, 41st, 61st (and etc) request headers. thanks Umesh Sirsiwal for reporting this issue. * docs: fixed an issue in the sample code that tried to clear "Transfer-Encoding" which cannot actually be cleared. thanks koukou73gr. * upgraded LuaRestyStringLibrary to 0.08. * bugfix: the "new()" method in the "resty.aes" module might use a random key when the "method" option is omitted in the "hash" table argument. thanks wsser for the patch. * feature: we now return a second string describing the error when either "iv" or "key" is bad. * bugfix: "./configure --with-pcre=PATH" did not accept relative paths as "PATH". 
thanks smallfish for reporting this issue. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1002004 OpenResty (aka. ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have been running extensive testing on our Amazon EC2 test cluster to ensure that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From pinakee at vvidiacom.com Wed Nov 21 02:12:18 2012 From: pinakee at vvidiacom.com (pinakee at vvidiacom.com) Date: Wed, 21 Nov 2012 02:12:18 +0000 Subject: As Proxy In-Reply-To: <003f01cdc633$4884d8f0$d98e8ad0$@vvidiacom.com> References: <003f01cdc633$4884d8f0$d98e8ad0$@vvidiacom.com> Message-ID: <265270181-1353463970-cardhu_decombobulator_blackberry.rim.net-1179161010-@b16.c8.bise7.blackberry> Hi, We would really appreciate it if anyone could help us with the issue in my previous mail. We would like to use nginx for our deployment but are stuck on this proxy issue. Looking forward to your response and support... Thanks, Pinakee Sent on my BlackBerry® from Vodafone -----Original Message----- From: "Pinakee Biswas" Date: Mon, 19 Nov 2012 14:22:55 To: Cc: Subject: As Proxy Hi, We are trying to use nginx as a proxy. We use the Pylons framework for our application, which uses paster to deliver the resources. PFA the configuration we have for nginx. Somehow the css files and images are not getting delivered. We have tried the following mechanisms: 1. Configuring Nginx as a pure proxy wherein all the resources would be delivered by paster. 2. Configuring Nginx such that nginx delivers the static resources whereas the rest are delivered by paster. Neither is working for us.
Somehow the static resources (like css, images) are not getting delivered. We were using Apache earlier, where option 1 (as mentioned above) was working fine. We are new to nginx. We would really appreciate it if you could please let us know what we are doing wrong and what the reason for the above could be. Looking forward to your response and help. Thanks, Pinakee Biswas -------------- next part -------------- An HTML attachment was scrubbed... URL: From djczaski at gmail.com Wed Nov 21 04:06:36 2012 From: djczaski at gmail.com (djczaski) Date: Tue, 20 Nov 2012 23:06:36 -0500 Subject: Caching authentication requests? Message-ID: I want to use something like auth PAM, but this seems to cause a PAM conversation for every request, and that is slower. Is there a way to cache a successful authentication so that an authentication happens only once, or once every so many seconds? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tchao.china at yahoo.com Wed Nov 21 05:02:56 2012 From: tchao.china at yahoo.com (Tian Chao) Date: Tue, 20 Nov 2012 21:02:56 -0800 (PST) Subject: request_time much larger than upstream_response_time for some queries when using fastcgi Message-ID: <1353474176.81973.YahooMailNeo@web163605.mail.gq1.yahoo.com> Hi, I run Python in the backend, and use nginx to proxy queries to it using FastCGI. I have found some strange things in my nginx log these days. Some queries' $request_time is up to 40 seconds while their $upstream_response_time is less than 1 second. Here is my FastCGI-related conf: fastcgi_max_temp_file_size 0; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_NAME ""; fastcgi_pass python_backend; fastcgi_buffers 1024 8K; Do you guys know what the problem is? Chao -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maxim at nginx.com Wed Nov 21 07:25:10 2012 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 21 Nov 2012 11:25:10 +0400 Subject: cache manager process exited with fatal code 2 and cannot be respawned In-Reply-To: <509D4C67.8090807@heinlein-support.de> References: <509A2EA2.60704@heinlein-support.de> <509BCC08.7020505@heinlein-support.de> <509D01DB.5060607@heinlein-support.de> <509D2EF9.1040008@nginx.com> <509D4C67.8090807@heinlein-support.de> Message-ID: <50AC81D6.8010704@nginx.com> On 11/9/12 10:33 PM, Isaac Hailperin wrote: > > > On 11/09/2012 05:27 PM, Maxim Konovalov wrote: >> On 11/9/12 5:15 PM, Isaac Hailperin wrote: >> [...] >>> I also wonder where the 512 worker_connections from the error >>> message come from. There is no such number in my config. Is it >>> hardcoded somewhere? >>> >> http://nginx.org/en/docs/ngx_core_module.html#worker_connections >> >> It's a default number of worker_connections. > Yes, but if I specify a differen number, like > http://www.ruby-forum.com/topic/4407591#1083581 > this should be different. Now this could lead to the conclusion, > that nginx is not reading that file, but nginx -t clearly says so. > Also, > if I introduce syntactic errors in that file, nginx complains. > > As Igor Sysoev suggested earlier > http://www.ruby-forum.com/topic/4407591#1083572 > the worker_connection parameter might not be related, since also > cache manager and loader use connections. > If these are hard coded to a max of 512, this might be the cause: > there are exactly 1002 vhosts which each listen on a different port. > Now its not 1024, which would be 512*2, but may be there is some > overhead which makes me come to this limit? > If my thinking is correct (?), is there a way to overcome this > limit? (other then using just one port for ssl ... it would mean > using different ip addresses, which would have the same effect, I > guess?) > > Any thoughts on this are welcome. 
> > Isaac > Just for the record -- the issue should be fixed by r4918: http://trac.nginx.org/nginx/changeset/4918/nginx -- Maxim Konovalov +7 (910) 4293178 http://nginx.com/support.html From nginx-forum at nginx.us Wed Nov 21 11:12:40 2012 From: nginx-forum at nginx.us (nri.pl) Date: Wed, 21 Nov 2012 06:12:40 -0500 Subject: internally redirected requests Message-ID: How to do internal redirect (which is the best way)? I do not want to apply to this "include". Maybe I can do something like this: location /a { alias @app; } location /b { alias @app; } location @app { ... } Should I use to this "error_page" or "try_files" if I do not dealing with static files ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233074,233074#msg-233074 From marco.pasqualini at studiovatore.com Wed Nov 21 11:17:26 2012 From: marco.pasqualini at studiovatore.com (Marco Pasqualini) Date: Wed, 21 Nov 2012 12:17:26 +0100 Subject: Problems with php DOMDocument class (URGENT) Message-ID: Hello to everyone, Unfortunately I have an urgent problem that I can not solve. I'm migrating a webapp that runs on nginx in a CentOS 6 server. The webapp use DOMDocument php class. The php-xml module is installed correctly: phpInfo shows dom section enabled. In nginx error.log I keep getting the following error: [error] 19403#0: *1 FastCGI sent in stderr: "PHP Fatal error: Class 'DOMDocument' not found in /home/ixtanteuser/webroot/Ixtante/classes/lib/config/Config.class.php on line 51" while reading response header from upstream, client: 213.187.3.72, server: vo.dev.viralbis.com, request: "GET /test.php/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:1903", host: "vo.dev.viralbis.com:8091" I simply copied the files from the server being disposed, including nginx.conf, where everything works perfectly. Any help is appreciated! -- Pasqualini Ing. 
Marco Project Manager at StudioVatore Contacts: Mail: marco.pasqualini at studiovatore.com Web: http://www.studiovatore.com Phone: 0425 073641 Fax: 0425 019813 RISERVATO: Questo messaggio e gli eventuali allegati sono confidenziali e riservati. Se vi ? stato recapitato per errore e non siete fra i destinatari elencati, siete pregati di darne immediatamente avviso al mittente. Le informazioni contenute non devono essere mostrate ad altri, n? utilizzate, memorizzate o copiate in qualsiasi forma. CONFIDENTIAL: This e-mail and any attachments are confidential and may contain reserved information. If you are not one of the named recipients, please notify the sender immediately. Moreover, you should not disclose the contents to any other persons, nor should the information container be used for any purpose or stored or copied in any form. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Wed Nov 21 11:21:07 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 21 Nov 2012 18:21:07 +0700 Subject: internally redirected requests In-Reply-To: References: Message-ID: On Wed, Nov 21, 2012 at 6:12 PM, nri.pl wrote: > How to do internal redirect (which is the best way)? I do not want to apply > to this "include". > Maybe I can do something like this: > > location /a { alias @app; } > location /b { alias @app; } > > location @app { ... } > > Should I use to this "error_page" or "try_files" if I do not dealing with > static files ? > you can use `try_files /nonexistent @app;` though I'm interested why don't you want to use include. From edho at myconan.net Wed Nov 21 11:23:08 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 21 Nov 2012 18:23:08 +0700 Subject: Problems with php DOMDocument class (URGENT) In-Reply-To: References: Message-ID: On Wed, Nov 21, 2012 at 6:17 PM, Marco Pasqualini wrote: > Hello to everyone, > Unfortunately I have an urgent problem that I can not solve. 
> I'm migrating a webapp that runs on nginx in a CentOS 6 server. > The webapp use DOMDocument php class. > The php-xml module is installed correctly: phpInfo shows dom section > enabled. > try comparing output of phpinfo between new and old servers. From nginx-forum at nginx.us Wed Nov 21 11:51:39 2012 From: nginx-forum at nginx.us (nri.pl) Date: Wed, 21 Nov 2012 06:51:39 -0500 Subject: internally redirected requests In-Reply-To: References: Message-ID: <5b7d3dda9aa6b1c2f5289c6bedc2ba3a.NginxMailingListEnglish@forum.nginx.org> Heterogeneous environment. I cannot use "include" because the configuration file is generated by the application. I am responsible for the deployment. It is problematic for the application developers to generate and manage multiple configuration files; it is much easier to manage one file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233074,233080#msg-233080 From ru at nginx.com Wed Nov 21 15:13:24 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 21 Nov 2012 19:13:24 +0400 Subject: internally redirected requests In-Reply-To: References: Message-ID: <20121121151324.GB91347@lo0.su> On Wed, Nov 21, 2012 at 06:12:40AM -0500, nri.pl wrote: > How to do internal redirect (which is the best way)? I do not want to apply > to this "include". > Maybe I can do something like this: > > location /a { alias @app; } > location /b { alias @app; } > > location @app { ... } > > Should I use to this "error_page" or "try_files" if I do not dealing with > static files ? location /a { return 418; error_page 418 = @teapot; } location /b { return 418; error_page 418 = @teapot; } location @teapot { return 200 "uri=$uri\nargs=$args\n"; } See: http://nginx.org/r/internal http://nginx.org/r/error_page From goelvivek2011 at gmail.com Wed Nov 21 16:51:51 2012 From: goelvivek2011 at gmail.com (Vivek Goel) Date: Wed, 21 Nov 2012 22:21:51 +0530 Subject: How nginx define a free worker?
Message-ID: Hi, If I have n cores and I am running n nginx worker processes, how will nginx decide which free worker gets the next connection? 1. Will it be doing round-robin? If it is not using round-robin, what method does it use? Is there a way I can force it to use the round-robin method? regards Vivek Goel -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Wed Nov 21 17:36:09 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Wed, 21 Nov 2012 21:36:09 +0400 Subject: How nginx define a free worker? In-Reply-To: References: Message-ID: <50AD1109.4080702@citrin.ru> On 11/21/12 20:51, Vivek Goel wrote: > If I have n cores and I am running n nginx worker processes, how will nginx decide > which free worker gets the next connection? > > 1. Will it be doing round-robin? > > If it is not using round-robin, what method does it use? Is there a way I can force it > to use the round-robin method? > regards Load distribution between worker processes is affected by accept_mutex: http://nginx.org/r/accept_mutex The default is to use the accept mutex, and if load is low, most requests will be handled by one worker. On a heavily loaded server, load distribution between worker processes will be more uniform. You can switch accept_mutex off and load will be more uniform even under low load, but then all worker processes are woken up on each new connection while only one worker can accept a given connection: http://en.wikipedia.org/wiki/Thundering_herd_problem In the case of nginx the number of worker processes is usually low, so the negative impact of accept_mutex off should be small. -- Anton Yuzhaninov From marco.pasqualini at studiovatore.com Wed Nov 21 17:58:18 2012 From: marco.pasqualini at studiovatore.com (Marco Pasqualini) Date: Wed, 21 Nov 2012 18:58:18 +0100 Subject: Problems with php DOMDocument class (URGENT) In-Reply-To: References: Message-ID: I noticed that if I view phpInfo() through Apache, the DOM section is properly there (DOM/XML enabled, etc.)
whereas if I view phpInfo() through nginx, the DOM section does not appear at all! So the question is: how do I make sure that the PHP running under nginx can see php-xml? Isn't php.ini the same between Apache and nginx? Please help! I'm new to nginx... I have searched on Google and have not seen similar problems... Thanks 2012/11/21 Edho Arief > On Wed, Nov 21, 2012 at 6:17 PM, Marco Pasqualini > wrote: > > Hello to everyone, > > Unfortunately I have an urgent problem that I can not solve. > > > > I'm migrating a webapp that runs on nginx in a CentOS 6 server. > > The webapp use DOMDocument php class. > > The php-xml module is installed correctly: phpInfo shows dom section > > enabled. > > > > try comparing output of phpinfo between new and old servers. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Pasqualini Ing. Marco Project Manager at StudioVatore Contacts: Mail: marco.pasqualini at studiovatore.com Web: http://www.studiovatore.com Phone: 0425 073641 Fax: 0425 019813 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From monthadar at gmail.com Wed Nov 21 18:16:41 2012 From: monthadar at gmail.com (Monthadar Al Jaberi) Date: Wed, 21 Nov 2012 19:16:41 +0100 Subject: nginx + fossil configuration problem In-Reply-To: <20121120232438.GC18139@craic.sysops.org> References: <20121120232438.GC18139@craic.sysops.org> Message-ID: Thank you for your explanations! The "locahost" error is from my side; I was testing different things and didn't notice. Sorry. Okay, now I understand. I guess I want one server block; there is no sense in having fossil.localhost. So I want localhost/fossil/aaa. I moved the working location block inside the default server block: server { listen 80; server_name localhost; location /fossil/ { proxy_pass http://localhost:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location / { root /usr/share/nginx/html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; root /usr/share/nginx/html; include fastcgi.conf; } } Now all my other cases work except the fossil one. When I browse to localhost/fossil/aaa I see that the link changes to: http://localhost//aaa/index An extra '/' is added somehow. The page I get is nginx's 404 page, which I suppose means nginx did not proxy the request? best regards, On Wed, Nov 21, 2012 at 12:24 AM, Francis Daly wrote: > On Tue, Nov 20, 2012 at 10:29:04PM +0100, Monthadar Al Jaberi wrote: > > Hi there, > > This isn't a full answer, but hopefully will point you in the right > direction. > >> server { >> listen 80; >> server_name locahost; > > That is "locahost", not "localhost". That is the reason that the order > of server{} blocks matters. > >> location / { >> proxy_pass http://localhost:8080/; > ...
> >> From my host PC I seem to be able to visit my different fossil >> projects 192.168.0.101/aaa and 192.168.0.101/bbb. > > If that much works, then you've got a good start. > >> But this seems to be accidental, because if I move this server block >> under the default server blocks it stops working. > > Not quite: because you have the same "listen" directive in each block, > whichever is first in the file *is* the default. > > (http://nginx.org/en/docs/http/server_names.html probably includes more > than you want to know.) > > So: when this is the default server block, your fossil access works; > when it isn't, it doesn't. That is down to how nginx chooses which one > server block to use for this request. > >> If I have it above I >> cant seems to access the php location block in the default server >> block that I added, 192.169.0.101/index.php don't work. > > One request is handled in one server block (usually chosen by comparing > the Host: header with the server_name value), and then in one location > within that server. > > Your configuration either uses too many server blocks, or ones with > incorrect server_names. > >> Testing from withing the archlinux running nginx: >> localhost/ >> localhost/index.html >> localhost/index.php > > Those will all use the one server block that has "server_name localhost" > which, below, says "php goes to php-fpm.sock, all else goes to the > filesystem". > >> All of these works. But localhost/aaa don't work. > > That will also use that same server block. So it will serve files from > /usr/local/nginx/html/aaa. > >> If I run the >> following it works: >> >> lynx localhost:8080/aaa > > That will use the fossil service directly, avoiding nginx. > >> It seems I am missing some last touch. I want to be able to do >> something like 192.168.0.101/fossil/aaa. > > Decide exactly what url hierarchy you want to use to access nginx to > reverse proxy to fossil. > > That means: which hostname and which /location prefix or prefixes. 
> > Then in the correct server{} block, add the location{} block with the > proxy_pass stuff that you have that already works. > > If you want to use *different* hostnames to access fossil and not-fossil, > then you will need to configure location{} blocks in different server{} > blocks. > > If you want to use the *same* hostname to access fossil and not-fossil, > then you will need to configure different location{} blocks in the same > server{} block to tell nginx which urls should go to fossil and which > ones should not. > > Briefly: move your (working) "location /" block into the "server_name > localhost" server block, and change it to be (perhaps) "location /aaa". > > That might show whether you are moving in the right direction. > > Good luck, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Monthadar Al Jaberi From monthadar at gmail.com Wed Nov 21 18:32:53 2012 From: monthadar at gmail.com (Monthadar Al Jaberi) Date: Wed, 21 Nov 2012 19:32:53 +0100 Subject: nginx + fossil configuration problem In-Reply-To: References: <20121120232438.GC18139@craic.sysops.org> Message-ID: On Wed, Nov 21, 2012 at 7:16 PM, Monthadar Al Jaberi wrote: > Thank you for your explanations! > > The "locahost" error is from my side, I was testing different things > and didn't notice. Sorry. > > Okej now I understand, I guess I want one server block, no sense in > having fossil.localhost. > > So I want localhost/fossil/aaa. 
> > I moved the working location block inside the default server block: > > server { > listen 80; > server_name localhost; my bad remove the last slash after fossil below: > location /fossil/ { > proxy_pass http://localhost:8080/; > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > location / { > root /usr/share/nginx/html; > index index.html index.htm; > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/share/nginx/html; > } > location ~ \.php$ { > fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; > fastcgi_index index.php; > root /usr/share/nginx/html; > include fastcgi.conf; > } > } > > Now all my other cases work except the fossil one. > > When I browse to localhost/fossil/aaa I see that the link changes to: > > http://localhost//aaa/index > > An extra '/' is added somehow. The page I get is from nginx 404, which > I suppose means nginx did not proxy the request?? > > best regards, > > On Wed, Nov 21, 2012 at 12:24 AM, Francis Daly wrote: >> On Tue, Nov 20, 2012 at 10:29:04PM +0100, Monthadar Al Jaberi wrote: >> >> Hi there, >> >> This isn't a full answer, but hopefully will point you in the right >> direction. >> >>> server { >>> listen 80; >>> server_name locahost; >> >> That is "locahost", not "localhost". That is the reason that the order >> of server{} blocks matters. >> >>> location / { >>> proxy_pass http://localhost:8080/; >> ... >> >>> From my host PC I seem to be able to visit my different fossil >>> projects 192.168.0.101/aaa and 192.168.0.101/bbb. >> >> If that much works, then you've got a good start. >> >>> But this seems to be accidental, because if I move this server block >>> under the default server blocks it stops working. >> >> Not quite: because you have the same "listen" directive in each block, >> whichever is first in the file *is* the default. 
>> >> (http://nginx.org/en/docs/http/server_names.html probably includes more >> than you want to know.) >> >> So: when this is the default server block, your fossil access works; >> when it isn't, it doesn't. That is down to how nginx chooses which one >> server block to use for this request. >> >>> If I have it above I >>> cant seems to access the php location block in the default server >>> block that I added, 192.169.0.101/index.php don't work. >> >> One request is handled in one server block (usually chosen by comparing >> the Host: header with the server_name value), and then in one location >> within that server. >> >> Your configuration either uses too many server blocks, or ones with >> incorrect server_names. >> >>> Testing from withing the archlinux running nginx: >>> localhost/ >>> localhost/index.html >>> localhost/index.php >> >> Those will all use the one server block that has "server_name localhost" >> which, below, says "php goes to php-fpm.sock, all else goes to the >> filesystem". >> >>> All of these works. But localhost/aaa don't work. >> >> That will also use that same server block. So it will serve files from >> /usr/local/nginx/html/aaa. >> >>> If I run the >>> following it works: >>> >>> lynx localhost:8080/aaa >> >> That will use the fossil service directly, avoiding nginx. >> >>> It seems I am missing some last touch. I want to be able to do >>> something like 192.168.0.101/fossil/aaa. >> >> Decide exactly what url hierarchy you want to use to access nginx to >> reverse proxy to fossil. >> >> That means: which hostname and which /location prefix or prefixes. >> >> Then in the correct server{} block, add the location{} block with the >> proxy_pass stuff that you have that already works. >> >> If you want to use *different* hostnames to access fossil and not-fossil, >> then you will need to configure location{} blocks in different server{} >> blocks. 
>> >> If you want to use the *same* hostname to access fossil and not-fossil, >> then you will need to configure different location{} blocks in the same >> server{} block to tell nginx which urls should go to fossil and which >> ones should not. >> >> Briefly: move your (working) "location /" block into the "server_name >> localhost" server block, and change it to be (perhaps) "location /aaa". >> >> That might show whether you are moving in the right direction. >> >> Good luck, >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Monthadar Al Jaberi -- Monthadar Al Jaberi From francis at daoine.org Wed Nov 21 18:36:59 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Nov 2012 18:36:59 +0000 Subject: Problems with php DOMDocument class (URGENT) In-Reply-To: References: Message-ID: <20121121183659.GE18139@craic.sysops.org> On Wed, Nov 21, 2012 at 06:58:18PM +0100, Marco Pasqualini wrote: Hi there, > I noticed that if I view phpInfo() through apache there is properly the DOM > section (DOM/XML enabled ecc...) > instead, if I view phpInfo() through nginx the DOM tab does not appear at > all! > > So the question is: how do I make sure that the php in nginx can see the > php-xml? There isn't a php in nginx. nginx talks to the fastcgi server which "does" the php. apache tends to use an embedded php. So you'll want to find whatever php.ini is in effect in your apache setup, and make something similar be in effect for your fastcgi setup. nginx is not involved in this. You'll probably get a more-directly-useful answer from a php list. 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 21 18:53:55 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Nov 2012 18:53:55 +0000 Subject: nginx + fossil configuration problem In-Reply-To: References: <20121120232438.GC18139@craic.sysops.org> Message-ID: <20121121185355.GF18139@craic.sysops.org> On Wed, Nov 21, 2012 at 07:16:41PM +0100, Monthadar Al Jaberi wrote: Hi there, > So I want localhost/fossil/aaa. > > I moved the working location block inside the default server block: > > location /fossil/ { > proxy_pass http://localhost:8080/; > } > Now all my other cases work except the fossil one. > > When I browse to localhost/fossil/aaa I see that the link changes to: > > http://localhost//aaa/index > > An extra '/' is added somehow. The page I get is from nginx 404, which > I suppose means nginx did not proxy the request?? No, nginx sent the request to fossil, which responded with a http redirect to this url. Then your browser asks nginx for //aaa/index, which does not exist as a file in the right place, hence 404. curl -i http://localhost/fossil/aaa to see exactly what comes back. What you (probably) want is for that redirect to be to /fossil/aaa/index, which is (ideally) down to the fossil configuration. Newer versions of fossil tend to handle things a bit better; possibly setting baseurl to http://localhost/fossil (or maybe just /fossil) when you start the fossil service will allow things to work for you. 
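Francis's diagnosis above comes down to how proxy_pass rewrites the URI; a minimal sketch (host and port taken from the thread) of why the prefix disappears:

```nginx
# With a URI part ("/") on proxy_pass, the matched "/fossil/" prefix is
# replaced by that URI: a request for /fossil/aaa reaches fossil as /aaa.
# Fossil therefore builds its redirects and links without the /fossil prefix.
location /fossil/ {
    proxy_pass http://localhost:8080/;   # trailing slash strips the prefix
}
```

If the installed fossil supports a base-URL option (an assumption — check `fossil help server`), starting it as e.g. `fossil server --baseurl http://localhost/fossil /path/to/fossils/` should make its redirects carry the prefix again.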
Good luck with it, f -- Francis Daly francis at daoine.org From monthadar at gmail.com Wed Nov 21 21:25:11 2012 From: monthadar at gmail.com (Monthadar Al Jaberi) Date: Wed, 21 Nov 2012 22:25:11 +0100 Subject: nginx + fossil configuration problem In-Reply-To: <20121121185355.GF18139@craic.sysops.org> References: <20121120232438.GC18139@craic.sysops.org> <20121121185355.GF18139@craic.sysops.org> Message-ID: On Wed, Nov 21, 2012 at 7:53 PM, Francis Daly wrote: > On Wed, Nov 21, 2012 at 07:16:41PM +0100, Monthadar Al Jaberi wrote: > > Hi there, > >> So I want localhost/fossil/aaa. >> >> I moved the working location block inside the default server block: >> > >> location /fossil/ { >> proxy_pass http://localhost:8080/; > >> } > >> Now all my other cases work except the fossil one. >> >> When I browse to localhost/fossil/aaa I see that the link changes to: >> >> http://localhost//aaa/index >> >> An extra '/' is added somehow. The page I get is from nginx 404, which >> I suppose means nginx did not proxy the request?? > > No, nginx sent the request to fossil, which responded with a http redirect > to this url. Then your browser asks nginx for //aaa/index, which does > not exist as a file in the right place, hence 404. > > curl -i http://localhost/fossil/aaa I changed this line: proxy_set_header Host $host; to: proxy_set_header Host $host:$proxy_port; And curl -i http://localhost/fossil/aaa returns: HTTP/1.1 302 Moved Temporarily Server: nginx/1.2.5 Date: Wed, 21 Nov 2012 21:22:10 GMT Content-Type: text/html; charset=utf-8 Content-Length: 79 Connection: keep-alive Location: http://localhost:8080//aaa/index X-Frame-Options: SAMEORIGIN Cache-control: no-cache

Redirect to Location: http://localhost:8080//aaa/index

So this redirection is just before calling fossil? Where do the extra '/' come from? I read a little more and found a directive called rewrite, should I use it somehow to remove the extra slash added? thnx > > to see exactly what comes back. > > What you (probably) want is for that redirect to be to /fossil/aaa/index, > which is (ideally) down to the fossil configuration. > > Newer versions of fossil tend to handle things a bit better; possibly > setting baseurl to http://localhost/fossil (or maybe just /fossil) > when you start the fossil service will allow things to work for you. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Monthadar Al Jaberi From monthadar at gmail.com Wed Nov 21 21:44:11 2012 From: monthadar at gmail.com (Monthadar Al Jaberi) Date: Wed, 21 Nov 2012 22:44:11 +0100 Subject: nginx + fossil configuration problem In-Reply-To: References: <20121120232438.GC18139@craic.sysops.org> <20121121185355.GF18139@craic.sysops.org> Message-ID: I think I got it to work using rewrite. location /fossil { rewrite /fossil/(.*) /$1 break; proxy_pass http://localhost:8080; proxy_redirect off; proxy_set_header Host $host:$proxy_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } I hope this is a sane solution :) thank you for your help! br, On Wed, Nov 21, 2012 at 10:25 PM, Monthadar Al Jaberi wrote: > On Wed, Nov 21, 2012 at 7:53 PM, Francis Daly wrote: >> On Wed, Nov 21, 2012 at 07:16:41PM +0100, Monthadar Al Jaberi wrote: >> >> Hi there, >> >>> So I want localhost/fossil/aaa. >>> >>> I moved the working location block inside the default server block: >>> >> >>> location /fossil/ { >>> proxy_pass http://localhost:8080/; >> >>> } >> >>> Now all my other cases work except the fossil one. 
>>> >>> When I browse to localhost/fossil/aaa I see that the link changes to: >>> >>> http://localhost//aaa/index >>> >>> An extra '/' is added somehow. The page I get is from nginx 404, which >>> I suppose means nginx did not proxy the request?? >> >> No, nginx sent the request to fossil, which responded with a http redirect >> to this url. Then your browser asks nginx for //aaa/index, which does >> not exist as a file in the right place, hence 404. >> >> curl -i http://localhost/fossil/aaa > > I changed this line: > > proxy_set_header Host $host; > > to: > > proxy_set_header Host $host:$proxy_port; > > And curl -i http://localhost/fossil/aaa returns: > > HTTP/1.1 302 Moved Temporarily > Server: nginx/1.2.5 > Date: Wed, 21 Nov 2012 21:22:10 GMT > Content-Type: text/html; charset=utf-8 > Content-Length: 79 > Connection: keep-alive > Location: http://localhost:8080//aaa/index > X-Frame-Options: SAMEORIGIN > Cache-control: no-cache > > >

Redirect to Location: http://localhost:8080//aaa/index >

> > > So this redirection is just before calling fossil? Where do the extra > '/' come from? I read a litte more and found a directive called > rewrite, should I use it somehow to remove the xtra slash added? > > thnx > >> >> to see exactly what comes back. >> >> What you (probably) want is for that redirect to be to /fossil/aaa/index, >> which is (ideally) down to the fossil configuration. >> >> Newer versions of fossil tend to handle things a bit better; possibly >> setting baseurl to http://localhost/fossil (or maybe just /fossil) >> when you start the fossil service will allow things to work for you. >> >> Good luck with it, >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Monthadar Al Jaberi -- Monthadar Al Jaberi From francis at daoine.org Wed Nov 21 22:19:22 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Nov 2012 22:19:22 +0000 Subject: nginx + fossil configuration problem In-Reply-To: References: <20121120232438.GC18139@craic.sysops.org> <20121121185355.GF18139@craic.sysops.org> Message-ID: <20121121221922.GH18139@craic.sysops.org> On Wed, Nov 21, 2012 at 10:25:11PM +0100, Monthadar Al Jaberi wrote: > On Wed, Nov 21, 2012 at 7:53 PM, Francis Daly wrote: Hi there, > I changed this line: > > proxy_set_header Host $host; > > to: > > proxy_set_header Host $host:$proxy_port; I'd undo that change. It effectively means that you are bypassing nginx and going to fossil directly, which is probably not what you want. > So this redirection is just before calling fossil? No. The redirection is what the fossil server returned. Your nginx configuration was (almost) correct. I suggest to put it back the way it was, and then start configuring fossil to be reverse-proxied. > Where do the extra '/' come from? You changed "location /fossil/" to "location /fossil". 
The difference there is the extra / here. > I read a litte more and found a directive called > rewrite, should I use it somehow to remove the xtra slash added? No. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 21 22:43:27 2012 From: francis at daoine.org (Francis Daly) Date: Wed, 21 Nov 2012 22:43:27 +0000 Subject: nginx + fossil configuration problem In-Reply-To: References: <20121120232438.GC18139@craic.sysops.org> <20121121185355.GF18139@craic.sysops.org> Message-ID: <20121121224327.GI18139@craic.sysops.org> On Wed, Nov 21, 2012 at 10:44:11PM +0100, Monthadar Al Jaberi wrote: Hi there, > I think I got it to work using rewrite. I think you are bypassing nginx. That's fine if you want to; but in that case you could have just gone to fossil directly in the first place. Try http://192.168.0.101/fossil/aaa If you see "8080" in your browser bar, you're not using nginx. > location /fossil { Change that to "location /fossil/ {" or (better) "location ^~ /fossil/ {" > rewrite /fossil/(.*) /$1 break; Remove that. > proxy_pass http://localhost:8080; > proxy_redirect off; > proxy_set_header Host $host:$proxy_port; Change that back to "proxy_set_header Host $host;" > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; I don't think those lines make any difference to fossil. It's clearer to remove them, too. > } > > > I hope this is a sane solution :) I don't think that it will work from off your own server. If it does what you want, then it is good enough. But I think that when you test from another machine, you will see a problem. Even after you put it back the way it was, you still will not see it work cleanly, because your fossil is not configured to be reverse proxied at a different url. By that, I mean: you request http://server/fossil/aaa, but fossil returns links and redirections assuming that you requested http://server/aaa (which, as far as fossil is concerned, you did). 
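Collected into a single block, the corrections Francis lists above would look something like this sketch (upstream address as used elsewhere in the thread):

```nginx
# "^~" stops regex locations (e.g. the \.php$ one) from overriding this
# prefix; the trailing slash on proxy_pass strips /fossil/ before the
# request reaches fossil; no rewrite, and no port in the Host header.
location ^~ /fossil/ {
    proxy_pass http://localhost:8080/;
    proxy_set_header Host $host;
}
```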
The current way to adjust fossil to be reverse proxied at a different place in the url hierarchy seems to be to start it like SCRIPT_NAME=/fossil fossil server /path/to/fossils/ (and that means that you won't be able to access it directly at http://localhost:8080/aaa). f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Nov 22 02:48:47 2012 From: nginx-forum at nginx.us (goelvivek) Date: Wed, 21 Nov 2012 21:48:47 -0500 Subject: How nginx define a free worker? In-Reply-To: <50AD1109.4080702@citrin.ru> References: <50AD1109.4080702@citrin.ru> Message-ID: <39fd7c10f9b65b6172efe1d66ac92a5d.NginxMailingListEnglish@forum.nginx.org> I am running nginx with high number of worker. So I think disable accept_mutex will cause high load on the system. Is there any other method I can choose in nginx like simple round robin b/w workers? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233093,233111#msg-233111 From edmund.lhot at gmail.com Thu Nov 22 03:21:48 2012 From: edmund.lhot at gmail.com (Edmund Lhot) Date: Thu, 22 Nov 2012 01:21:48 -0200 Subject: SSL proxy without certificate Message-ID: Hello! I want to proxy ssl connections to a backend without a certicate but it isn't working: server { listen x.x.x.x:443; location / { proxy_pass https://y.y.y.y:443; } } I tried to use an approach like this (client auth with self generated cert), but it didn't work too: server { listen x.x.x.x:443 ssl; ssl on; ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; ssl_client_certificate /etc/nginx/certs/ca.crt; ssl_verify_client optional; location / { proxy_pass https://y.y.y.y:443; } } Must I have the customer certificate to proxy this kind of request or there is another way to do this? Tks! Edmund -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From edho at myconan.net Thu Nov 22 03:27:03 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 22 Nov 2012 10:27:03 +0700 Subject: SSL proxy without certificate In-Reply-To: References: Message-ID: On Thu, Nov 22, 2012 at 10:21 AM, Edmund Lhot wrote: > Hello! > > I want to proxy ssl connections to a backend without a certicate but it > isn't working: > > server { > listen x.x.x.x:443; > location / { > proxy_pass https://y.y.y.y:443; > } > } > > I tried to use an approach like this (client auth with self generated cert), > but it didn't work too: > How is it not working? > server { > > listen x.x.x.x:443 ssl; > > ssl on; > ssl_certificate /etc/nginx/certs/server.crt; > ssl_certificate_key /etc/nginx/certs/server.key; > ssl_client_certificate /etc/nginx/certs/ca.crt; > ssl_verify_client optional; > > location / { > proxy_pass https://y.y.y.y:443; > > } > } > > Must I have the customer certificate to proxy this kind of request or there > is another way to do this? > I think the one you want is tcp layer proxying/balancing which is not what nginx can do. Try using HAProxy instead. From edmund.lhot at gmail.com Thu Nov 22 03:48:13 2012 From: edmund.lhot at gmail.com (Edmund Lhot) Date: Thu, 22 Nov 2012 01:48:13 -0200 Subject: SSL proxy without certificate In-Reply-To: References: Message-ID: On Thu, Nov 22, 2012 at 1:27 AM, Edho Arief wrote: > On Thu, Nov 22, 2012 at 10:21 AM, Edmund Lhot > wrote: > > Hello! > > > > I want to proxy ssl connections to a backend without a certicate but it > > isn't working: > > > > server { > > listen x.x.x.x:443; > > location / { > > proxy_pass https://y.y.y.y:443; > > } > > } > > > > I tried to use an approach like this (client auth with self generated > cert), > > but it didn't work too: > > > > How is it not working? 
> 2012/11/22 01:34:00 [error] 17649#0: *234 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: z.z.z.z, server: x.x.x.x:443 > > > server { > > > > listen x.x.x.x:443 ssl; > > > > ssl on; > > ssl_certificate /etc/nginx/certs/server.crt; > > ssl_certificate_key /etc/nginx/certs/server.key; > > ssl_client_certificate /etc/nginx/certs/ca.crt; > > ssl_verify_client optional; > > > > location / { > > proxy_pass https://y.y.y.y:443; > > > > } > > } > > > > Must I have the customer certificate to proxy this kind of request or > there > > is another way to do this? > > > > In this way proxy worked but not using the backend certificate, so I got these messages in my browser. :( The identity of this website has not been verified. - Server's certificate does not match the URL. - Server's certificate is not trusted. > I think the one you want is tcp layer proxying/balancing which is not > what nginx can do. Try using HAProxy instead. > I'll try. Tks. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tchao.china at yahoo.com Thu Nov 22 03:52:12 2012 From: tchao.china at yahoo.com (Tian Chao) Date: Wed, 21 Nov 2012 19:52:12 -0800 (PST) Subject: request_time exstreamly larger than upstream_response_time for some queries when using fastcgi In-Reply-To: <1353474176.81973.YahooMailNeo@web163605.mail.gq1.yahoo.com> References: <1353474176.81973.YahooMailNeo@web163605.mail.gq1.yahoo.com> Message-ID: <1353556332.36931.YahooMailNeo@web163603.mail.gq1.yahoo.com> Hi, 
I run python in backend, and use nginx to proxy queries to python by using fastcgi. I found some strange things in my nginx log those days. Some queries' $request_time are up to 40 seconds which their $upstream_response_time are less than 1 second. Here is my fastcgi related conf. fastcgi_max_temp_file_size 0; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_NAME ""; fastcgi_pass python_backend; fastcgi_buffers 1024 8K; Do you guys know what's the problem? Chao -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Nov 22 03:56:02 2012 From: r at roze.lv (Reinis Rozitis) Date: Thu, 22 Nov 2012 05:56:02 +0200 Subject: SSL proxy without certificate In-Reply-To: References: Message-ID: <0AAE23BD92424D1198DAABCBD2342AB3@NeiRoze> > In this way proxy worked but not using the backend certificate, so I got > these messages in my browser. :( > The identity of this website has not been verified. > Server's certificate does not match the URL. > Server's certificate is not trusted. You need to use/configure the same SSL certificates on nginx as on the backend eg just proxy_pass'ing to backend won't work. But is there a reason for "talking" to backend via https? The common approach (also better performance) is offloading the SSL to nginx and proxying via plain http. > I think the one you want is tcp layer proxying/balancing which is not what > nginx can do. Not exactly true https://github.com/yaoweibin/nginx_tcp_proxy_module , but that is kind of another topic. 
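The SSL-offloading setup Reinis recommends would look roughly like the following sketch (certificate paths and addresses are the placeholders used in the thread, and it assumes the backend also listens on plain HTTP):

```nginx
# nginx terminates TLS with its own certificate and proxies plain HTTP;
# the backend's certificate is then never seen by the browser.
server {
    listen x.x.x.x:443 ssl;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        proxy_pass http://y.y.y.y:80;   # plain http to the backend
        proxy_set_header Host $host;
    }
}
```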
rr From r at roze.lv Thu Nov 22 04:04:57 2012 From: r at roze.lv (Reinis Rozitis) Date: Thu, 22 Nov 2012 06:04:57 +0200 Subject: request_time exstreamly larger than upstream_response_time for some queries when using fastcgi In-Reply-To: <1353556332.36931.YahooMailNeo@web163603.mail.gq1.yahoo.com> References: <1353474176.81973.YahooMailNeo@web163605.mail.gq1.yahoo.com> <1353556332.36931.YahooMailNeo@web163603.mail.gq1.yahoo.com> Message-ID: <9BE29FF9EBC54B728574945C22A168B0@NeiRoze> > I found some strange things in my nginx log those days. Some queries' > $request_time are up to 40 seconds which their $upstream_response_time are > less than 1 second. $request_time is time since start of the request (first bytes read from client) till end of the request (last bytes are sent to client and logging happens) - so while the backend can generate the response in 1 second there might be clients with slow connections or even not waiting for the response (closing it) etc. rr From yaoweibin at gmail.com Thu Nov 22 05:30:30 2012 From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Thu, 22 Nov 2012 13:30:30 +0800 Subject: [Announce] Tengine-1.4.2 Message-ID: Hi folks, We are glad to announce that Tengine-1.4.2 development version has been released. You can either checkout the source code from github: https://github.com/taobao/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-1.4.2.tar.gz The changelog of this release is as follows: *) Feature: added the option '--dso-tool-path' to configure script, which can specify the installation path for the dso_tool script. (monadbobo) *) Feature: added a new variable '$unix_time', whose value is the current number of seconds since unix epoch time. (yaoweibin) *) Feature: added the 'make test' target to run test cases. (yaoweibin) *) Feature: now the sysguard module can be used in a location block. (lifeibo) *) Change: merged the changes from Nginx-1.2.4 and Nginx-1.2.5. 
(zhuzhaoyuan) *) Change: now checks the error codes of input body filters more carefully to avoid socket leaks. (cfsego) *) Bugfix: fixed the problem with directive limit_req can't handle 4 arguments. (monadbobo) Thanks to LazyZhu. *) Bugfix: fixed a compilation error with the file of sysinfo in Cygwin. (lifeibo) Thanks to Cao Peiran. *) Bugfix: now the installation script will copy the user_agent module's configuration. (monadbobo) Thanks to Jianbin Xiao. *) Bugfix: fixed the installation directory error with the DSO module when creating the RPM package. (monadbobo) Thanks to Jianbin Xiao and Ren Xiaolei. For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -- Weibin Yao Developer @ Server Platform Team of Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: From howachen at gmail.com Thu Nov 22 05:51:15 2012 From: howachen at gmail.com (howard chen) Date: Thu, 22 Nov 2012 13:51:15 +0800 Subject: How to know if my nginx is in good health? Message-ID: Hi, I am running a nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu, with latest nginx (stable), the network throughput is around *2Mbps* and system load average is around *2 to 3.* I am wondering if this system is in good health for now, e.g. 1. what is the queue length (I know nginx can handle a lot of concurrent request, but I mean before the request is being served, how many of them need to wait before being served) 2. what is the average queue time for a given request to be served. I want to know because if my nginx is *cpu bounded* (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status Active connections: 4076 server accepts handled requests 90664283 90664283 104117012 Reading: 525 Writing: 81 Waiting: 3470 Any idea? 
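The counters quoted in the question come from nginx's stub_status module; for reference, a minimal sketch of the location that produces them (the path name is arbitrary):

```nginx
# ngx_http_stub_status_module output: "Reading" = connections whose request
# is being read, "Writing" = responses being sent, "Waiting" = idle
# keep-alive connections. It reports no queue length or per-request timing.
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;   # keep the counters private
    deny all;
}
```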
-------------- next part -------------- An HTML attachment was scrubbed... URL: From edho at myconan.net Thu Nov 22 05:53:12 2012 From: edho at myconan.net (Edho Arief) Date: Thu, 22 Nov 2012 12:53:12 +0700 Subject: How to know if my nginx is in good health? In-Reply-To: References: Message-ID: On Thu, Nov 22, 2012 at 12:51 PM, howard chen wrote: > Hi, > > I am running a nginx on EC2 (m1.small) for SSL termination. > > I am using 2 workers on Ubuntu, with latest nginx (stable), the network > throughput is around 2Mbps and system load average is around 2 to 3. > > I am wondering if this system is in good health for now, > > e.g. > > what is the queue length (I know nginx can handle a lot of concurrent > request, but I mean before the request is being served, how many of them > need to wait before being served) > > what is the average queue time for a given request to be served. > > I want to know because if my nginx is cpu bounded (e.g. due to SSL), I will > need to upgrade to a faster instance. > > My current nginx status > > Active connections: 4076 > server accepts handled requests > 90664283 90664283 104117012 > Reading: 525 Writing: 81 Waiting: 3470 > > > > Any idea? > > why not just vmstat 1 for few hours and see the cpu usage? From tchao.china at yahoo.com Thu Nov 22 07:02:36 2012 From: tchao.china at yahoo.com (Tian Chao) Date: Wed, 21 Nov 2012 23:02:36 -0800 (PST) Subject: request_time exstreamly larger than upstream_response_time for some queries when using fastcgi In-Reply-To: <9BE29FF9EBC54B728574945C22A168B0@NeiRoze> References: <1353474176.81973.YahooMailNeo@web163605.mail.gq1.yahoo.com> <1353556332.36931.YahooMailNeo@web163603.mail.gq1.yahoo.com> <9BE29FF9EBC54B728574945C22A168B0@NeiRoze> Message-ID: <1353567756.47786.YahooMailNeo@web163606.mail.gq1.yahoo.com> Hi rr, Thanks for your explanation. Some queries' body sent is just about 1K bytes, however the $request_time is up to 40 seconds, so I think the connection is too slow. 
Is there a way to diagnose what happened in that time for those slow queries? Thanks, Chao ________________________________ From: Reinis Rozitis To: nginx at nginx.org Sent: Thursday, November 22, 2012 12:04 PM Subject: Re: request_time exstreamly larger than upstream_response_time for some queries when using fastcgi > I found some strange things in my nginx log those days. Some queries' $request_time are up to 40 seconds which their $upstream_response_time are less than 1 second. $request_time is time since start of the request (first bytes read from client) till end of the request (last bytes are sent to client and logging happens) - so while the backend can generate the response in 1 second there might be clients with slow connections or even not waiting for the response (closing it) etc. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Nov 22 12:40:32 2012 From: nginx-forum at nginx.us (michela) Date: Thu, 22 Nov 2012 07:40:32 -0500 Subject: unsupported FastCGI protocol - Django Message-ID: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> I can no longer access my Django app via a nginx / fastcgi proxy on Centos 5.7 after upgrading to Django 1.4.2 Upgrading nginx via yum didn't solve the problem. The error message in the logs is: upstream sent unsupported FastCGI protocol version: 60 while reading response header from upstream There's a thread in the Russian list on this error. Can any Russian speakers shed any light on this?
Thanks Michela Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233136,233136#msg-233136 From igor at sysoev.ru Thu Nov 22 13:04:36 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 22 Nov 2012 17:04:36 +0400 Subject: unsupported FastCGI protocol - Django In-Reply-To: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> References: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> Message-ID: <550DF09E-8795-4A09-92E8-0C27808F5F37@sysoev.ru> On Nov 22, 2012, at 16:40 , michela wrote: > I can no longer access my Django app via a nginx / fastcgi proxy on Centos > 5.7 after upgrading to Django 1.4.2 > > Upgrading nginx via yum didn't solve the problem. > > The error message in the logs is: > > upstream sent unsupported FastCGI protocol version: 60 while reading > response header from upstream 60 is the code of the "<" character, probably the beginning of an HTML tag. This means that the Django response is HTTP, not the FastCGI protocol. -- Igor Sysoev http://nginx.com/support.html From howachen at gmail.com Thu Nov 22 15:48:36 2012 From: howachen at gmail.com (howard chen) Date: Thu, 22 Nov 2012 23:48:36 +0800 Subject: How to know if my nginx is in good health? In-Reply-To: References: Message-ID: Hi, On Thu, Nov 22, 2012 at 1:53 PM, Edho Arief wrote: > On Thu, Nov 22, 2012 at 12:51 PM, howard chen > wrote: why not just > > vmstat 1 > > for few hours and see the cpu usage? > > My biggest concern is not CPU load, as it tends to tell you nothing, e.g. what is the implication of decreasing CPU from 70% to 60%? I am more interested in real figures, e.g. length of queue of pending connections decreasing from 10 to 5, average request time decreasing from 100ms to 50ms, for example. Thanks anyway. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx at migmedia.de Fri Nov 23 12:25:53 2012 From: nginx at migmedia.de (Micha Glave) Date: Fri, 23 Nov 2012 13:25:53 +0100 Subject: allow IPv6/Mask Message-ID: Hi I wan't to allow just the local-IPv4-Subnet. Something like this: listen *:80; allow 192.168.42.0/24; deny all; After switching to IPv6-Dual-Layer I tried this ... listen [::]:80; allow ::ffff:192.168.42.0/120; deny all; It doesn't work as expected. No one can see the page. The following works: listen [::]:80; allow ::ffff:c0a8:2a00/120; deny all; In my opinion there should be a warning at the second. But it fails silently. Should I post a bug-report for this? Micha -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx at migmedia.de Fri Nov 23 12:30:32 2012 From: nginx at migmedia.de (Micha Glave) Date: Fri, 23 Nov 2012 13:30:32 +0100 Subject: allow IPv6/Mask In-Reply-To: References: Message-ID: My fault both 2. and 3. doesn't work. How is the correct notation? Micha 2012/11/23 Micha Glave > Hi > > I wan't to allow just the local-IPv4-Subnet. Something like this: > > listen *:80; > allow 192.168.42.0/24; > deny all; > > After switching to IPv6-Dual-Layer I tried this ... > > listen [::]:80; > allow ::ffff:192.168.42.0/120; > deny all; > > > It doesn't work as expected. No one can see the page. > The following works: > > listen [::]:80; > allow ::ffff:c0a8:2a00/120; > deny all; > > In my opinion there should be a warning at the second. But it fails > silently. > > Should I post a bug-report for this? > > Micha > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Fri Nov 23 12:36:49 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 23 Nov 2012 16:36:49 +0400 Subject: allow IPv6/Mask In-Reply-To: References: Message-ID: <5C446589-7AC0-4BFC-BBC1-E1435255E677@sysoev.ru> On Nov 23, 2012, at 16:30 , Micha Glave wrote: > My fault both 2. and 3. doesn't work. > > How is the correct notation? 
allow 192.168.42.0/24; nginx tests both IPv4 addresses and IPv4-mapped IPv6 addresses with this single rule. -- Igor Sysoev http://nginx.com/support.html > Micha > > > 2012/11/23 Micha Glave > Hi > > I wan't to allow just the local-IPv4-Subnet. Something like this: > > listen *:80; > allow 192.168.42.0/24; > deny all; > > After switching to IPv6-Dual-Layer I tried this ... > > listen [::]:80; > allow ::ffff:192.168.42.0/120; > deny all; > > > It doesn't work as expected. No one can see the page. > The following works: > > listen [::]:80; > allow ::ffff:c0a8:2a00/120; > deny all; > > In my opinion there should be a warning at the second. But it fails silently. > > Should I post a bug-report for this? > > Micha -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Nov 23 14:04:21 2012 From: nginx-forum at nginx.us (greenfish29) Date: Fri, 23 Nov 2012 09:04:21 -0500 Subject: Nginx and php-fpm on Windows Message-ID: <02f9f0e4320996559ea0a36969b77909.NginxMailingListEnglish@forum.nginx.org> Hello, I am trying nginx on Windows 7 with php-fpm for development on a local machine. The nginx configuration is set to cooperate with php-fpm. But when I try to show my index.php from the document root directory, I see only a blank page. The source of this page is exactly the php code in the index.php file. No error is shown, and there are no errors in the logs. When I try this on a Mac, it all works. What can be wrong here? Here is the nginx.conf file. The php-fpm.conf file doesn't exist on Windows, but php-fpm runs without errors. Thank you.
#user nobody; worker_processes 1; error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root D:/Web/www; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root D:/Web/www; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,233185,233185#msg-233185 From nginx-forum at nginx.us Fri Nov 23 16:58:26 2012 From: nginx-forum at nginx.us (Wouter van der Schagt) Date: Fri, 23 Nov 2012 11:58:26 -0500 Subject: Adding cachekey to log_format directive In-Reply-To: <4417a5b6fd1cb6882a93dea456dd3179.NginxMailingListEnglish@forum.nginx.org> References: <16655e3cf88aca050a05ec8f407c421d.NginxMailingListEnglish@forum.nginx.org> <4417a5b6fd1cb6882a93dea456dd3179.NginxMailingListEnglish@forum.nginx.org> Message-ID: <75f589af51f652c2312e4b12ad8e9f2a.NginxMailingListEnglish@forum.nginx.org> Hi Graham, Thank you for your reply, the r->cache->file.name was particularly useful for this use-case, thank you for the insight! Sincerely, - Wouter van der Schagt Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232747,233189#msg-233189 From nginx-forum at nginx.us Fri Nov 23 18:35:28 2012 From: nginx-forum at nginx.us (Wireless) Date: Fri, 23 Nov 2012 13:35:28 -0500 Subject: unsupported FastCGI protocol - Django In-Reply-To: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> References: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> Message-ID: Use of uWSGI protocol is much more common for Django. http://projects.unbit.it/uwsgi/ Both NGINX and Django natively support this protocol. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233136,233191#msg-233191 From vbart at nginx.com Fri Nov 23 19:33:13 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 23 Nov 2012 23:33:13 +0400 Subject: unsupported FastCGI protocol - Django In-Reply-To: References: <902d55e6e1044792fd0fd5fd4c245f88.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211232333.13511.vbart@nginx.com> On Friday 23 November 2012 22:35:28 Wireless wrote: > Use of uWSGI protocol is much more common for Django. > http://projects.unbit.it/uwsgi/ > > Both NGINX and Django natively support this protocol. > Django doesn't support uwsgi. 
You probably have confused it with wsgi. It's completely different thing. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From howachen at gmail.com Sat Nov 24 05:58:16 2012 From: howachen at gmail.com (howard chen) Date: Sat, 24 Nov 2012 13:58:16 +0800 Subject: Setting header for fastcgi response Message-ID: Hi, In my nginx config, I have a virtual path (/foo), e.g. location = /foo { expires 1h; add_header Cache-Control "public"; } This path does not exist, and it goes into the fcgi using location / { try_files $uri $uri/ /index.php?$args; } location ~* \.php$ { .. } The problem is the 1st location block is adding 404 to every response even I can route the request to the backend. E.g. curl -v 'http://www.example.com/foo' < HTTP/1.1 404 Not Found bar <-- actual output from backend, but why 404? Any idea? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Nov 24 06:09:19 2012 From: nginx-forum at nginx.us (Wireless) Date: Sat, 24 Nov 2012 01:09:19 -0500 Subject: unsupported FastCGI protocol - Django In-Reply-To: <201211232333.13511.vbart@nginx.com> References: <201211232333.13511.vbart@nginx.com> Message-ID: <47fea771c6df40f2bae2e27409d103f8.NginxMailingListEnglish@forum.nginx.org> "Django doesn't support uwsgi" https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/uwsgi/#how-to-use-django-with-uwsgi Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233136,233197#msg-233197 From francis at daoine.org Sat Nov 24 10:48:26 2012 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Nov 2012 10:48:26 +0000 Subject: Setting header for fastcgi response In-Reply-To: References: Message-ID: <20121124104826.GR18139@craic.sysops.org> On Sat, Nov 24, 2012 at 01:58:16PM +0800, howard chen wrote: Hi there, in nginx, one request is handled in one location. 
> location = /foo { > expires 1h; > add_header Cache-Control "public"; > } Your request for /foo should be handled in that location. It will, depending on the configuration that you haven't shown, try to serve the file /usr/local/nginx/html/foo with a few extra http headers. > This path does not exist, and it goes into the fcgi using > > location / { > try_files $uri $uri/ /index.php?$args; > } The configuration you have shown does not indicate that that will happen. (Unless, perhaps, you have something like an error_page directive in place.) > curl -v 'http://www.example.com/foo' > > < HTTP/1.1 404 Not Found > bar <-- actual output from backend, but why 404? > > Any idea? Can you provide a config file that is enough to demonstrate what you are reporting? Thanks, f -- Francis Daly francis at daoine.org From vbart at nginx.com Sat Nov 24 14:31:44 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 24 Nov 2012 18:31:44 +0400 Subject: unsupported FastCGI protocol - Django In-Reply-To: <47fea771c6df40f2bae2e27409d103f8.NginxMailingListEnglish@forum.nginx.org> References: <201211232333.13511.vbart@nginx.com> <47fea771c6df40f2bae2e27409d103f8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <201211241831.44981.vbart@nginx.com> On Saturday 24 November 2012 10:09:19 Wireless wrote: > "Django doesn't support uwsgi" > > https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/uwsgi/#how-to-u > se-django-with-uwsgi > According to your link: "Configuring and starting the uWSGI server for Django". It doesn't mean that django supports the uwsgi protocol natively (please, do not confuse the uwsgi protocol with the uWSGI server). To use FastCGI you need Flup or any other application server that talks with Django by WSGI and with nginx by FastCGI. To use uwsgi you need uWSGI. So there's no difference. wbr, Valentin V. 
Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From nerijus.skarzauskas at gmail.com Sun Nov 25 00:51:56 2012 From: nerijus.skarzauskas at gmail.com (=?ISO-8859-13?Q?Nerijus_Skar=FEauskas?=) Date: Sun, 25 Nov 2012 02:51:56 +0200 Subject: Fwd: help with apache rewrite convert to nginx In-Reply-To: References: Message-ID: Hello, need help with convert apache rewrite rules to nginx. My .htaccess: RewriteEngine on RewriteBase / ErrorDocument 404 / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^naujienos+ news.php/$1 [L] RewriteRule ^skelbimai/?$ index.php?p=20 [nc] RewriteRule ^skelbimai?$ index.php?p=20 [nc] RewriteRule ^reklama_svetaineje/?$ index.php?p=25 [nc] RewriteRule ^reklama_svetaineje?$ index.php?p=25 [nc] RewriteRule ^reklama_laikrastyje/?$ index.php?p=15 [nc] RewriteRule ^reklama_laikrastyje?$ index.php?p=15 [nc] RewriteRule ^prenumeratos_uzsakymas/?$ /prenumerata.php [nc] RewriteRule ^prenumeratos_uzsakymas?$ /prenumerata.php [nc] RewriteRule ^newprenum/?$ /pren_all.php [nc] RewriteRule ^newprenum?$ /pren_all.php [nc] RewriteRule ^negaunantiems_laiku/?$ index.php?p=24 [nc] RewriteRule ^negaunantiems_laiku?$ index.php?p=24 [nc] RewriteRule ^naujienos/p(.*)$ /news.php?pg=$1 [nc] RewriteRule ^naujienos/?$ /news.php [nc] RewriteRule ^naujienos?$ /news.php [nc] RewriteRule ^lics/?$ /Lics.php [nc] RewriteRule ^lics?$ /Lics.php [nc] RewriteRule ^kontaktai/?$ index.php?p=18 [nc] RewriteRule ^kontaktai?$ index.php?p=18 [nc] RewriteRule ^atsisiusk_dienrasti/?$ /newspaper.php [nc] RewriteRule ^atsisiusk_dienrasti?$ /newspaper.php [nc] RewriteRule ^apie_leidejus/?$ /leidejai.php [nc] RewriteRule ^apie_leidejus?$ /leidejai.php [nc] RewriteRule ^akcijos/?$ index.php?p=13 [nc] RewriteRule ^akcijos?$ index.php?p=13 [nc] -- -- pagarbiai hostname _ Nerijus Skar?auskas Kompetencija: _ tarnybini? sto?i? konfig?ravimas ir prie?i?ra; _ sud?tingi IT sprendimai; _ kompiuteri? 
remontas ir prie?i?ra mobile _ +370 655 75577 e-mail _ nerijus at insecure.lt skype _ nerijus.skarzauskas jabber _ nerka at akl.lt ... | -------------- next part -------------- An HTML attachment was scrubbed... URL: From nerijus.skarzauskas at gmail.com Sun Nov 25 06:28:48 2012 From: nerijus.skarzauskas at gmail.com (=?ISO-8859-13?Q?Nerijus_Skar=FEauskas?=) Date: Sun, 25 Nov 2012 08:28:48 +0200 Subject: Need help convert apache rewrite rules to nginx Message-ID: Hello, need help with convert apache rewrite rules to nginx. My .htaccess: RewriteEngine on RewriteBase / ErrorDocument 404 / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^naujienos+ news.php/$1 [L] RewriteRule ^skelbimai/?$ index.php?p=20 [nc] RewriteRule ^skelbimai?$ index.php?p=20 [nc] RewriteRule ^reklama_svetaineje/?$ index.php?p=25 [nc] RewriteRule ^reklama_svetaineje?$ index.php?p=25 [nc] RewriteRule ^reklama_laikrastyje/?$ index.php?p=15 [nc] RewriteRule ^reklama_laikrastyje?$ index.php?p=15 [nc] RewriteRule ^prenumeratos_uzsakymas/?$ /prenumerata.php [nc] RewriteRule ^prenumeratos_uzsakymas?$ /prenumerata.php [nc] RewriteRule ^newprenum/?$ /pren_all.php [nc] RewriteRule ^newprenum?$ /pren_all.php [nc] RewriteRule ^negaunantiems_laiku/?$ index.php?p=24 [nc] RewriteRule ^negaunantiems_laiku?$ index.php?p=24 [nc] RewriteRule ^naujienos/p(.*)$ /news.php?pg=$1 [nc] RewriteRule ^naujienos/?$ /news.php [nc] RewriteRule ^naujienos?$ /news.php [nc] RewriteRule ^lics/?$ /Lics.php [nc] RewriteRule ^lics?$ /Lics.php [nc] RewriteRule ^kontaktai/?$ index.php?p=18 [nc] RewriteRule ^kontaktai?$ index.php?p=18 [nc] RewriteRule ^atsisiusk_dienrasti/?$ /newspaper.php [nc] RewriteRule ^atsisiusk_dienrasti?$ /newspaper.php [nc] RewriteRule ^apie_leidejus/?$ /leidejai.php [nc] RewriteRule ^apie_leidejus?$ /leidejai.php [nc] RewriteRule ^akcijos/?$ index.php?p=13 [nc] RewriteRule ^akcijos?$ index.php?p=13 [nc] -------------- next part -------------- An HTML attachment was 
scrubbed... URL: From edho at myconan.net Sun Nov 25 06:51:09 2012 From: edho at myconan.net (Edho Arief) Date: Sun, 25 Nov 2012 13:51:09 +0700 Subject: Need help convert apache rewrite rules to nginx In-Reply-To: References: Message-ID: 2012/11/25 Nerijus Skar?auskas : > Hello, > need help with convert apache rewrite rules to nginx. My .htaccess: > > > RewriteEngine on > RewriteBase / > ErrorDocument 404 / > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteRule ^naujienos+ news.php/$1 [L] > RewriteRule ^skelbimai/?$ index.php?p=20 [nc] > RewriteRule ^skelbimai?$ index.php?p=20 [nc] > RewriteRule ^reklama_svetaineje/?$ index.php?p=25 [nc] > RewriteRule ^reklama_svetaineje?$ index.php?p=25 [nc] > RewriteRule ^reklama_laikrastyje/?$ index.php?p=15 [nc] > RewriteRule ^reklama_laikrastyje?$ index.php?p=15 [nc] > RewriteRule ^prenumeratos_uzsakymas/?$ /prenumerata.php [nc] > RewriteRule ^prenumeratos_uzsakymas?$ /prenumerata.php [nc] > RewriteRule ^newprenum/?$ /pren_all.php [nc] > RewriteRule ^newprenum?$ /pren_all.php [nc] > RewriteRule ^negaunantiems_laiku/?$ index.php?p=24 [nc] > RewriteRule ^negaunantiems_laiku?$ index.php?p=24 [nc] > RewriteRule ^naujienos/p(.*)$ /news.php?pg=$1 [nc] > RewriteRule ^naujienos/?$ /news.php [nc] > RewriteRule ^naujienos?$ /news.php [nc] > RewriteRule ^lics/?$ /Lics.php [nc] > RewriteRule ^lics?$ /Lics.php [nc] > RewriteRule ^kontaktai/?$ index.php?p=18 [nc] > RewriteRule ^kontaktai?$ index.php?p=18 [nc] > RewriteRule ^atsisiusk_dienrasti/?$ /newspaper.php [nc] > RewriteRule ^atsisiusk_dienrasti?$ /newspaper.php [nc] > RewriteRule ^apie_leidejus/?$ /leidejai.php [nc] > RewriteRule ^apie_leidejus?$ /leidejai.php [nc] > RewriteRule ^akcijos/?$ index.php?p=13 [nc] > RewriteRule ^akcijos?$ index.php?p=13 [nc] > > > error_page 404 /; try_files $uri $uri/ @rewrites; location @rewrites { ...... 
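# A hedged illustration of what goes in place of the dots above: each
# .htaccess rule becomes one nginx rewrite. nginx URIs carry a leading
# slash that the .htaccess patterns omit, and PCRE's (?i) stands in for
# Apache's [nc] (case-insensitive) flag. Only a few rules are shown;
# the rest follow the same pattern.
rewrite "(?i)^/skelbimai/?$" /index.php?p=20 last;
rewrite "(?i)^/naujienos/p(.*)$" /news.php?pg=$1 last;
rewrite "(?i)^/naujienos/?$" /news.php last;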
} From nginx-forum at nginx.us Sun Nov 25 14:52:43 2012 From: nginx-forum at nginx.us (gus0253) Date: Sun, 25 Nov 2012 09:52:43 -0500 Subject: custom 502 error page is ignored Message-ID: <0d84b4f23c1496559fa88d27142e371e.NginxMailingListEnglish@forum.nginx.org> Dear all, I'm currently wondering why nginx ignores my custom 502 Bad Gateway page and instead serves the internal 502 error page. I'm using nginx version 1.2.4 and php-fpm 5.4.8 The behaivior occures every time and is reproducible: If I stop php-fpm and request an existing php file my custom 502 error page is shown, so everything is fine. But if I request a non existing php file nginx shows me the nginx internal 502 error page. I don't know why my custom 502 error page is ignored, because it's a static html file. Do you have any tipps for me? Part from the debug log: 2012/11/25 15:29:09 [debug] 10777#0: *26 http script var: "/index2.php" 2012/11/25 15:29:09 [debug] 10777#0: *26 trying to use file: "/index2.php" "/usr/share/nginx/html/index2.php" 2012/11/25 15:29:09 [debug] 10777#0: *26 trying to use file: "=404" "/usr/share/nginx/html=404" 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 404, "/index2.php?" a:1, c:1 2012/11/25 15:29:09 [debug] 10777#0: *26 http special response: 404, "/index2.php?" 2012/11/25 15:29:09 [debug] 10777#0: *26 internal redirect: "/404.html?" 2012/11/25 15:29:09 [debug] 10777#0: *26 rewrite phase: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "/" 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "502.html" 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "404.html" 2012/11/25 15:29:09 [debug] 10777#0: *26 using configuration "=/404.html" ... http run request: "/404.html?" 
2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream check client, write event:1, "/404.html" 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream recv(): -1 (11: Resource temporarily unavailable) 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream request: "/404.html?" 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream process header 2012/11/25 15:29:09 [error] 10777#0: *26 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.2.2, server: localhost, request: "GET /index2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1" 2012/11/25 15:29:09 [debug] 10777#0: *26 http next upstream, 2 2012/11/25 15:29:09 [debug] 10777#0: *26 free rr peer 1 4 2012/11/25 15:29:09 [debug] 10777#0: *26 finalize http upstream request: 502 2012/11/25 15:29:09 [debug] 10777#0: *26 finalize http fastcgi request 2012/11/25 15:29:09 [debug] 10777#0: *26 free rr peer 0 0 2012/11/25 15:29:09 [debug] 10777#0: *26 close http upstream connection: 8 2012/11/25 15:29:09 [debug] 10777#0: *26 free: 0000000001F7C560, unused: 48 2012/11/25 15:29:09 [debug] 10777#0: *26 event timer del: 8: 1353853809503 2012/11/25 15:29:09 [debug] 10777#0: *26 reusable connection: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 502, "/404.html?" a:1, c:1 2012/11/25 15:29:09 [debug] 10777#0: *26 http special response: 502, "/404.html?" 2012/11/25 15:29:09 [debug] 10777#0: *26 http set discard body 2012/11/25 15:29:09 [debug] 10777#0: *26 HTTP/1.1 502 Bad Gateway Server: nginx/1.2.4 Date: Sun, 25 Nov 2012 14:29:09 GMT Content-Type: text/html Content-Length: 172 Connection: keep-alive 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:1 f:0 0000000001F385A8, pos 0000000001F385A8, size: 156 file: 0, size: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter: l:0 f:0 s:156 2012/11/25 15:29:09 [debug] 10777#0: *26 http output filter "/404.html?" 2012/11/25 15:29:09 [debug] 10777#0: *26 http copy filter: "/404.html?" 
2012/11/25 15:29:09 [debug] 10777#0: *26 http postpone filter "/404.html?" 0000000001F38778 2012/11/25 15:29:09 [debug] 10777#0: *26 write old buf t:1 f:0 0000000001F385A8, pos 0000000001F385A8, size: 156 file: 0, size: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:0 f:0 0000000000000000, pos 000000000069B3E0, size: 120 file: 0, size: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:0 f:0 0000000000000000, pos 000000000069A1A0, size: 52 file: 0, size: 0 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter: l:1 f:0 s:328 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter limit 0 2012/11/25 15:29:09 [debug] 10777#0: *26 writev: 328 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter 0000000000000000 2012/11/25 15:29:09 [debug] 10777#0: *26 http copy filter: 0 "/404.html?" 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 0, "/404.html?" a:1, c:1 2012/11/25 15:29:09 [debug] 10777#0: *26 set http keepalive handler 2012/11/25 15:29:09 [debug] 10777#0: *26 http close request 2012/11/25 15:29:09 [debug] 10777#0: *26 http log handler Example configuration: upstream php_fpm { server 127.0.0.1:9000; } server { listen 80; server_name localhost; root /usr/share/nginx/html; error_log /var/log/nginx/default.log debug; location / { index index.php index.html index.htm; } error_page 404 /404.html; error_page 502 /502.html; location = /404.html { fastcgi_intercept_errors on; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/404.php; fastcgi_index index.php; fastcgi_pass php_fpm; } location = /502.html { } location ~* \.php$ { try_files $uri =404; fastcgi_intercept_errors on; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_index index.php; fastcgi_pass php_fpm; } } Best regards, Gus Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233209,233209#msg-233209 From nginx-forum at nginx.us Mon Nov 26 08:24:20 2012 From: nginx-forum at 
nginx.us (hide) Date: Mon, 26 Nov 2012 03:24:20 -0500 Subject: Turn basic authentication on and off for specific HTTP user agent Message-ID: <20e5722ddd757472f91e25a021509d0e.NginxMailingListEnglish@forum.nginx.org> Hello All! Is it possible to turn authentication on and off for a specific user agent in some location? When I configure the following location /specloc/ { if ($http_user_agent ~ MSIE) { auth_basic "private area"; auth_basic_user_file /etc/nginx/htpasswd; } #... } my "nginx -t" prints nginx: [emerg] "auth_basic" directive is not allowed here in /etc/nginx/nginx.conf:75 nginx: configuration file /etc/nginx/nginx.conf test failed Thank you if you answer. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233214,233214#msg-233214 From mdounin at mdounin.ru Mon Nov 26 08:36:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 26 Nov 2012 12:36:24 +0400 Subject: custom 502 error page is ignored In-Reply-To: <0d84b4f23c1496559fa88d27142e371e.NginxMailingListEnglish@forum.nginx.org> References: <0d84b4f23c1496559fa88d27142e371e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121126083624.GE40452@mdounin.ru> Hello! On Sun, Nov 25, 2012 at 09:52:43AM -0500, gus0253 wrote: > Dear all, > > I'm currently wondering why nginx ignores my custom 502 Bad Gateway page and > instead serves the internal 502 error page. > I'm using nginx version 1.2.4 and php-fpm 5.4.8 > > The behaivior occures every time and is reproducible: > If I stop php-fpm and request an existing php file my custom 502 error page > is shown, so everything is fine. > But if I request a non existing php file nginx shows me the nginx internal > 502 error page. > > I don't know why my custom 502 error page is ignored, because it's a static > html file. > Do you have any tipps for me? [...] The 502 error happens while processing 404 error, and by default nginx won't do another error_page redirection after first one. 
If you really want nginx to handle multiple error_page redirections, use the "recursive_error_pages" directive, see here: http://nginx.org/r/recursive_error_pages You should be careful to avoid loops though. -- Maxim Dounin http://nginx.com/support.html From ru at nginx.com Mon Nov 26 08:41:39 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 26 Nov 2012 12:41:39 +0400 Subject: custom 502 error page is ignored In-Reply-To: <0d84b4f23c1496559fa88d27142e371e.NginxMailingListEnglish@forum.nginx.org> References: <0d84b4f23c1496559fa88d27142e371e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121126084139.GB85340@lo0.su> On Sun, Nov 25, 2012 at 09:52:43AM -0500, gus0253 wrote: > Dear all, > > I'm currently wondering why nginx ignores my custom 502 Bad Gateway page and > instead serves the internal 502 error page. > I'm using nginx version 1.2.4 and php-fpm 5.4.8 > > The behaivior occures every time and is reproducible: > If I stop php-fpm and request an existing php file my custom 502 error page > is shown, so everything is fine. > But if I request a non existing php file nginx shows me the nginx internal > 502 error page. > > I don't know why my custom 502 error page is ignored, because it's a static > html file. > Do you have any tipps for me? Querying index2.php results in error 404 handled by location "/404.html" which in turn results in (recursive) error 502 if php-fpm is stopped. http://nginx.org/r/recursive_error_pages (Turn it on in the location "~* \.php$".) > Part from the debug log: > 2012/11/25 15:29:09 [debug] 10777#0: *26 http script var: "/index2.php" > 2012/11/25 15:29:09 [debug] 10777#0: *26 trying to use file: "/index2.php" > "/usr/share/nginx/html/index2.php" > 2012/11/25 15:29:09 [debug] 10777#0: *26 trying to use file: "=404" > "/usr/share/nginx/html=404" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 404, > "/index2.php?" 
a:1, c:1 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http special response: 404, > "/index2.php?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 internal redirect: "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 rewrite phase: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "/" > 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "502.html" > 2012/11/25 15:29:09 [debug] 10777#0: *26 test location: "404.html" > 2012/11/25 15:29:09 [debug] 10777#0: *26 using configuration "=/404.html" > ... > http run request: "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream check client, write > event:1, "/404.html" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream recv(): -1 (11: > Resource temporarily unavailable) > 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream request: > "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http upstream process header > 2012/11/25 15:29:09 [error] 10777#0: *26 connect() failed (111: Connection > refused) while connecting to upstream, client: 10.0.2.2, server: localhost, > request: "GET /index2.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", > host: "127.0.0.1" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http next upstream, 2 > 2012/11/25 15:29:09 [debug] 10777#0: *26 free rr peer 1 4 > 2012/11/25 15:29:09 [debug] 10777#0: *26 finalize http upstream request: > 502 > 2012/11/25 15:29:09 [debug] 10777#0: *26 finalize http fastcgi request > 2012/11/25 15:29:09 [debug] 10777#0: *26 free rr peer 0 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 close http upstream connection: 8 > 2012/11/25 15:29:09 [debug] 10777#0: *26 free: 0000000001F7C560, unused: 48 > 2012/11/25 15:29:09 [debug] 10777#0: *26 event timer del: 8: 1353853809503 > 2012/11/25 15:29:09 [debug] 10777#0: *26 reusable connection: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 502, > "/404.html?" a:1, c:1 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http special response: 502, > "/404.html?" 
> 2012/11/25 15:29:09 [debug] 10777#0: *26 http set discard body > 2012/11/25 15:29:09 [debug] 10777#0: *26 HTTP/1.1 502 Bad Gateway > Server: nginx/1.2.4 > Date: Sun, 25 Nov 2012 14:29:09 GMT > Content-Type: text/html > Content-Length: 172 > Connection: keep-alive > > 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:1 f:0 > 0000000001F385A8, pos 0000000001F385A8, size: 156 file: 0, size: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter: l:0 f:0 s:156 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http output filter "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http copy filter: "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http postpone filter "/404.html?" > 0000000001F38778 > 2012/11/25 15:29:09 [debug] 10777#0: *26 write old buf t:1 f:0 > 0000000001F385A8, pos 0000000001F385A8, size: 156 file: 0, size: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:0 f:0 > 0000000000000000, pos 000000000069B3E0, size: 120 file: 0, size: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 write new buf t:0 f:0 > 0000000000000000, pos 000000000069A1A0, size: 52 file: 0, size: 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter: l:1 f:0 s:328 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter limit 0 > 2012/11/25 15:29:09 [debug] 10777#0: *26 writev: 328 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http write filter 0000000000000000 > 2012/11/25 15:29:09 [debug] 10777#0: *26 http copy filter: 0 "/404.html?" > 2012/11/25 15:29:09 [debug] 10777#0: *26 http finalize request: 0, > "/404.html?" 
a:1, c:1 > 2012/11/25 15:29:09 [debug] 10777#0: *26 set http keepalive handler > 2012/11/25 15:29:09 [debug] 10777#0: *26 http close request > 2012/11/25 15:29:09 [debug] 10777#0: *26 http log handler > > > Example configuration: > > upstream php_fpm { > server 127.0.0.1:9000; > } > > server { > listen 80; > server_name localhost; > root /usr/share/nginx/html; > > error_log /var/log/nginx/default.log debug; > > location / { > index index.php index.html index.htm; > } > > error_page 404 /404.html; > error_page 502 /502.html; > location = /404.html { > fastcgi_intercept_errors on; > include /etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root/404.php; > fastcgi_index index.php; > fastcgi_pass php_fpm; > } > location = /502.html { > } > location ~* \.php$ { > try_files $uri =404; > fastcgi_intercept_errors on; > include /etc/nginx/fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_index index.php; > fastcgi_pass php_fpm; > } > } > > Best regards, > Gus From iugacristian at ymail.com Mon Nov 26 11:57:05 2012 From: iugacristian at ymail.com (Cristian Iuga) Date: Mon, 26 Nov 2012 03:57:05 -0800 (PST) Subject: "/stats" question (webalizer) Message-ID: <1353931025563-7582653.post@n2.nabble.com> Hello, I have configured my web server as shown here: http://www.howtoforge.com/perfect-server-centos-6.3-x86_64-nginx-courier-ispconfig-3 My question is related to the domain.com/stats page. 1. (How) Can I disable it so it does not show Webalizer stats? 2. (How) Can I make it show the stats at a different url known only by me? 3. (How) Can I let my CodeIgniter controller use this url (domain.com/stats)? Now I see everything fine except the domain.com/stats page, which shows me Webalizer stats instead of my CodeIgniter view. Can anyone help? Cristi -- View this message in context: http://nginx.2469901.n2.nabble.com/stats-question-webalizer-tp7582653.html Sent from the nginx mailing list archive at Nabble.com. 
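The three options above can be sketched in nginx configuration terms. This is only a sketch: the Webalizer output directory, auth file, and CodeIgniter front controller paths below are assumptions, not taken from the howto.

```nginx
# Hypothetical sketch -- all paths are assumptions.

# 1. Disable the Webalizer stats entirely:
location /stats {
    return 404;
}

# 2. Serve the stats from a secret path, protected by basic auth:
location /private-stats {
    alias /var/www/example.com/web/stats;        # assumed Webalizer output dir
    auth_basic           "Stats";
    auth_basic_user_file /etc/nginx/stats.htpasswd;
}

# 3. With no /stats location defined, the request falls through to the
#    generic location and reaches the CodeIgniter front controller:
location / {
    try_files $uri $uri/ /index.php?$args;
}
```

Note that with an ISPConfig setup the /stats location is usually generated per-vhost, so the generated block would have to be removed or overridden rather than just adding these lines.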
From nginx-forum at nginx.us Mon Nov 26 12:16:32 2012 From: nginx-forum at nginx.us (mauro76) Date: Mon, 26 Nov 2012 07:16:32 -0500 Subject: proxy_next_upstream, only "connect" timeout?, try 2 In-Reply-To: <20120615102935.GP31671@mdounin.ru> References: <20120615102935.GP31671@mdounin.ru> Message-ID: I'm interested in the subject. On my current setup, I'm using the timeout option to make sure the request is passed to the next upstream if the first server is down. If the request is hanging, it's possible the request coming in is a bad request, which could slow down the full cluster. I would like to go to the next upstream only on connection timeout. I wonder if you could provide two additional options, "read_timeout" and "connect_timeout", leaving "timeout" unchanged. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,227552,233222#msg-233222 From nginx-forum at nginx.us Mon Nov 26 21:50:20 2012 From: nginx-forum at nginx.us (hristoc) Date: Mon, 26 Nov 2012 16:50:20 -0500 Subject: How to bind nginx to ipv4 and ipv6 interface ? Message-ID: Hello, can anyone tell me what is wrong with my nginx 1.2.5, compiled with ipv6 support? I try to start nginx on both ipv4 and ipv6. I read on the internet that putting listen [::]:80; in my config file is enough and nginx will bind to both ipv4 and ipv6 interfaces, or even that if I compile nginx with ipv6 support then listen 80; will bind to ip4 and ip6, but I receive the following error: [emerg] 15728#0: bind() to [::]:80 failed (98: Address already in use Any hints on how to start nginx on both interfaces? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233246,233246#msg-233246 From nginx-forum at nginx.us Tue Nov 27 02:30:51 2012 From: nginx-forum at nginx.us (bar_gra) Date: Mon, 26 Nov 2012 21:30:51 -0500 Subject: Problem with "if" and "$remote_addr" Message-ID: <55379fc16d5226c33a7b538519babe83.NginxMailingListEnglish@forum.nginx.org> Hello, I have a strange problem. 
If I open a phpinfo file I get: _SERVER["REMOTE_ADDR"] 80.239.242.1 _SERVER["HTTP_X_FORWARDED_FOR"] 79.173.35.1 So my IP is 80.239.242.1 I want to block access to a location from IPs other than the IP in the URL. For example location ~ ^/([0-9\.]+)/(.*?)$ { if ($remote_addr != $1) { return 404; } } But it doesn't work correctly :( 1.) I print the value of the variable $remote_addr, and the result is correct. The result is: 80.239.242.1 location ~ ^/([0-9\.]+)/(.*?)$ { echo $remote_addr; } 2.) I check that $remote_addr is exactly parameter 1, and for the url "/80.239.242.1/1" I get location ~ ^/([0-9\.]+)/(.*?)$ { if ($remote_addr != $1) { echo "$remote_addr != $1"; } } And the result is "79.173.35.1 != 80.239.242.1" The question: why does the variable $remote_addr change value when I use "if"? Sorry for my English. Thanks for any reply. Bart Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233248,233248#msg-233248 From bart.vandeenen at spilgames.com Tue Nov 27 10:28:40 2012 From: bart.vandeenen at spilgames.com (Bart van Deenen) Date: Tue, 27 Nov 2012 11:28:40 +0100 Subject: nginx+lua reverse proxy empty body Message-ID: <50B495D8.8020106@spilgames.com> Hi all I'm trying to do on-the-fly changes on the pages of a site using lua. I've set up an nginx reverse proxy, and some lua code to do the replacements, and I notice irreproducible (timing?) situations where the proxied body that is passed to lua is empty. I know my code works in some cases, but I can't figure out what makes it unreliable. nginx.conf: worker_processes 1; error_log logs/error.log debug; events { worker_connections 1024; } http { server { client_body_in_single_buffer on; listen 9001; location / { proxy_pass http://www.spelletjes.nl:80; proxy_set_header X-Real-IP $remote_addr; body_filter_by_lua ' if ngx.arg[1] ~= "" then ngx.arg[1] = string.gsub(ngx.arg[1], "Speel", "NGINX") else print(ngx.var.uri .. " has empty body" .. 
ngx.arg[1]) end '; } } } The problem, basically, is that ngx.arg[1] is an empty string (sometimes; timing dependent?) on urls that are definitely not empty. So what am I doing wrong? I am using openresty 1.2.4.9 (nginx 1.2.4 + ngx_lua-0.7.5) Typical message in logs/error.log: 67 2012/11/26 14:53:59 [notice] 19291#0: *55 [lua] [string "body_filter_by_lua"]:7: / has empty body while sending to client, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://212.72.60.220:80/ ", host: "localhost:9001" Thanks for any answers Bart From dewanggaba at gmail.com Tue Nov 27 12:08:21 2012 From: dewanggaba at gmail.com (antituhan) Date: Tue, 27 Nov 2012 04:08:21 -0800 (PST) Subject: Is it possible using multiple directive on different root location? (Without Symlinks) In-Reply-To: <20120511080357.GH457@craic.sysops.org> References: <1335864986389-7516384.post@n2.nabble.com> <1335896925.4775.25.camel@portable-evil> <1335925217815-7518776.post@n2.nabble.com> <1335934594.4775.45.camel@portable-evil> <1336049994106-7523526.post@n2.nabble.com> <20120503224309.GB11895@craic.sysops.org> <1336712290522-7549205.post@n2.nabble.com> <20120511080357.GH457@craic.sysops.org> Message-ID: <1354018101825-7582658.post@n2.nabble.com> Hello f, Sorry for the long silence in this thread; I was busy with another project. I want to continue with this case, and I tried a directive like this http://fpaste.org/n8FS/ I got the reference from here http://serverfault.com/questions/415538/nginx-alias-and-regex I changed $document_root$fastcgi_script_name into $document_root$1, but I still get 403 Forbidden when accessing http://static.antituhan.com/test/tehbotol.php ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/Is-it-possible-using-multiple-directive-on-different-root-location-Without-Symlinks-tp7516384p7582658.html Sent from the nginx mailing list archive at Nabble.com. 
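For the alias-plus-regex case above, one commonly suggested pattern (a sketch only; the docroot path is an assumption, and a 403 can equally come from filesystem permissions) is to let nginx resolve the aliased path itself and hand $request_filename to the FastCGI backend instead of concatenating $document_root manually:

```nginx
# Hypothetical sketch for serving PHP under a regex location with alias.
location ~ ^/test/(.+\.php)$ {
    alias /srv/static/test/$1;            # assumed docroot; $1 is the captured script name
    include /etc/nginx/fastcgi_params;
    # With alias, $request_filename already resolves to the aliased file,
    # so SCRIPT_FILENAME does not need $document_root prepended by hand.
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass 127.0.0.1:9000;
}
```

If a 403 persists with this, checking that the nginx and php-fpm users can traverse and read the target path is the usual next step.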
From radecki.rafal at gmail.com Tue Nov 27 12:57:18 2012 From: radecki.rafal at gmail.com (=?ISO-8859-2?Q?Rafa=B3_Radecki?=) Date: Tue, 27 Nov 2012 13:57:18 +0100 Subject: Syslog-ng? Message-ID: Hi all. I am currently deploying an environment with nginx webservers. I would like to store logs centrally with syslog-ng. I would like to make it as efficient as it can be; I've found two "howtos": http://pastebin.com/PCYtve9s http://grokbase.com/t/centos/centos/113ryagfqe/remote-logging-nginx-or-other-non-syslog-enabled-stuff (Ilyas's responses) which use fifos. What do you think about using fifos? Is it more efficient than logging to/through a file? What about sockets? Are there any other alternatives? What are your experiences? Best regards, Rafal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From citrin at citrin.ru Tue Nov 27 13:14:14 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Tue, 27 Nov 2012 17:14:14 +0400 Subject: Syslog-ng? In-Reply-To: References: Message-ID: <50B4BCA6.8030100@citrin.ru> On 11/27/12 16:57, Rafał Radecki wrote: > I am currently deploying an environment with nginx webservers. I would like to > store logs centrally with syslog-ng. I would like to make it as efficient as it > can be, I've found two "howtos": > http://pastebin.com/PCYtve9s > http://grokbase.com/t/centos/centos/113ryagfqe/remote-logging-nginx-or-other-non-syslog-enabled-stuff (Ilyas's > responses) > which use fifos. > > What do you think about using fifos? Is it more efficient than logging > to/through a file? What about sockets? Are there any other alternatives? What > are your experiences? Writing logs from nginx to a fifo is a bad idea. If you need efficient logging on a loaded server, write the log to a file. 1. The file can be rotated as often as needed, and moved to a dir accessible by rsync. 2. Files from this dir can be rsync-ed to the central server. 
If you need near-real-time logs on the central server (several minutes of lag is not acceptable), try adding something like this to the syslog-ng config: source nginx_access { program("tail -F -n0 /var/log/nginx/access.log"); }; But it will be less efficient and less reliable - some messages can be lost on syslog-ng restarts, central log server reboots, etc. -- Anton Yuzhaninov From r at roze.lv Tue Nov 27 13:30:31 2012 From: r at roze.lv (Reinis Rozitis) Date: Tue, 27 Nov 2012 15:30:31 +0200 Subject: Syslog-ng? In-Reply-To: References: Message-ID: You can try http://www.grid.net.ru/nginx/udplog.en.html (no idea if that still compiles with the newer nginx versions). You might also contact Valery Kholodkov directly - he has written a few new modules http://www.nginxguts.com/2012/08/better-logging-for-nginx/ (then again those links provided are broken atm so not sure about the status of those). rr From radecki.rafal at gmail.com Tue Nov 27 13:39:38 2012 From: radecki.rafal at gmail.com (=?ISO-8859-2?Q?Rafa=B3_Radecki?=) Date: Tue, 27 Nov 2012 14:39:38 +0100 Subject: Syslog-ng? In-Reply-To: <50B4BCA6.8030100@citrin.ru> References: <50B4BCA6.8030100@citrin.ru> Message-ID: Why is logging to a fifo a bad idea? Best regards, Rafal. 2012/11/27 Anton Yuzhaninov > On 11/27/12 16:57, Rafał Radecki wrote: > >> I am currently deploying an environment with nginx webservers. I would >> like to >> store logs centrally with syslog-ng. I would like to make it as efficient >> as it >> can be, I've found two "howtos": >> http://pastebin.com/PCYtve9s >> http://grokbase.com/t/centos/**centos/113ryagfqe/remote-** >> logging-nginx-or-other-non-**syslog-enabled-stuff(Ilyas's >> responses) >> which use fifos. >> >> What do you think about using fifos? Is it more efficient than logging >> to/through a file? What about sockets? Are there any other alternatives? >> What >> are your experiences? >> > > Writing logs from nginx to a fifo is a bad idea. 
> > If you need efficient logging on a loaded server, write the log to a file. > > 1. The file can be rotated as often as needed, and moved to a dir accessible by > rsync. > 2. Files from this dir can be rsync-ed to the central server. > > If you need near-real-time logs on the central server (several minutes of lag is > not acceptable), try adding something like this to the syslog-ng config: > > source nginx_access { program("tail -F -n0 /var/log/nginx/access.log"); }; > > But it will be less efficient and less reliable - some messages can be > lost on syslog-ng restarts, central log server reboots, etc. > > -- > Anton Yuzhaninov > > ______________________________**_________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/**mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Nov 27 14:07:48 2012 From: nginx-forum at nginx.us (philipp) Date: Tue, 27 Nov 2012 09:07:48 -0500 Subject: Random order of configuration file reading Message-ID: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> Hello, we have a bunch of servers which are configured 100% identically, except for host-specific settings like the ip address in the listener, using chef/puppet. Nginx does not seem to read include/config files in the same order on each server. For example, we haven't defined a default vhost on each server... so nginx uses the first loaded file, which is exampleA.com on server 1 and exampleB.com on server 2. Furthermore, we use the upstream check status module; the status page is randomly ordered on each server... Is it possible to configure nginx to read configs in alphabetical order? For example vhosts/exampleA.com (1.) vhosts/exampleB.com (2.) ... 
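Independent of include order, the choice of default vhost can be pinned explicitly with the default_server parameter on the listen directive, so it no longer matters which file happens to be loaded first. A sketch (the server names and the catch-all behavior are placeholders):

```nginx
# Sketch: pin the default vhost explicitly instead of relying on file order.
server {
    listen 80 default_server;   # always chosen when no server_name matches
    server_name _;
    return 444;                 # drop unmatched requests, or serve a catch-all site
}

server {
    listen 80;
    server_name exampleA.com;
    # ...
}
```

This solves the "which vhost is default" symptom; the ordering of a status page generated by a third-party module would still follow whatever order that module sees the upstreams in.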
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233269,233269#msg-233269 From mdounin at mdounin.ru Tue Nov 27 14:26:48 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Nov 2012 18:26:48 +0400 Subject: nginx-1.3.9 Message-ID: <20121127142648.GS40452@mdounin.ru> Changes with nginx 1.3.9 27 Nov 2012 *) Feature: support for chunked transfer encoding while reading client request body. *) Feature: the $request_time and $msec variables can now be used not only in the "log_format" directive. *) Bugfix: cache manager and cache loader processes might not be able to start if more than 512 listen sockets were used. *) Bugfix: in the ngx_http_dav_module. -- Maxim Dounin http://nginx.com/support.html From aweber at comcast.net Tue Nov 27 15:35:34 2012 From: aweber at comcast.net (AJ Weber) Date: Tue, 27 Nov 2012 10:35:34 -0500 Subject: nginx-1.3.9 In-Reply-To: <20121127142648.GS40452@mdounin.ru> References: <20121127142648.GS40452@mdounin.ru> Message-ID: <50B4DDC6.1010500@comcast.net> Does the "support for chunked transfer encoding..." mean I don't need the extra "chunkin" module to be compiled and linked-in? Is there a new list of directives I should switch-over to (replacing the external, chunkin ones)? Maybe there's a more thorough list of release notes, I'm just not sure where they are offhand. Thanks, AJ On 11/27/2012 9:26 AM, Maxim Dounin wrote: > Changes with nginx 1.3.9 27 Nov 2012 > > *) Feature: support for chunked transfer encoding while reading client > request body. > > *) Feature: the $request_time and $msec variables can now be used not > only in the "log_format" directive. > > *) Bugfix: cache manager and cache loader processes might not be able to > start if more than 512 listen sockets were used. > > *) Bugfix: in the ngx_http_dav_module. > > From vbart at nginx.com Tue Nov 27 15:55:56 2012 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 27 Nov 2012 19:55:56 +0400 Subject: nginx-1.3.9 In-Reply-To: <50B4DDC6.1010500@comcast.net> References: <20121127142648.GS40452@mdounin.ru> <50B4DDC6.1010500@comcast.net> Message-ID: <201211271955.56302.vbart@nginx.com> On Tuesday 27 November 2012 19:35:34 AJ Weber wrote: > Does the "support for chunked transfer encoding..." mean I don't need > the extra "chunkin" module to be compiled and linked-in? Yes. > Is there a new list of directives I should switch-over to (replacing the > external, chunkin ones)? Maybe there's a more thorough list of release > notes, I'm just not sure where they are offhand. There are no special directives at all. It just works. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From aweber at comcast.net Tue Nov 27 16:00:52 2012 From: aweber at comcast.net (AJ Weber) Date: Tue, 27 Nov 2012 11:00:52 -0500 Subject: nginx-1.3.9 In-Reply-To: <201211271955.56302.vbart@nginx.com> References: <20121127142648.GS40452@mdounin.ru> <50B4DDC6.1010500@comcast.net> <201211271955.56302.vbart@nginx.com> Message-ID: <50B4E3B4.8000308@comcast.net> "There are no special directives at all. It just works. " > Those are the best updates! ;) > Thank you! On 11/27/2012 10:55 AM, Valentin V. Bartenev wrote: > On Tuesday 27 November 2012 19:35:34 AJ Weber wrote: >> Does the "support for chunked transfer encoding..." mean I don't need >> the extra "chunkin" module to be compiled and linked-in? > Yes. > >> Is there a new list of directives I should switch-over to (replacing the >> external, chunkin ones)? Maybe there's a more thorough list of release >> notes, I'm just not sure where they are offhand. > There are no special directives at all. It just works. > > wbr, Valentin V. 
Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From gautier.difolco at gmail.com Tue Nov 27 16:10:02 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 27 Nov 2012 17:10:02 +0100 Subject: Configuring Hg as backend Message-ID: Hi all, I'm trying to set up Hg behind nginx (web view, clone, pull, push) following the official wiki: http://mercurial.selenic.com/wiki/HgServeNginx. I have created an empty repository ($ hg init t1), set up the following file (hgweb): [web] allow_push = * push_ssl = false baseurl = http://localhost/hg/ [paths] / = /tmp/hg/* And the following nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log error; sendfile on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; server { listen 80; server_name localhost; location ~ ^/hg/(.*)$ { proxy_pass http://127.0.0.1:8000/$1; } } I launch $ hg serve --web-conf hgweb. The web view works well, but $ hg clone http://localhost/hg/t1 fails. It prints abort: 'http://localhost/hg/t1/' does not appear to be an hg repository: ---%<--- (text/html; charset=UTF-8) **html output** ---%<--- ! instead of $ hg clone http://localhost:8000/t1 works. 
Requests are different: - With nginx: 127.0.0.1 - - [27/Nov/2012 16:21:08] "GET /t1/ HTTP/1.0" 200 - 127.0.0.1 - - [27/Nov/2012 16:21:08] "GET /t1/ HTTP/1.0" 200 - 127.0.0.1 - - [27/Nov/2012 16:21:08] "GET /t1/ HTTP/1.0" 200 - 127.0.0.1 - - [27/Nov/2012 16:21:08] "GET /t1/.hg/requires HTTP/1.0" 404 - 127.0.0.1 - - [27/Nov/2012 16:21:08] "GET /t1/.hg/00changelog.i HTTP/1.0" 404 - - Without nginx: 127.0.0.1 - - [27/Nov/2012 16:21:32] "GET /t1/?cmd=capabilities HTTP/1.1" 200 - 127.0.0.1 - - [27/Nov/2012 16:21:32] "GET /t1/?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D 127.0.0.1 - - [27/Nov/2012 16:21:32] "GET /t1/?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases 127.0.0.1 - - [27/Nov/2012 16:21:32] "GET /t1/?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks Has anyone here had the same issue? How did you solve the problem? Do you have any ideas on how to solve it? Thanks in advance for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Nov 27 16:23:58 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Nov 2012 20:23:58 +0400 Subject: Configuring Hg as backend In-Reply-To: References: Message-ID: <20121127162358.GA40452@mdounin.ru> Hello! On Tue, Nov 27, 2012 at 05:10:02PM +0100, Gautier DI FOLCO wrote: > Hi all, > > I'm trying to set up Hg behind nginx (web view, clone, pull, push) > following the > official wiki: http://mercurial.selenic.com/wiki/HgServeNginx. 
> I have created an empty repository ($ hg init t1), set up the following > file (hgweb): > [web] > allow_push = * > push_ssl = false > baseurl = http://localhost/hg/ > > [paths] > / = /tmp/hg/* > > And the following nginx.conf: > user nginx; > worker_processes 1; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > events { > worker_connections 1024; > } > http { > include /etc/nginx/mime.types; > default_type application/octet-stream; > access_log /var/log/nginx/access.log main; > error_log /var/log/nginx/error.log error; > sendfile on; > keepalive_timeout 65; > include /etc/nginx/conf.d/*.conf; > server { > listen 80; > server_name localhost; > location ~ ^/hg/(.*)$ { > proxy_pass http://127.0.0.1:8000/$1; > } [...] > is there some of you who have had the same issue? How did you solve the > problem? > Have you got ideas on how to solve it? You essentially asked nginx to drop request arguments here due to proxy_pass with variables being handled specially by nginx. Use this instead: location /hg/ { proxy_pass http://127.0.0.1:8080/; } More details can be found here: http://nginx.org/r/proxy_pass http://nginx.org/r/location -- Maxim Dounin http://nginx.com/support.html From gautier.difolco at gmail.com Tue Nov 27 16:28:41 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 27 Nov 2012 17:28:41 +0100 Subject: Configuring Hg as backend In-Reply-To: <20121127162358.GA40452@mdounin.ru> References: <20121127162358.GA40452@mdounin.ru> Message-ID: 2012/11/27 Maxim Dounin > You essentially asked nginx to drop request arguments here due to > proxy_pass with variables being handled specially by nginx. Use > this instead: > > location /hg/ { > proxy_pass http://127.0.0.1:8080/; > } > > More details can be found here: > > http://nginx.org/r/proxy_pass > http://nginx.org/r/location > > It works! So fast, thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kiwi at oav.net Tue Nov 27 16:26:52 2012 From: kiwi at oav.net (Xavier Beaudouin) Date: Tue, 27 Nov 2012 17:26:52 +0100 Subject: Syslog-ng? In-Reply-To: References: Message-ID: <50B4E9CC.7000303@oav.net> Hi ! On 11/27/12 14:30, Reinis Rozitis wrote: > You can try http://www.grid.net.ru/nginx/udplog.en.html (no idea if > that still compiles with the newer nginx versions). > > You might also contact Valery Kholodkov directly - he has written few > new modules http://www.nginxguts.com/2012/08/better-logging-for-nginx/ > (then again those links provided are broken atm so not sure about the > status of those). I use this kind of things with a central syslog-ng server. I have modified a bit the ngx_udplogger to my own setup. You can find the source here : https://redmine.oav.net/projects/openvisp/repository/revisions/master/show/ngx_udplogger2 (and it compiles with recent nginx). Regards, Xavier From gautier.difolco at gmail.com Tue Nov 27 16:41:36 2012 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Tue, 27 Nov 2012 17:41:36 +0100 Subject: Configuring Hg as backend In-Reply-To: <50B4EB4C.4010608@gmail.com> References: <50B4EB4C.4010608@gmail.com> Message-ID: 2012/11/27 Volodymyr Kostyrko > Maxim is right about arguments. But why not just use uwsgi? > > location ~ ^/hg(/.*) { > include uwsgi_params; > uwsgi_param SCRIPT_NAME ''; > uwsgi_param PATH_INFO $1; > uwsgi_param HOST $host; > uwsgi_param REMOTE_USER $remote_user; > uwsgi_pass unix:/path/to/hg/.socket; > } > > location /static/ { > root /usr/local/lib/python2.7/site-**packages/mercurial/templates; > } > > uwsgi_socket=/path/to/hg/.**socket > uwsgi_flags='-C -L -M --threads 10 --file /where/is/hg/hgweb.wsgi' > > hgweb.conf: > [web] > prefix = /hg > baseurl = http://hostname/hg/ > > [paths] > / = /where/is/hg/* > Thanks for your answer. I didn't try this way because I didn't know it. Thank you for this hint. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at gmail.com Tue Nov 27 19:35:39 2012 From: dewanggaba at gmail.com (antituhan) Date: Tue, 27 Nov 2012 11:35:39 -0800 (PST) Subject: having lot of waiting connection will cause high CPU usage? In-Reply-To: References: Message-ID: <1354044938814-7582673.post@n2.nabble.com> Could you post 10 line of this command : Have you insert reset_timedout_connection directive to your conf? ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/having-lot-of-waiting-connection-will-cause-high-CPU-usage-tp7582492p7582673.html Sent from the nginx mailing list archive at Nabble.com. From chris+nginx at schug.net Tue Nov 27 20:32:12 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Tue, 27 Nov 2012 21:32:12 +0100 Subject: nginx-1.3.9 In-Reply-To: <20121127142648.GS40452@mdounin.ru> References: <20121127142648.GS40452@mdounin.ru> Message-ID: <072d3d41234850ce1c3f7674590e3ef6@schug.net> On 2012-11-27 15:26, Maxim Dounin wrote: > Changes with nginx 1.3.9 27 > Nov 2012 [...] Thanks :) Will there be an updated version of the SPDY patch as well? It looks like patch.spdy-53.txt just needs minor adjustments in src/http/ngx_http.h and src/http/ngx_http_request_body.c (at least it applies, compiles and runs in my case after doing so) but I am unsure if there is anything else behind the scenes which might break it sooner or later. 
Cheers -cs From pawel at cojestgrane.pl Tue Nov 27 20:58:42 2012 From: pawel at cojestgrane.pl (=?ISO-8859-2?Q?Pawe=B3_Marzec?=) Date: Tue, 27 Nov 2012 21:58:42 +0100 Subject: nginx-1.3.9 & nginx upload module 2.0.2 In-Reply-To: <20121127142648.GS40452@mdounin.ru> References: <20121127142648.GS40452@mdounin.ru> Message-ID: Compiling fresh 1.3.9 with nginx_upload_module, I unexpectedly hit objs -I src/http -I src/http/modules -I src/mail \ -o objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o \ ../nginx_upload_module-2.2.0/ngx_http_upload_module.c ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function 'ngx_http_read_upload_client_request_body': ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2628: error: 'ngx_http_request_body_t' has no member named 'to_write' ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2687: error: 'ngx_http_request_body_t' has no member named 'to_write' ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function 'ngx_http_do_read_upload_client_request_body': ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2769: error: 'ngx_http_request_body_t' has no member named 'to_write' ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2785: error: 'ngx_http_request_body_t' has no member named 'to_write' ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2877: error: 'ngx_http_request_body_t' has no member named 'to_write' make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] Error 1 make[1]: Leaving directory `/root/nginx-1.3.9' make: *** [build] Error 2 any ideas? Paweł Marzec On 2012-11-27, at 15:26, Maxim Dounin wrote: > Changes with nginx 1.3.9 27 > Nov 2012 > > *) Feature: support for chunked transfer encoding while reading > client > request body. > > *) Feature: the $request_time and $msec variables can now be used > not > only in the "log_format" directive. 
> > *) Bugfix: cache manager and cache loader processes might not be > able to > start if more than 512 listen sockets were used. > > *) Bugfix: in the ngx_http_dav_module. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Nov 27 21:56:19 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 01:56:19 +0400 Subject: nginx-1.3.9 & nginx upload module 2.0.2 In-Reply-To: References: <20121127142648.GS40452@mdounin.ru> Message-ID: <20121127215619.GC40452@mdounin.ru> Hello! On Tue, Nov 27, 2012 at 09:58:42PM +0100, Paweł Marzec wrote: > Unexpectedly compiling fresh 1.3.9 with nginx_upload_module I've met > > objs -I src/http -I src/http/modules -I src/mail \ > -o objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o > \ > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function > 'ngx_http_read_upload_client_request_body': > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2628: error: > 'ngx_http_request_body_t' has no member named 'to_write' > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2687: error: > 'ngx_http_request_body_t' has no member named 'to_write' > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c: In function > 'ngx_http_do_read_upload_client_request_body': > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2769: error: > 'ngx_http_request_body_t' has no member named 'to_write' > ../nginx_upload_module-2.2.0/ngx_http_upload_module.c:2785: error: > 'ngx_http_request_body_t' has no member named 'to_write' 
> make[1]: *** [objs/addon/nginx_upload_module-2.2.0/ngx_http_upload_module.o] > Error 1 > make[1]: Leaving directory `/root/nginx-1.3.9' > make: *** [build] Error 2 > > any ideas? Looks like upload module depends on request body reading code internal details, and it needs updating. -- Maxim Dounin http://nginx.com/support.html From vbart at nginx.com Tue Nov 27 22:32:46 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 28 Nov 2012 02:32:46 +0400 Subject: nginx-1.3.9 In-Reply-To: <072d3d41234850ce1c3f7674590e3ef6@schug.net> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> Message-ID: <201211280232.47028.vbart@nginx.com> On Wednesday 28 November 2012 00:32:12 Christoph Schug wrote: > On 2012-11-27 15:26, Maxim Dounin wrote: > > Changes with nginx 1.3.9 27 > > Nov 2012 > > [...] > > Thanks :) Will there be an updated version of the SPDY patch as well? > It looks like patch.spdy-53.txt just needs minor adjustments in > src/http/ngx_http.h and src/http/ngx_http_request_body.c (at least it > applies, compiles and runs in my case after doing so) but I am unsure if > there is anything else behind the scenes which might break it sooner or > later. > Don't worry, it's indeed minor adjustments while applying. It should work as well as with the previous nginx version. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From agentzh at gmail.com Tue Nov 27 23:43:47 2012 From: agentzh at gmail.com (agentzh) Date: Tue, 27 Nov 2012 15:43:47 -0800 Subject: nginx+lua reverse proxy empty body In-Reply-To: <50B495D8.8020106@spilgames.com> References: <50B495D8.8020106@spilgames.com> Message-ID: Hello! On Tue, Nov 27, 2012 at 2:28 AM, Bart van Deenen wrote: > The problem I have basically that the ngx.arg[1] is an empty string > (sometimes, timing dependent?) on url's that are definitely not empty. 
> It is normal that ngx.arg[1] is an empty string in the body filters when the upstream module generates "pure special bufs" like those with only the "last_buf" flag set (i.e., the eof flag set on the Lua land). It's normal that for a given response, the output body filter gets called multiple times, because that's exactly how streaming processing works in Nginx (you surely do not want to buffer all the data at once for huge responses). And the response body may be fed into your body filter in multiple data chunks. You should always be prepared for that in your Lua code. Please refer to the documentation for body_filter_by_lua for more information: http://wiki.nginx.org/HttpLuaModule#body_filter_by_lua BTW, doing a simple regex match in body filters may not always work as expected, because the nginx upstream module may split the response body into chunks in an arbitrary way (e.g., splitting in the middle of the word "Speel"). I've been working on the sregex C library, which will support streaming match just like Ragel: https://github.com/agentzh/sregex It's still in progress, but it'll soon be usable on the Lua land :) Best regards, -agentzh From sheng.zheng at etimestech.jp Wed Nov 28 04:54:45 2012 From: sheng.zheng at etimestech.jp (Sheng.Zheng) Date: Wed, 28 Nov 2012 13:54:45 +0900 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <201211280232.47028.vbart@nginx.com> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> Message-ID: <50B59915.1040609@etimestech.jp> Hi all After I replaced the 3rd-party h.264 streaming module with the native one, I found that there was no sound when playing some MP4 files with an MP3 audio codec. Is there any way to pseudo-stream MP4 files with the H.264/MP3 codec using the native mp4 module? Or is there any plan to support this type of MP4 file in the future? Thank you.
Sheng From igor at sysoev.ru Wed Nov 28 05:49:12 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 28 Nov 2012 09:49:12 +0400 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <50B59915.1040609@etimestech.jp> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> Message-ID: <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> On Nov 28, 2012, at 8:54 , Sheng.Zheng wrote: > Hi all > > After I replaced the 3rd-party h.264 streaming module with the native one, I found that there was no sound when playing some MP4 files with an MP3 audio codec. Is there any way to pseudo-stream MP4 files with the H.264/MP3 codec using the native mp4 module? Or is there any plan to support this type of MP4 file in the future? Could you try nginx version 1.3.5 or newer? Changes with nginx 1.3.5 21 Aug 2012 *) Change: the ngx_http_mp4_module module no longer skips tracks in formats other than H.264 and AAC. -- Igor Sysoev http://nginx.com/support.html From sheng.zheng at etimestech.jp Wed Nov 28 07:51:16 2012 From: sheng.zheng at etimestech.jp (Sheng.Zheng) Date: Wed, 28 Nov 2012 16:51:16 +0900 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> Message-ID: <50B5C274.2060703@etimestech.jp> Thanks Igor, I can pseudo-stream H.264/MP3 files after updating from 1.2.5 to 1.3.9. Do you plan to backport this change to the 1.2.x stable version, or do I have to wait for 1.4.x?
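Enabling the native module's pseudo-streaming takes one directive in the serving location; this requires a build configured with --with-http_mp4_module, and the path and buffer sizes below are illustrative:

```nginx
# Serve MP4 files with seek support via the native ngx_http_mp4_module.
location /video/ {
    mp4;
    mp4_buffer_size     1m;   # initial buffer for reading the moov atom
    mp4_max_buffer_size 5m;   # cap for files with large metadata
}
```

Players then request offsets with the start argument, e.g. /video/file.mp4?start=120.5, and nginx returns the file starting from that timestamp.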
Sheng On 2012/11/28 14:49, Igor Sysoev wrote: > On Nov 28, 2012, at 8:54 , Sheng.Zheng wrote: > >> Hi all >> >> After I replaced the 3rd-party h.264 streaming module with the native one, I found that there was no sound when playing some MP4 files with an MP3 audio codec. Is there any way to pseudo-stream MP4 files with the H.264/MP3 codec using the native mp4 module? Or is there any plan to support this type of MP4 file in the future? > > Could you try nginx version 1.3.5 or newer? > > Changes with nginx 1.3.5 21 Aug 2012 > > *) Change: the ngx_http_mp4_module module no longer skips tracks in > formats other than H.264 and AAC. > > > -- > Igor Sysoev > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From crirus at gmail.com Wed Nov 28 08:04:58 2012 From: crirus at gmail.com (Cristian Rusu) Date: Wed, 28 Nov 2012 10:04:58 +0200 Subject: Rewrite all directory URLs with certain exceptions Message-ID: Hello I have URLs like example.com/123 and I need them rewritten to example.com/?v=123 However, I want to skip certain directories from the rewrite, e.g. example.com/status example.com/admin Right now I have this: if (!-e $request_filename){ rewrite ^/([A-Za-z0-9-]+)/?$ http://www.example.com/v.php?dl=$1 redirect; } The problem is that all URLs are rewritten... how do I add some exceptions? Thanks for any suggestion --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From edho at myconan.net Wed Nov 28 08:06:12 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 28 Nov 2012 15:06:12 +0700 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: References: Message-ID: On Wed, Nov 28, 2012 at 3:04 PM, Cristian Rusu wrote: > Hello > > I have urls like example.com/123 > I need them rewritten to example.com/?v=123 > > However, I want to skip certain directories from rewrite > eg. > example.com/status > example.com/admin > > right now I have this: > > if (!-e $request_filename){ > rewrite ^/([A-Za-z0-9-]+)/?$ http://www.example.com/v.php?dl=$1 > redirect; > } > > Problem is that all urls are rewritten.. how do I put some exceptions? > put the rewrite in a location block (e.g. inside location / { }). From igor at sysoev.ru Wed Nov 28 08:10:32 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 28 Nov 2012 12:10:32 +0400 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <50B5C274.2060703@etimestech.jp> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> Message-ID: <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> On Nov 28, 2012, at 11:51 , Sheng.Zheng wrote: > Thanks Igor, I can pseudo-stream H.264/MP3 files after updating from 1.2.5 to 1.3.9. > Do you plan to backport this change to the 1.2.x stable version, or do I have to wait for 1.4.x? I cannot say whether it will be merged into 1.2.x, but 1.3.x is quite stable. Also you can use this patch for 1.2.x: http://trac.nginx.org/nginx/changeset/4821/nginx -- Igor Sysoev http://nginx.com/support.html > On 2012/11/28 14:49, Igor Sysoev wrote: >> On Nov 28, 2012, at 8:54 , Sheng.Zheng wrote: >> >>> Hi all >>> >>> After I replaced the 3rd-party h.264 streaming module with the native one, I found that there was no sound when playing some MP4 files with an MP3 audio codec.
Is there anyway to pseudo-streaming MP4 files with H.264/MP3 codec by using the native mp4 module.Or any plan to support this type of MP4 files in the future ? >> >> Could you try nginx version 1.3.5 or newer ? >> >> Changes with nginx 1.3.5 21 Aug 2012 >> >> *) Change: the ngx_http_mp4_module module no longer skips tracks in >> formats other than H.264 and AAC. From mdounin at mdounin.ru Wed Nov 28 08:30:35 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 12:30:35 +0400 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: References: Message-ID: <20121128083035.GD40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 10:04:58AM +0200, Cristian Rusu wrote: > Hello > > I have urls like example.com/123 > I need them rewritten to example.com/?v=123 > > However, I want to skip certain directories from rewrite > eg. > example.com/status > example.com/admin > > right now I have this: > > if (!-e $request_filename){ > rewrite ^/([A-Za-z0-9-]+)/?$ http://www.example.com/v.php?dl=$1redirect; > } > > Problem is that all urls are rewritten.. how do I put some exceptions? I would recommend using location matching to differentiate URIs which should be handled differently. E.g. location / { # you may want to use try_files here instead if (...) { rewrite ... } ... } location /status { ... } location /admin { ... } See http://nginx.org/r/location for more information. -- Maxim Dounin http://nginx.com/support.html From chris+nginx at schug.net Wed Nov 28 09:45:54 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Wed, 28 Nov 2012 10:45:54 +0100 Subject: nginx-1.3.9 In-Reply-To: <201211280232.47028.vbart@nginx.com> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> Message-ID: <539ab9e72a20faeecff29d377552fa1c@schug.net> On 2012-11-27 23:32, Valentin V. Bartenev wrote: > Don't worry, it's indeed minor adjustments while applying. 
> It should work as well as with the previous nginx version. Great, thanks for the confirmation. If anyone is interested, for convenience I applied the patch to nginx 1.3.9, resolved the rejects, and did a fresh unified diff. https://www.schug.net/dist/nginx/patch.spdy-53-1.3.9.txt So feel free to use it until patch.spdy-54.txt is released at http://nginx.org/patches/spdy/ -cs From citrin at citrin.ru Wed Nov 28 09:52:33 2012 From: citrin at citrin.ru (Anton Yuzhaninov) Date: Wed, 28 Nov 2012 13:52:33 +0400 Subject: Syslog-ng? In-Reply-To: References: <50B4BCA6.8030100@citrin.ru> Message-ID: <50B5DEE1.70508@citrin.ru> On 11/27/12 17:39, Rafał Radecki wrote: > Why is logging to fifo a bad idea? 1. If syslog-ng stops reading data from the fifo for some reason (or if syslog-ng is terminated), nginx will block on writing to the log and stop responding to requests. So it is unreliable. 2. Even when syslog-ng works, this adds unpredictable delays to nginx request processing whenever nginx writes the log faster than syslog-ng can read it at that moment. -- Anton Yuzhaninov From sheng.zheng at etimestech.jp Wed Nov 28 10:23:25 2012 From: sheng.zheng at etimestech.jp (Sheng.Zheng) Date: Wed, 28 Nov 2012 19:23:25 +0900 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> Message-ID: <50B5E61D.3050902@etimestech.jp> Thank you for the patch. I can pseudo-stream H.264/MP3 files on 1.2.5 now. But I just noticed that some H.264/MP3 files converted by FFmpeg still have no sound, even when testing on 1.3.9.
Sheng On 2012/11/28 17:10, Igor Sysoev wrote: > On Nov 28, 2012, at 11:51 , Sheng.Zheng wrote: > >> Thanks Igor,I can pseudo-streaming H.264/MP3 files after update from 1.2.5 to 1.3.9. >> Do you plan to back port this change to 1.2.x stable version or i have to wait for the 1.4.x. > > I can not say if will it be merged in 1.2.x, but 1.3.x is quite stable. > Also you can use this patch for 1.2.x: > http://trac.nginx.org/nginx/changeset/4821/nginx > From vbart at nginx.com Wed Nov 28 10:54:02 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 28 Nov 2012 14:54:02 +0400 Subject: nginx-1.3.9 In-Reply-To: <539ab9e72a20faeecff29d377552fa1c@schug.net> References: <20121127142648.GS40452@mdounin.ru> <201211280232.47028.vbart@nginx.com> <539ab9e72a20faeecff29d377552fa1c@schug.net> Message-ID: <201211281454.02955.vbart@nginx.com> On Wednesday 28 November 2012 13:45:54 Christoph Schug wrote: > On 2012-11-27 23:32, Valentin V. Bartenev wrote: > > Don't worry, it's indeed minor adjustments while applying. > > It should work as well as with the previous nginx version. > > Great, thanks for confirmation. If anyone is interested for reasons of > convenience, I applied the patch to nginx 1.3.9, resolved the rejects > and did a fresh unified diff. Rejects? Are you sure? Could you show me them? Because I see only some small offsets and two little fuzz: vbart at vbart-laptop nginx-1.3.9 % patch -p0 < patch.spdy.txt patching file src/http/ngx_http_spdy.h patching file src/http/ngx_http.h Hunk #1 succeeded at 24 (offset 1 line). Hunk #2 succeeded at 39 (offset 1 line). Hunk #3 succeeded at 94 (offset 8 lines). Hunk #4 succeeded at 113 with fuzz 1 (offset 10 lines). Hunk #5 succeeded at 121 (offset 10 lines). 
patching file src/http/ngx_http_parse.c patching file src/http/modules/ngx_http_ssl_module.c patching file src/http/modules/ngx_http_limit_req_module.c patching file src/http/ngx_http_spdy_module.c patching file src/http/ngx_http_request.c Hunk #10 succeeded at 1986 (offset 3 lines). Hunk #11 succeeded at 3138 (offset 3 lines). patching file src/http/ngx_http_spdy_filter_module.c patching file src/http/ngx_http_spdy_module.h patching file src/http/ngx_http_request.h Hunk #1 succeeded at 270 (offset 1 line). Hunk #2 succeeded at 423 (offset 3 lines). patching file src/http/ngx_http_core_module.c Hunk #1 succeeded at 2130 (offset 1 line). Hunk #2 succeeded at 4082 (offset 1 line). patching file src/http/ngx_http_upstream.c patching file src/http/ngx_http_core_module.h patching file src/http/ngx_http_request_body.c Hunk #1 succeeded at 41 with fuzz 2 (offset 2 lines). Hunk #2 succeeded at 476 (offset 5 lines). patching file src/http/ngx_http_spdy.c patching file src/http/ngx_http.c patching file auto/sources patching file auto/options patching file auto/modules No rejects. wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From zhuzhaoyuan at gmail.com Wed Nov 28 11:48:27 2012 From: zhuzhaoyuan at gmail.com (Joshua Zhu) Date: Wed, 28 Nov 2012 19:48:27 +0800 Subject: Random order of configuration file reading In-Reply-To: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On Tue, Nov 27, 2012 at 10:07 PM, philipp wrote: > Hello, > > we have a bunch of servers which are configured 100% equal expect of host > specific settings like ip address in the listener using chef/puppet. Nginx > seems to read include / config files not in the same order on each server. > > For example we haven't defined a default vhost on each server... 
so nginx > uses the first loaded file which is exampleA.com on server 1 and > exampleB.com on server 2. > > Furhtermore we use the upstream check status module, the status page is > randomly ordered at each server... > > Is it possible to configure nginx to read config in the alphabeticial > order? > > For example > > vhosts/exampleA.com (1.) > vhosts/exampleB.com (2.) > This is a known issue. It has been fixed in our own Tengine distribution: https://github.com/taobao/tengine/blob/master/src/os/unix/ngx_files.c#L362 If you want to use official nginx only, you can apply the patch below: @Maxim: Would you please consider to apply this patch to the trunk? Thanks in advance. Index: src/os/unix/ngx_files.c =================================================================== --- src/os/unix/ngx_files.c (revision 4942) +++ src/os/unix/ngx_files.c (working copy) @@ -363,7 +363,7 @@ { int n; - n = glob((char *) gl->pattern, GLOB_NOSORT, NULL, &gl->pglob); + n = glob((char *) gl->pattern, 0, NULL, &gl->pglob); if (n == 0) { return NGX_OK; Regards, -- Joshua Zhu Senior Software Engineer Server Platforms Team at Taobao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: include-sorted.patch Type: application/octet-stream Size: 395 bytes Desc: not available URL: From crirus at gmail.com Wed Nov 28 12:11:01 2012 From: crirus at gmail.com (Cristian Rusu) Date: Wed, 28 Nov 2012 14:11:01 +0200 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: <20121128083035.GD40452@mdounin.ru> References: <20121128083035.GD40452@mdounin.ru> Message-ID: Excellent, it worked I have another urgent matter on a server live we just switched to I need that links like this: http://www.example.com/?v=JYH253CT to be rewritten as http://www.example.com/v.php?dl=JYH253CT Please help, server is down :( --------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com On Wed, Nov 28, 2012 at 10:30 AM, Maxim Dounin wrote: > Hello! > > On Wed, Nov 28, 2012 at 10:04:58AM +0200, Cristian Rusu wrote: > > > Hello > > > > I have urls like example.com/123 > > I need them rewritten to example.com/?v=123 > > > > However, I want to skip certain directories from rewrite > > eg. > > example.com/status > > example.com/admin > > > > right now I have this: > > > > if (!-e $request_filename){ > > rewrite ^/([A-Za-z0-9-]+)/?$ > http://www.example.com/v.php?dl=$1redirect; > > } > > > > Problem is that all urls are rewritten.. how do I put some exceptions? > > I would recommend using location matching to differentiate URIs > which should be handled differently. E.g. > > location / { > # you may want to use try_files here instead > if (...) { > rewrite ... > } > ... > } > > location /status { > ... > } > > location /admin { > ... > } > > See http://nginx.org/r/location for more information. > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Wed Nov 28 12:14:16 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 16:14:16 +0400 Subject: nginx-1.3.9 In-Reply-To: <201211281454.02955.vbart@nginx.com> References: <20121127142648.GS40452@mdounin.ru> <201211280232.47028.vbart@nginx.com> <539ab9e72a20faeecff29d377552fa1c@schug.net> <201211281454.02955.vbart@nginx.com> Message-ID: <20121128121416.GF40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 02:54:02PM +0400, Valentin V. Bartenev wrote: > On Wednesday 28 November 2012 13:45:54 Christoph Schug wrote: > > On 2012-11-27 23:32, Valentin V. Bartenev wrote: > > > Don't worry, it's indeed minor adjustments while applying. > > > It should work as well as with the previous nginx version. > > > > Great, thanks for confirmation. If anyone is interested for reasons of > > convenience, I applied the patch to nginx 1.3.9, resolved the rejects > > and did a fresh unified diff. > > Rejects? Are you sure? Could you show me them? Because I see only some > small offsets and two little fuzz: > > vbart at vbart-laptop nginx-1.3.9 % patch -p0 < patch.spdy.txt > patching file src/http/ngx_http_spdy.h > patching file src/http/ngx_http.h > Hunk #1 succeeded at 24 (offset 1 line). > Hunk #2 succeeded at 39 (offset 1 line). > Hunk #3 succeeded at 94 (offset 8 lines). > Hunk #4 succeeded at 113 with fuzz 1 (offset 10 lines). > Hunk #5 succeeded at 121 (offset 10 lines). [...] > No rejects. What about just producing a new patch? It would be easier for anyone who use it. There is more than one patch(1) implementations floating around, and I wouldn't be surprised fuzz in one of them will be reject in another one. 
-- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Wed Nov 28 12:23:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 16:23:00 +0400 Subject: Random order of configuration file reading In-Reply-To: References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20121128122259.GG40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 07:48:27PM +0800, Joshua Zhu wrote: > Hi, > > On Tue, Nov 27, 2012 at 10:07 PM, philipp wrote: > > > Hello, > > > > we have a bunch of servers which are configured 100% equal expect of host > > specific settings like ip address in the listener using chef/puppet. Nginx > > seems to read include / config files not in the same order on each server. > > > > For example we haven't defined a default vhost on each server... so nginx > > uses the first loaded file which is exampleA.com on server 1 and > > exampleB.com on server 2. > > > > Furhtermore we use the upstream check status module, the status page is > > randomly ordered at each server... > > > > Is it possible to configure nginx to read config in the alphabeticial > > order? > > > > For example > > > > vhosts/exampleA.com (1.) > > vhosts/exampleB.com (2.) > > > > This is a known issue. It has been fixed in our own > Tengine > distribution: > https://github.com/taobao/tengine/blob/master/src/os/unix/ngx_files.c#L362 > > If you want to use official nginx only, you can apply the patch below: > > @Maxim: > Would you please consider to apply this patch to the trunk? Thanks in > advance. I've already proposed removing GLOB_NOSORT to Igor a while ago. His position on this is to keep this in sync with Windows version where there is no sort guaranties. [...] 
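Until the sorting question is settled, the include-order dependence for the default vhost can be sidestepped by declaring it explicitly instead of relying on which file happens to be read first; a sketch, with the address and return action purely illustrative:

```nginx
# Explicit catch-all vhost; wildcard include order then no longer matters.
server {
    listen 80 default_server;
    server_name _;
    return 444;   # close the connection for requests matching no other server
}
```

With this in place, each server included from vhosts/*.conf is selected strictly by its server_name, regardless of the order in which the files are read.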
-- Maxim Dounin http://nginx.com/support.html From edho at myconan.net Wed Nov 28 12:25:20 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 28 Nov 2012 19:25:20 +0700 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: References: <20121128083035.GD40452@mdounin.ru> Message-ID: On Wed, Nov 28, 2012 at 7:11 PM, Cristian Rusu wrote: > Excellent, it worked > > I have another urgent matter on a server live we just switched to > > I need that links like this: > http://www.example.com/?v=JYH253CT > > to be rewritten as http://www.example.com/v.php?dl=JYH253CT > > > Please help, server is down :( > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables Maybe something like this. location = / { if ($arg_v) { rewrite ^ /v.php?dl=$arg_v; } } From appa at perusio.net Wed Nov 28 12:27:04 2012 From: appa at perusio.net (Antonio P.P. Almeida) Date: Wed, 28 Nov 2012 13:27:04 +0100 Subject: Random order of configuration file reading In-Reply-To: <20121128122259.GG40452@mdounin.ru> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> Message-ID: <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> > Hello! > I've already proposed removing GLOB_NOSORT to Igor a while ago. > His position on this is to keep this in sync with Windows version > where there is no sort guaranties. > > [...] Can't we just pass that as a config option so that the configure script detects the build environment and if were on *NIX the sorting is enabled? Thanks, --appa From appa at perusio.net Wed Nov 28 12:28:49 2012 From: appa at perusio.net (Antonio P.P. 
Almeida) Date: Wed, 28 Nov 2012 13:28:49 +0100 Subject: Random order of configuration file reading In-Reply-To: <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> Message-ID: >> Hello! > >> I've already proposed removing GLOB_NOSORT to Igor a while ago. >> His position on this is to keep this in sync with Windows version >> where there is no sort guaranties. >> >> [...] > > Can't we just pass that as a config option so that the configure script > detects the build environment and if were on *NIX the sorting is enabled? s/were/we're/ --appa From crirus at gmail.com Wed Nov 28 12:33:04 2012 From: crirus at gmail.com (Cristian Rusu) Date: Wed, 28 Nov 2012 14:33:04 +0200 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: References: <20121128083035.GD40452@mdounin.ru> Message-ID: On Wed, Nov 28, 2012 at 2:25 PM, Edho Arief wrote: > On Wed, Nov 28, 2012 at 7:11 PM, Cristian Rusu wrote: > > Excellent, it worked > > > > I have another urgent matter on a server live we just switched to > > > > I need that links like this: > > http://www.example.com/?v=JYH253CT > > > > to be rewritten as http://www.example.com/v.php?dl=JYH253CT > > > > > > Please help, server is down :( > > > > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > > Maybe something like this. > > location = / { > if ($arg_v) { > rewrite ^ /v.php?dl=$arg_v; > } > } > I tried this from a htaccess to nginx converter if ($query_string ~ "^v=(.*)$"){ rewrite ^/index\.php$ /v.php?dl=$1 break; } But the resulting url is /v.php?dl=%1&v=FER34S beats me!! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmm at csdoc.com Wed Nov 28 12:55:04 2012 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 28 Nov 2012 14:55:04 +0200 Subject: Random order of configuration file reading In-Reply-To: <20121128122259.GG40452@mdounin.ru> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> Message-ID: <50B609A8.6010905@csdoc.com> On 28.11.2012 14:23, Maxim Dounin wrote: >> This is a known issue. It has been fixed in our own >> Tengine >> distribution: >> https://github.com/taobao/tengine/blob/master/src/os/unix/ngx_files.c#L362 >> >> If you want to use official nginx only, you can apply the patch below: >> >> @Maxim: >> Would you please consider to apply this patch to the trunk? Thanks in >> advance. > > I've already proposed removing GLOB_NOSORT to Igor a while ago. > His position on this is to keep this in sync with Windows version > where there is no sort guaranties. this just can be described in documentation as drawback of windows version of nginx, as many other "Known issues" and limitations at http://nginx.org/en/docs/windows.html because the UNIX version of nginx is the major and main stream. so the best possible capabilities must be present in UNIX version, not in windows one. and windows version must be compatible with UNIX version, if this is possible. now - capabilities of mainstream UNIX version artificially limited to be "compatible" with even not-production-ready windows version. this is very strange and unexpected and confusing for users of mainstream UNIX/Linux version of nginx (and this is more than 99% of all use cases of nginx) for example, windows 7 not support cache module and many other modules - but this is not reason for remove this feature from UNIX version of nginx. P.S. similar example: method java.io.File.renameTo (function "rename")works differently on Windows and UNIX, this is "Known issues" and can't be avoided in any case. 
btw, similar "rename bugs" and incompatibilities also must be presented also in windows/unix version of nginx http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4017593 java.io.File.renameTo has different semantics on Solaris and Win32 http://pubs.opengroup.org/onlinepubs/009695399/functions/rename.html rename -- Best regards, Gena From appa at perusio.net Wed Nov 28 12:58:43 2012 From: appa at perusio.net (Antonio P.P. Almeida) Date: Wed, 28 Nov 2012 13:58:43 +0100 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: References: <20121128083035.GD40452@mdounin.ru> Message-ID: <9d8d8c8b70a134334c4e5a517295a10b.squirrel@damiao.org> > On Wed, Nov 28, 2012 at 2:25 PM, Edho Arief wrote: > >> On Wed, Nov 28, 2012 at 7:11 PM, Cristian Rusu wrote: >> > Excellent, it worked >> > >> > I have another urgent matter on a server live we just switched to >> > >> > I need that links like this: >> > http://www.example.com/?v=JYH253CT >> > >> > to be rewritten as http://www.example.com/v.php?dl=JYH253CT >> > >> > >> > Please help, server is down :( >> > >> >> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables >> >> Maybe something like this. >> >> location = / { >> if ($arg_v) { >> rewrite ^ /v.php?dl=$arg_v; >> } >> } >> > > I tried this from a htaccess to nginx converter > > if ($query_string ~ "^v=(.*)$"){ > rewrite ^/index\.php$ /v.php?dl=$1 break; > } Try: if ($arg_v) { rewrite ^ /v.php?dl=$arg_v break; } --appa From crirus at gmail.com Wed Nov 28 13:03:41 2012 From: crirus at gmail.com (Cristian Rusu) Date: Wed, 28 Nov 2012 15:03:41 +0200 Subject: Rewrite all directory URLs with certain exceptions In-Reply-To: <9d8d8c8b70a134334c4e5a517295a10b.squirrel@damiao.org> References: <20121128083035.GD40452@mdounin.ru> <9d8d8c8b70a134334c4e5a517295a10b.squirrel@damiao.org> Message-ID: Yes, this worked, I was also trying to get an idea of that matching in my attempt... Why is $1 not matching right? 
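The stray %1 in the resulting URL points at the cause: %N back-references to RewriteCond captures are Apache mod_rewrite syntax that the htaccess converter carried over, and nginx passes %1 through as literal text; nginx also appends the original query string unless the rewrite replacement ends with a "?". A sketch of a working equivalent built on nginx's own $arg_ variables:

```nginx
location = / {
    # $arg_v holds the "v" query argument; the trailing "?" stops
    # nginx from appending the original query string (the "&v=..." part).
    if ($arg_v) {
        rewrite ^ /v.php?dl=$arg_v? break;
    }
}
```

With this, a request for /?v=JYH253CT is rewritten internally to /v.php?dl=JYH253CT, with no duplicated v argument.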
--------------------------------------------------------------- Cristian Rusu Web Developement & Electronic Publishing ====== Crilance.com Crilance.blogspot.com On Wed, Nov 28, 2012 at 2:58 PM, Antonio P.P. Almeida wrote: > > On Wed, Nov 28, 2012 at 2:25 PM, Edho Arief wrote: > > > >> On Wed, Nov 28, 2012 at 7:11 PM, Cristian Rusu > wrote: > >> > Excellent, it worked > >> > > >> > I have another urgent matter on a server live we just switched to > >> > > >> > I need that links like this: > >> > http://www.example.com/?v=JYH253CT > >> > > >> > to be rewritten as http://www.example.com/v.php?dl=JYH253CT > >> > > >> > > >> > Please help, server is down :( > >> > > >> > >> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables > >> > >> Maybe something like this. > >> > >> location = / { > >> if ($arg_v) { > >> rewrite ^ /v.php?dl=$arg_v; > >> } > >> } > >> > > > > I tried this from a htaccess to nginx converter > > > > if ($query_string ~ "^v=(.*)$"){ > > rewrite ^/index\.php$ /v.php?dl=$1 break; > > } > > Try: > > if ($arg_v) { > rewrite ^ /v.php?dl=$arg_v break; > } > > --appa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Nov 28 14:05:29 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 28 Nov 2012 18:05:29 +0400 Subject: nginx-1.3.9 In-Reply-To: <20121128121416.GF40452@mdounin.ru> References: <20121127142648.GS40452@mdounin.ru> <201211281454.02955.vbart@nginx.com> <20121128121416.GF40452@mdounin.ru> Message-ID: <201211281805.29564.vbart@nginx.com> On Wednesday 28 November 2012 16:14:16 Maxim Dounin wrote: > Hello! > > On Wed, Nov 28, 2012 at 02:54:02PM +0400, Valentin V. Bartenev wrote: > > On Wednesday 28 November 2012 13:45:54 Christoph Schug wrote: > > > On 2012-11-27 23:32, Valentin V. 
Bartenev wrote: > > > > Don't worry, it's indeed minor adjustments while applying. > > > > It should work as well as with the previous nginx version. > > > > > > Great, thanks for confirmation. If anyone is interested for reasons of > > > convenience, I applied the patch to nginx 1.3.9, resolved the rejects > > > and did a fresh unified diff. > > > > Rejects? Are you sure? Could you show me them? Because I see only some > > small offsets and two little fuzz: > > > > vbart at vbart-laptop nginx-1.3.9 % patch -p0 < patch.spdy.txt > > patching file src/http/ngx_http_spdy.h > > patching file src/http/ngx_http.h > > Hunk #1 succeeded at 24 (offset 1 line). > > Hunk #2 succeeded at 39 (offset 1 line). > > Hunk #3 succeeded at 94 (offset 8 lines). > > Hunk #4 succeeded at 113 with fuzz 1 (offset 10 lines). > > Hunk #5 succeeded at 121 (offset 10 lines). > > [...] > > > No rejects. > > What about just producing a new patch? It would be easier for > anyone who use it. > > There is more than one patch(1) implementations floating around, > and I wouldn't be surprised fuzz in one of them will be reject in > another one. Ok, you're right, I haven't been thought that this time the problem is so serious. The patch was updated: http://nginx.org/patches/spdy/patch.spdy-54.txt wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From mdounin at mdounin.ru Wed Nov 28 14:15:50 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 18:15:50 +0400 Subject: Random order of configuration file reading In-Reply-To: <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> Message-ID: <20121128141550.GI40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 01:27:04PM +0100, Antonio P.P. Almeida wrote: > > Hello! 
> > > I've already proposed removing GLOB_NOSORT to Igor a while ago. > > His position on this is to keep this in sync with the Windows version, > > where there are no sort guarantees. > > > > [...] > > Can't we just pass that as a config option so that the configure script > detects the build environment and if we're on *NIX the sorting is enabled? The codepath in question is unix-only; the question is about user experience, which will be different on unix and win32 with GLOB_NOSORT removed on unix. Right now one can't rely on wildcard include ordering, and this is consistent for all platforms supported. And "listen ... default" should be used to mark the default server if one uses a wildcard include to include multiple server blocks listening on the same ip:port. With GLOB_NOSORT removed the behaviour will be different on unix (included files will be sorted) and win32 (included files are not guaranteed to be sorted), which is considered bad. (I personally think that GLOB_NOSORT should be removed anyway. I'll talk to Igor again about this.) -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Wed Nov 28 14:30:14 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 28 Nov 2012 18:30:14 +0400 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <50B5E61D.3050902@etimestech.jp> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> <50B5E61D.3050902@etimestech.jp> Message-ID: <20121128143014.GJ40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 07:23:25PM +0900, Sheng.Zheng wrote: > Thank you for the patch. I can pseudo-stream H.264/MP3 files on > 1.2.5 now. But I just noticed that some H.264/MP3 files > converted by FFmpeg still have no sound, even when testing on 1.3.9.
I can send > you a sample file if you need it. Yes, please provide links to a couple of sample files which have problems with streaming. -- Maxim Dounin http://nginx.com/support.html From valeriano.cossu at gmail.com Wed Nov 28 14:54:58 2012 From: valeriano.cossu at gmail.com (Valeriano Cossu) Date: Wed, 28 Nov 2012 09:54:58 -0500 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <50B59915.1040609@etimestech.jp> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> Message-ID: Hello Sheng, I am not an expert on nginx, but isn't the streaming of "something" (mp3, h.264, etc.) independent of the content/codec? How to decode the stream is a client/consumer aspect, or no? On 11/27/12, Sheng.Zheng wrote: > Hi all > > After I replaced the 3rd party h.264 streaming module with the native > one, I found that there was no sound when playing some MP4 files with > MP3 audio codec. Is there any way to pseudo-stream MP4 files with > H.264/MP3 codec by using the native mp4 module? Or is there any plan to support > this type of MP4 files in the future? > > Thank you.
> > Sheng > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Cordiali saluti, Valeriano Cossu Regards, Valeriano Cossu cell: (+39) 3462187419 skype: valerianocossu From edho at myconan.net Wed Nov 28 14:57:40 2012 From: edho at myconan.net (Edho Arief) Date: Wed, 28 Nov 2012 21:57:40 +0700 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> Message-ID: On Wed, Nov 28, 2012 at 9:54 PM, Valeriano Cossu wrote: > Hello Sheng, > > I am not an expert on nginx, but isn't the streaming of "something" (mp3, > h.264, etc.) independent of the content/codec? How to > decode the stream is a client/consumer > aspect, or no? > I believe it's to support the start= parameter. From Bart.vanDeenen at spilgames.com Wed Nov 28 15:28:53 2012 From: Bart.vanDeenen at spilgames.com (Bart van Deenen) Date: Wed, 28 Nov 2012 15:28:53 +0000 Subject: nginx+lua reverse proxy empty body In-Reply-To: References: <50B495D8.8020106@spilgames.com>, Message-ID: Hi Agentz But wouldn't the statement client_body_in_single_buffer on; cause the whole body of the proxied server to go into ngx.arg[1] ? And I also don't understand why my example code shouldn't work reliably, even if the proxied data is passed through it in chunks (unless the chunk boundary would accidentally be right in the middle of my short match string). I've done a very similar setup proxying and modification of a simple website (vandeenensupport.com), and that works perfectly. I have also noticed that when I add a 'print(ngx.arg[1])' in the first line of the lua section of my example, the html replacement works reliably, no more empty ngx.arg[1]! But that print only goes into the nginx logging, so maybe it's only its timing that has some effect?
So I still don't understand it. Thanks for all your good work on nginx. Bart ________________________________________ From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of agentzh [agentzh at gmail.com] Sent: Wednesday, November 28, 2012 12:43 AM To: nginx at nginx.org Subject: Re: nginx+lua reverse proxy empty body Hello! On Tue, Nov 27, 2012 at 2:28 AM, Bart van Deenen wrote: > The problem I have is basically that ngx.arg[1] is an empty string > (sometimes, timing dependent?) on URLs that are definitely not empty. > It is normal that ngx.arg[1] is an empty string in the body filters when the upstream module generates "pure special bufs" like those with only the "last_buf" flag set (i.e., the eof flag set on the Lua land). It's normal that for a given response, the output body filter gets called multiple times because that's exactly how streaming processing works in Nginx (you surely do not want to buffer all the data at once for huge responses). And the response body may be fed into your body filter in multiple data chunks. You should always be prepared for that in your Lua code. Please refer to the documentation for body_filter_by_lua for more information: http://wiki.nginx.org/HttpLuaModule#body_filter_by_lua BTW, doing a simple regex match in body filters may not always work as expected because the nginx upstream module may split the response body into chunks in an arbitrary way (e.g., splitting in the middle of the word "Speel", for example).
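[Editor's note: one common way to cope with that chunking is to buffer the chunks in ngx.ctx and rewrite the body only when the last chunk arrives, signalled by ngx.arg[2]. The following is a sketch only, not code from this thread; the upstream name is illustrative, and buffering the whole response trades away streaming.]

```nginx
location / {
    proxy_pass http://backend;  # illustrative upstream

    # The rewritten body may differ in size, so drop the upstream
    # Content-Length before the body is emitted.
    header_filter_by_lua 'ngx.header.content_length = nil';

    body_filter_by_lua '
        local ctx = ngx.ctx
        ctx.buf = (ctx.buf or "") .. ngx.arg[1]
        if ngx.arg[2] then
            -- eof: the full body is collected, so a match can no
            -- longer straddle a chunk boundary
            ngx.arg[1] = string.gsub(ctx.buf, "Speel", "NGINX")
        else
            -- swallow intermediate chunks for now
            ngx.arg[1] = nil
        end
    ';
}
```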
I've been working on the sregex C library that will support streaming match just like Ragel: https://github.com/agentzh/sregex It's still in progress, but it'll soon be usable on the Lua land :) Best regards, -agentzh _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From howachen at gmail.com Wed Nov 28 16:08:12 2012 From: howachen at gmail.com (howard chen) Date: Thu, 29 Nov 2012 00:08:12 +0800 Subject: having lot of waiting connection will cause high CPU usage? In-Reply-To: <1354044938814-7582673.post@n2.nabble.com> References: <1354044938814-7582673.post@n2.nabble.com> Message-ID: On Wed, Nov 28, 2012 at 3:35 AM, antituhan wrote: > Could you post 10 lines of this command : > > > Which command? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at gmail.com Wed Nov 28 16:12:57 2012 From: dewanggaba at gmail.com (antituhan) Date: Wed, 28 Nov 2012 08:12:57 -0800 (PST) Subject: having lot of waiting connection will cause high CPU usage? In-Reply-To: References: <1354044938814-7582673.post@n2.nabble.com> Message-ID: <1354119177779-7582708.post@n2.nabble.com> Type *vmstat 1* ----- [daemon at antituhan.com ~]# -- View this message in context: http://nginx.2469901.n2.nabble.com/having-lot-of-waiting-connection-will-cause-high-CPU-usage-tp7582492p7582708.html Sent from the nginx mailing list archive at Nabble.com. From chris+nginx at schug.net Wed Nov 28 18:00:00 2012 From: chris+nginx at schug.net (Christoph Schug) Date: Wed, 28 Nov 2012 19:00:00 +0100 Subject: nginx-1.3.9 In-Reply-To: <201211281805.29564.vbart@nginx.com> References: <20121127142648.GS40452@mdounin.ru> <201211281454.02955.vbart@nginx.com> <20121128121416.GF40452@mdounin.ru> <201211281805.29564.vbart@nginx.com> Message-ID: <7e7d69bfa5e4b351840a161549566ca8@schug.net> On 2012-11-28 15:05, Valentin V.
Bartenev wrote: > On Wednesday 28 November 2012 16:14:16 Maxim Dounin wrote: [...] >> There is more than one patch(1) implementation floating around, >> and I wouldn't be surprised if fuzz in one of them turned into a reject in >> another one. > > Ok, you're right, I hadn't realized that this time the problem > was so serious. > > The patch was updated: > http://nginx.org/patches/spdy/patch.spdy-54.txt > > wbr, Valentin V. Bartenev Sorry for the noise; being a lazy admin, I was a victim of my own automation. I gave up compiling software manually a long time ago and rather roll my own RPMs for everything. It was my oversight that RPM invokes patch(1) with the argument "--fuzz=0" by default (as defined in the stock RPM macros of CentOS 6). Will keep an eye on that in the future. The new patch.spdy-54.txt applies cleanly. Maxim, thanks for the pointer, and thanks for the patch, Valentin :-) Cheers -cs From kworthington at gmail.com Wed Nov 28 21:15:19 2012 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 28 Nov 2012 16:15:19 -0500 Subject: [nginx-announce] nginx-1.3.9 In-Reply-To: <20121127142655.GT40452@mdounin.ru> References: <20121127142655.GT40452@mdounin.ru> Message-ID: Hello Nginx Users, Now available: Nginx 1.3.9 For Windows http://goo.gl/a6UH8 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Tue, Nov 27, 2012 at 9:26 AM, Maxim Dounin wrote: > Changes with nginx 1.3.9 27 Nov > 2012 > > *) Feature: support for chunked transfer encoding while reading client > request body.
> > *) Feature: the $request_time and $msec variables can now be used not > only in the "log_format" directive. > > *) Bugfix: cache manager and cache loader processes might not be able > to > start if more than 512 listen sockets were used. > > *) Bugfix: in the ngx_http_dav_module. > > > -- > Maxim Dounin > http://nginx.com/support.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccmbrulak at gmail.com Wed Nov 28 22:27:44 2012 From: ccmbrulak at gmail.com (Chris Brulak) Date: Wed, 28 Nov 2012 15:27:44 -0700 Subject: Error compiling 1.3.8 (or .4 or .7) with SPDY patch Message-ID: <8CB2581792D04B87ACC90F41AC57F3CB@gmail.com> On ubuntu 10.04. (AWS AMI) Background: I'm trying to set up a rails server to host nginx + SPDY on an AWS AMI with Ubuntu 10.04. After patching the source tree with the SPDY patch I keep getting this error: src/http/ngx_http_request_body.c: In function 'ngx_http_read_client_request_body': src/http/ngx_http_request_body.c:51: error: 'rc' undeclared (first use in this function) src/http/ngx_http_request_body.c:51: error: (Each undeclared identifier is reported only once src/http/ngx_http_request_body.c:51: error: for each function it appears in.) src/http/ngx_http_request_body.c:52: error: label 'done' used but not defined not sure what else to post, so below are some other things that I thought might be useful. I'm confused because it seems like so many others aren't running into compile issues, so it makes me think that maybe I'm missing something. So any help is greatly appreciated. I'm also passing in the path to OpenSSL 1.0.1 (see the configure command below). Thanks Chris Gcc: Using built-in specs.
Target: i686-apple-darwin11 Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.11~67/src/configure --disable-checking --enable-werror --prefix=/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.11~67/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1 Thread model: posix gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) nginx version can be 1.3.4 or 1.3.7 or 1.3.8 configure command: ./configure --with-cc-opt=-Wno-error --prefix=/opt/nginx --user=nginx --group=nginx --with-http_ssl_module --with-openssl=/usr/local/build/openssl-1.0.1c -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Nov 29 01:32:17 2012 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 29 Nov 2012 05:32:17 +0400 Subject: Error compiling 1.3.8 (or .4 or .7) with SPDY patch In-Reply-To: <8CB2581792D04B87ACC90F41AC57F3CB@gmail.com> References: <8CB2581792D04B87ACC90F41AC57F3CB@gmail.com> Message-ID: <201211290532.17365.vbart@nginx.com> On Thursday 29 November 2012 02:27:44 Chris Brulak wrote: > On ubuntu 10.04. (AWS AMI) > Background: > I'm trying to setup a rails server to host nginx + SPDY on an AWS AMI with > Ubuntu 10.04. > > After patching the source tree with the SPDY patch I keep getting this > error: > > src/http/ngx_http_request_body.c: In function > 'ngx_http_read_client_request_body': src/http/ngx_http_request_body.c:51: > error: 'rc'
undeclared (first use in this function) > src/http/ngx_http_request_body.c:51: error: (Each undeclared identifier is > reported only once src/http/ngx_http_request_body.c:51: error: for each > function it appears in.) src/http/ngx_http_request_body.c:52: error: label > 'done' used but not defined [...] > nginx version can be 1.3.4 or 1.3.7 or 1.3.8 [...] The latest patch is only suitable for 1.3.9. If you want to compile 1.3.8 with SPDY, then you should use http://nginx.org/patches/spdy/patch.spdy-53.txt wbr, Valentin V. Bartenev -- http://nginx.com/support.html http://nginx.org/en/donation.html From sheng.zheng at etimestech.jp Thu Nov 29 02:27:40 2012 From: sheng.zheng at etimestech.jp (Sheng.Zheng) Date: Thu, 29 Nov 2012 11:27:40 +0900 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <20121128143014.GJ40452@mdounin.ru> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> <50B5E61D.3050902@etimestech.jp> <20121128143014.GJ40452@mdounin.ru> Message-ID: <50B6C81C.5000800@etimestech.jp> Hi, Maxim Here is the sample file. http://dl.dropbox.com/u/8837018/180379674-ff-fs.mp4 Thank you. Sheng On 2012/11/28 23:30, Maxim Dounin wrote: > Hello! > > On Wed, Nov 28, 2012 at 07:23:25PM +0900, Sheng.Zheng wrote: > >> Thank you for the patch. I can pseudo-stream H.264/MP3 files on >> 1.2.5 now. But I just noticed that some H.264/MP3 files >> converted by FFmpeg still have no sound, even when testing on 1.3.9. I can send >> you a sample file if you need it. > > Yes, please provide links to a couple of sample files which > have problems with streaming.
> From ccmbrulak at gmail.com Thu Nov 29 02:46:01 2012 From: ccmbrulak at gmail.com (Chris Brulak) Date: Wed, 28 Nov 2012 19:46:01 -0700 Subject: Error compiling 1.3.8 (or .4 or .7) with SPDY patch In-Reply-To: <201211290532.17365.vbart@nginx.com> References: <8CB2581792D04B87ACC90F41AC57F3CB@gmail.com> <201211290532.17365.vbart@nginx.com> Message-ID: <17A2176F61F34DF794CCC2FA7D54D0A0@gmail.com> ok. I'll try that right now. Thanks for the quick reply. On Wednesday, 28 November, 2012 at 6:32 PM, Valentin V. Bartenev wrote: > On Thursday 29 November 2012 02:27:44 Chris Brulak wrote: > > On ubuntu 10.04. (AWS AMI) > > Background: > > I'm trying to setup a rails server to host nginx + SPDY on an AWS AMI with > > Ubuntu 10.04. > > > > After patching the source tree with the SPDY path I keep getting this > > error: > > > > src/http/ngx_http_request_body.c: In function > > ?ngx_http_read_client_request_body?: src/http/ngx_http_request_body.c:51: > > error: ?rc? undeclared (first use in this function) > > src/http/ngx_http_request_body.c:51: error: (Each undeclared identifier is > > reported only once src/http/ngx_http_request_body.c:51: error: for each > > function it appears in.) src/http/ngx_http_request_body.c:52: error: label > > ?done? used but not defined > > > > [...] > > nginx version can be 1.3.4 or 1.3.7 or 1.3.8 > > [...] > > The latest patch only suitable for 1.3.9. If you want to compile 1.3.8 with > SPDY, then you should use http://nginx.org/patches/spdy/patch.spdy-53.txt > > wbr, Valentin V. Bartenev > > -- > http://nginx.com/support.html > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org (mailto:nginx at nginx.org) > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Bart.vanDeenen at spilgames.com Thu Nov 29 09:51:09 2012 From: Bart.vanDeenen at spilgames.com (Bart van Deenen) Date: Thu, 29 Nov 2012 09:51:09 +0000 Subject: nginx+lua reverse proxy empty body (weird repeatable behavior) Message-ID: Hi all It seems the body_filter_by_lua section is bypassed somehow, it just doesn't receive the proxied data. I have this simplified testcase which at least for my setup is completely repeatable. See below. In short, I do a curl call to localhost:9001, which gets the proxied server www.spelletjes.nl, and I correctly receive the index file including the "Speel" string that I'm trying to replace. But the lua section only says: "empty string". Once! So how did the server data get through to my curl call? Using openresty 1.2.4.7 (nginx 1.2.4 + ngx_lua-0.7.4 and more). The nginx.conf shown below is all there is, I am not using anything else. I could really use some help here. Thanks Bart van Deenen nginx.conf: worker_processes 1; error_log logs/error.log debug; events { worker_connections 1024; } http { server { client_body_in_single_buffer on; listen 9001; location / { proxy_pass http://www.spelletjes.nl:80; proxy_set_header X-Real-IP $remote_addr; body_filter_by_lua ' if ngx.arg[1] ~= "" then print ( "The body is " .. ngx.arg[1]) ngx.arg[1] = string.gsub(ngx.arg[1], "Speel", "NGINX") else print(ngx.var.uri .. " has empty body" .. ngx.arg[1]) end '; } } } curl call: curl -v -o spelletjes.html http://localhost:9001/ * About to connect() to localhost port 9001 (#0) * Trying 127.0.0.1... 
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* connected * Connected to localhost (127.0.0.1) port 9001 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.27.0 > Host: localhost:9001 > Accept: */* > * additional stuff not fine transfer.c:1037: 0 0 * HTTP 1.1 or later with persistent connection, pipelining supported < HTTP/1.1 200 OK < Server: ngx_openresty/1.2.4.9 < Date: Thu, 29 Nov 2012 09:37:35 GMT < Content-Type: text/html; charset=UTF-8 < Content-Length: 149577 < Connection: keep-alive < RTSS: 1-1297-2 < Vary: Accept-Encoding < Cache-Control: max-age=3600 < P3P: CP="IDC DSP COR CURa ADMa OUR IND PHY ONL COM STA" < P3P: policyref="http://www.spelletjes.nl/w3c/p3p.xml", CP="DSP IDC CUR ADM PSA PSDi OTPi DELi STP NAV COM UNI INT PHY DEM " < X-Id: 066 < Expires: Thu, 29 Nov 2012 10:29:59 GMT < Last-Modified: Thu, 29 Nov 2012 09:29:59 GMT < { [data not shown] 100 146k 100 146k 0 0 3163k 0 --:--:-- --:--:-- --:--:-- 3246k * Connection #0 to host localhost left intact * Closing connection #0 error.log: 1 2012/11/29 10:37:29 [notice] 4013#0: using the "epoll" event method 2 2012/11/29 10:37:29 [notice] 4013#0: start worker processes 3 2012/11/29 10:37:29 [notice] 4013#0: start worker process 4218 4 2012/11/29 10:37:29 [notice] 4013#0: signal 17 (SIGCHLD) received 5 2012/11/29 10:37:29 [notice] 4013#0: worker process 4215 exited with code 0 6 2012/11/29 10:37:29 [notice] 4013#0: signal 29 (SIGIO) received 7 2012/11/29 10:37:35 [notice] 4218#0: *65 [lua] variations.lua:67: / has empty body while sending to client, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://212.72.60.182:80/", host: "localhost: 9001" 8 2012/11/29 10:37:35 [info] 4218#0: *65 client 127.0.0.1 closed keepalive connection (104: Connection reset by peer) And the output of the curl call (the local file spelletjes.html) does contain the correct html (from the proxied server, including the 
string 'Speel' that I'm trying to match on). From Bart.vanDeenen at spilgames.com Thu Nov 29 10:28:42 2012 From: Bart.vanDeenen at spilgames.com (Bart van Deenen) Date: Thu, 29 Nov 2012 10:28:42 +0000 Subject: nginx+lua reverse proxy empty body (NOT weird repeatable behavior) Message-ID: Hi all I would love to retract the last post. The curl call does work (I used another nginx config file than the one I thought I was using). :-( So now I feel dumb. I still have the same problem when I use a browser (with cache disabled), and I'm now trying to use wireshark to see what is actually happening. Thanks for nginx and all the good work Bart From Bart.vanDeenen at spilgames.com Thu Nov 29 10:37:53 2012 From: Bart.vanDeenen at spilgames.com (Bart van Deenen) Date: Thu, 29 Nov 2012 10:37:53 +0000 Subject: does body_filter_by_lua get called when proxied server returns gzipped data? Message-ID: Hi I posted the recent thread on problems with body_filter_by_lua, and I think it finally has to do with the fact that my browser allows gzipped data, and the server in my example returns gzipped data. I only found out using wireshark. Could it be that the gzipped data don't even reach the lua code? Thanks Bart -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bart.vanDeenen at spilgames.com Thu Nov 29 11:14:10 2012 From: Bart.vanDeenen at spilgames.com (Bart van Deenen) Date: Thu, 29 Nov 2012 11:14:10 +0000 Subject: Solved! body_filter_by_lua issues Message-ID: Hi all I got my modifying reverse proxy working by adding proxy_set_header Accept-Encoding ''; before my body_filter_by_lua code block. This way I force the proxy to give me uncompressed data, and then the lua code works perfectly. I'll add a gzip on; at the end, to pass the data to the browser compressed. 
Thanks Bart ________________________________ From: nginx-bounces at nginx.org [nginx-bounces at nginx.org] on behalf of Bart van Deenen [Bart.vanDeenen at spilgames.com] Sent: Thursday, November 29, 2012 11:37 AM To: nginx at nginx.org Subject: does body_filter_by_lua get called when proxied server returns gzipped data? Hi I posted the recent thread on problems with body_filter_by_lua, and I think it finally has to do with the fact that my browser allows gzipped data, and the server in my example returns gzipped data. I only found out using wireshark. Could it be that the gzipped data don't even reach the lua code? Thanks Bart -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Nov 29 12:05:26 2012 From: nginx-forum at nginx.us (goelvivek) Date: Thu, 29 Nov 2012 07:05:26 -0500 Subject: Nginx Filed Descriptor are increasing in n*n space complexity Message-ID: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> HI, If I increase the nginx worker count, the file descriptor count increases in n*n order. Example with 10 workers. 2 0xffff8801c4dec000 10 0xffff8801c4dec300 10 0xffff8801c4dec600 2 0xffff8801c4dec900 10 0xffff8801c4decc00 10 0xffff8801c4decf00 2 0xffff8801c4ded200 2 0xffff8801c4ded500 10 0xffff8801c4ded800 2 0xffff8801c4dedb00 10 0xffff8801c4dee100 2 0xffff8801c4dee400 10 0xffff8801c4dee700 2 0xffff8801c4deea00 10 0xffff8801c4deed00 2 0xffff8801c4def000 10 0xffff8801c4def300 2 0xffff8801c4def600 10 0xffff8801c4def900 2 0xffff8801c4defc00 Is there a way I can decrease this count ?
I Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233374,233374#msg-233374 From mdounin at mdounin.ru Thu Nov 29 12:12:53 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 29 Nov 2012 16:12:53 +0400 Subject: pseudo-streaming support for H.264/MP3 In-Reply-To: <50B6C81C.5000800@etimestech.jp> References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> <50B5E61D.3050902@etimestech.jp> <20121128143014.GJ40452@mdounin.ru> <50B6C81C.5000800@etimestech.jp> Message-ID: <20121129121253.GL40452@mdounin.ru> Hello! On Thu, Nov 29, 2012 at 11:27:40AM +0900, Sheng.Zheng wrote: > Hi,Maxim > Here is the sample file. > http://dl.dropbox.com/u/8837018/180379674-ff-fs.mp4 This file plays without sound in flash player here even without nginx pseudostreaming enabled, so I would suppose it's not an nginx fault. (When playing with mplayer and/or Chrome directly I can hear sound on both unmodified file and with pseudostreaming, i.e. with "?start=228.915" added manually.) > > Thank you. > > Sheng > > > On 2012/11/28 23:30, Maxim Dounin wrote: > >Hello! > > > >On Wed, Nov 28, 2012 at 07:23:25PM +0900, Sheng.Zheng wrote: > > > >>Thank you for the patch. I can pseudo-streaming H.264/MP3 files on > >>1.2.5 now. But I just noticed that still some of H.264/MP3 files > >>converted by FFmpeg has no sound even testing on 1.3.9. I can send > >>you a sample file if you need it. > > > >Yes, please provide links to a couple of sample files which > >has problems with streaming. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.com/support.html From nginx-forum at nginx.us Thu Nov 29 12:22:37 2012 From: nginx-forum at nginx.us (goelvivek) Date: Thu, 29 Nov 2012 07:22:37 -0500 Subject: Nginx Filed Descriptor are increasing in n*n space complexity In-Reply-To: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> References: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0e8ea5488f30ed29457d004b303a83ee.NginxMailingListEnglish@forum.nginx.org> First is the count, second is the DEVICE Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233374,233376#msg-233376 From nginx-forum at nginx.us Thu Nov 29 12:36:48 2012 From: nginx-forum at nginx.us (goelvivek) Date: Thu, 29 Nov 2012 07:36:48 -0500 Subject: Nginx File Descriptor are increasing in n*n space complexity In-Reply-To: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> References: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <68afdc078e173399f017be14eeb84645.NginxMailingListEnglish@forum.nginx.org> Sorry, wrong title. It should be: Nginx File Descriptor are increasing in n*n space complexity Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233374,233377#msg-233377 From igor at sysoev.ru Thu Nov 29 13:09:00 2012 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 29 Nov 2012 17:09:00 +0400 Subject: Nginx Filed Descriptor are increasing in n*n space complexity In-Reply-To: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> References: <1f3784db8115db7e27f9c252a78080c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6E8B31A7-3628-41C1-8B9A-53AB6729D435@sysoev.ru> On Nov 29, 2012, at 16:05 , goelvivek wrote: > HI, > If I increase the nginx worker count, the file descriptor count increases > in n*n order.
> Example with 10 workers. > 2 0xffff8801c4dec000 > 10 0xffff8801c4dec300 > 10 0xffff8801c4dec600 > 2 0xffff8801c4dec900 > 10 0xffff8801c4decc00 > 10 0xffff8801c4decf00 > 2 0xffff8801c4ded200 > 2 0xffff8801c4ded500 > 10 0xffff8801c4ded800 > 2 0xffff8801c4dedb00 > 10 0xffff8801c4dee100 > 2 0xffff8801c4dee400 > 10 0xffff8801c4dee700 > 2 0xffff8801c4deea00 > 10 0xffff8801c4deed00 > 2 0xffff8801c4def000 > 10 0xffff8801c4def300 > 2 0xffff8801c4def600 > 10 0xffff8801c4def900 > 2 0xffff8801c4defc00 > > > Is there a way I can decrease this count ? These are unix socket pairs intended to communicate between worker processes. The actual number of sockets is N, not NxN, because they are shared. Socket file descriptors are indeed NxN, but a file descriptor is a cheaper kernel object than a socket or inode/vnode object. -- Igor Sysoev http://nginx.com/support.html From nginx-forum at nginx.us Thu Nov 29 19:14:44 2012 From: nginx-forum at nginx.us (Broham) Date: Thu, 29 Nov 2012 14:14:44 -0500 Subject: Cookie not created until refresh? Message-ID: <6cc49ad12bb1028e1b3f88d266bdfd41.NginxMailingListEnglish@forum.nginx.org> I have the following nginx conf file (shortened for the purpose of this question) that is creating a cookie: map $http_referer $setCookie { default ""; ~*somewebsite\.com "referrer=bl;Domain=.mysite.com;Max-Age=31536000"; } server{ listen 80; server_name mysite.com www.mysite.com dev.mysite.com; root /var/www/mysite.com; access_log /var/log/nginx/mysite.com.log spiegle; add_header Set-Cookie $setCookie; location /{ #add_header Set-Cookie $setCookie; } } The issue is that if I click on a link on `somewebsite.com` it navigates to mysite, but does not create the cookie. If I then refresh the page it will create the cookie. Why do I have to refresh to have the cookie created and how can I get around this?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233395,233395#msg-233395 From aweber at comcast.net Thu Nov 29 20:51:13 2012 From: aweber at comcast.net (AJ Weber) Date: Thu, 29 Nov 2012 15:51:13 -0500 Subject: GeoIP Country Filtering Message-ID: <50B7CAC1.3010903@comcast.net> Does anyone have any "best practices" on filtering using the GeoIP variables? I'd like to filter my entire site based upon "allowed countries" (it's a very specialized site), and am wondering the best place to check country-code so that it's efficient (performance wise), and applies to the entire http or server block. Any tips and tricks are very much appreciated! -AJ From nginx-forum at nginx.us Thu Nov 29 21:01:37 2012 From: nginx-forum at nginx.us (coral) Date: Thu, 29 Nov 2012 16:01:37 -0500 Subject: nginx listen to $hostname doesn't work, is it supported? Message-ID: n my nginx conf file, I have : listen 80; server_name $hostname; if I do netstat I see that it is listening on 0.0.0.0:80 , is there any workaround in nginx.conf , to make it listen to $hostname:80 ? where $hostname is coming from system ? I tried multiple combinations with no success so far. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233398,233398#msg-233398 From mdounin at mdounin.ru Thu Nov 29 23:25:00 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Nov 2012 03:25:00 +0400 Subject: Random order of configuration file reading In-Reply-To: <20121128141550.GI40452@mdounin.ru> References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> <20121128141550.GI40452@mdounin.ru> Message-ID: <20121129232500.GY40452@mdounin.ru> Hello! On Wed, Nov 28, 2012 at 06:15:50PM +0400, Maxim Dounin wrote: > Hello! > > On Wed, Nov 28, 2012 at 01:27:04PM +0100, Antonio P.P. Almeida wrote: > > > > Hello! > > > > > I've already proposed removing GLOB_NOSORT to Igor a while ago. 
> > > His position on this is to keep this in sync with the Windows version,
> > > where there are no sort guarantees.
> > >
> > > [...]
> >
> > Can't we just pass that as a config option so that the configure script
> > detects the build environment and, if we're on *NIX, the sorting is enabled?
>
> The codepath in question is unix-only; the question is about user
> experience, which will be different on unix and win32 with
> GLOB_NOSORT removed on unix.
>
> Right now one can't rely on wildcard include ordering, and this is
> consistent for all supported platforms. And "listen ... default"
> should be used to mark the default server if one uses a wildcard include
> to include multiple server blocks listening on the same
> ip:port. With GLOB_NOSORT removed, the behaviour will be
> different on unix (included files will be sorted) and win32
> (included files are not guaranteed to be sorted), which is
> considered bad.
>
> (I personally think that GLOB_NOSORT should be removed anyway.
> I'll talk to Igor again about this.)

I've discussed this with Igor, and this time he finally approved removing GLOB_NOSORT. Committed:

http://trac.nginx.org/nginx/changeset/4944/nginx

Thanks to all for prodding this.

-- 
Maxim Dounin
http://nginx.com/support.html

From agentzh at gmail.com Fri Nov 30 01:52:12 2012
From: agentzh at gmail.com (agentzh)
Date: Thu, 29 Nov 2012 17:52:12 -0800
Subject: nginx+lua reverse proxy empty body
In-Reply-To: References: <50B495D8.8020106@spilgames.com>
Message-ID:

Hello!

On Wed, Nov 28, 2012 at 7:28 AM, Bart van Deenen wrote:
> Hi Agentz

Agentz is not my name; don't call me that. You can either call me agentzh or Yichun.

> But wouldn't the statement
>
> client_body_in_single_buffer on;
>
> cause the whole body of the proxied server to go into ngx.arg[1]?

client_body_in_single_buffer is for *request* bodies while body_filter_by_lua is for *response* bodies. Please do not confuse these two bodies. They're completely different things.
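The request-body/response-body distinction Yichun draws above can be made concrete with a small configuration fragment. This is an illustrative sketch only, not the original poster's setup; the listen port and upstream address are hypothetical:

```nginx
server {
    listen 8080;

    location / {
        # Request side: ask nginx to read the *client request* body into
        # a single memory buffer (this is what the directive affects).
        client_body_in_single_buffer on;
        client_body_buffer_size 64k;

        proxy_pass http://127.0.0.1:8081;  # hypothetical upstream

        # Response side: body_filter_by_lua sees the *upstream response*
        # body, one chunk at a time; ngx.arg[1] may be an empty string.
        body_filter_by_lua '
            ngx.log(ngx.INFO, "chunk of ", #ngx.arg[1],
                    " bytes, eof = ", tostring(ngx.arg[2]))
        ';
    }
}
```

The two directives therefore never see the same data: one shapes how the request is buffered before it reaches the upstream, the other inspects the response on its way back out.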
> And I also don't understand why my example code shouldn't work reliably, even if the proxied data is passed through it in chunks (unless the chunk boundary would accidentally be right in the middle of my short match string).

Yes, I mean exactly the case where the chunk boundary is in the middle of your string. It could happen.

> I've done a very similar setup proxying and modification of a simple website (vandeenensupport.com), and that works perfectly.

Working 99.9% of the time can never imply 100% perfection :) This is just a caveat :)

> I have also noticed that when I add a 'print(ngx.arg[1])' in the first line of the lua section of my example, the html replacement works reliably, no more empty ngx.arg[1]!

ngx.arg[1] could be an empty string by design, as explained in my previous email. Always be prepared for that if you want your code to work reliably. You can always reproduce a "special buf" (with an empty data chunk) with ngx_lua's ngx.flush() and ngx.eof() primitives.

> But that print only goes into the nginx logging, so maybe it's only its timing that has some effect?

Maybe.

Best regards,
-agentzh

From robm at fastmail.fm Fri Nov 30 01:53:03 2012
From: robm at fastmail.fm (Robert Mueller)
Date: Fri, 30 Nov 2012 12:53:03 +1100
Subject: 301 redirect with custom content problem
Message-ID: <1354240383.10341.140661159923181.0F1A33B1@webmail.messagingengine.com>

Hi

I'm trying to set up a permanent redirect from http -> https. As a fallback for the small number of users for whom https is blocked, I'd like to show them a message.

I thought an error_page 301 handler would allow me to do this, but I'm having trouble making it work:

server {
    listen *:80 default;
    rewrite ^ https://example.com$uri permanent;
}

error_page 301 /foo/bar/301.html;

This seems to confuse the redirect: it completely replaces the redirect Location header with foo/bar/301.html rather than actually serving that content.
I tried some alternatives like: error_page 301 @301; location @301 { root /foo/bar/; try_files $uri /301.html; } But again, it replaces the Location header adding /301.html on the end of the redirect hostname rather than actually serving the 301.html content. Is there a reliable way to return custom content for a 301 redirect in a way that doesn't affect the redirect location itself? I'm using nginx 1.2.4. -- Rob Mueller robm at fastmail.fm From agentzh at gmail.com Fri Nov 30 02:15:44 2012 From: agentzh at gmail.com (agentzh) Date: Thu, 29 Nov 2012 18:15:44 -0800 Subject: nginx+lua reverse proxy empty body (weird repeatable behavior) In-Reply-To: References: Message-ID: Hello! On Thu, Nov 29, 2012 at 1:51 AM, Bart van Deenen wrote: > It seems the body_filter_by_lua section is bypassed somehow, it just doesn't receive the proxied data. I have this simplified testcase which at least for my setup is completely repeatable. See below. > With your (exact) code sample, I cannot see any issues with ngx_openresty 1.2.4.9 on Linux x86_64. For a *single* request to location / (note: not multiple!), I'm getting 26 non-empty data chunks: $ grep 'The body is' logs/error.log | wc -l 26 and also one empty data chunk (which is generated by ngx_proxy to solely signal the end of the body): $ grep 'has empty body' logs/error.log | wc -l 1 You can try to scan your error.log on your side. For big enough response bodies, it's common to see multiple data chunks emitted by ngx_proxy to the output body filter chain. And your body_filter_by_lua handler will be called multiple times, one time for one data chunk, for a *single* response. 
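One way to make a string replacement immune to the chunking described above is to buffer the whole response body and substitute only when the last chunk arrives. A hedged sketch, not the original poster's code; the upstream address and the needle/replacement strings are placeholders:

```nginx
location / {
    proxy_pass http://127.0.0.1:8081;  # hypothetical upstream

    body_filter_by_lua '
        -- accumulate chunks; emit nothing until the final one
        local chunk, eof = ngx.arg[1], ngx.arg[2]
        ngx.ctx.buf = (ngx.ctx.buf or "") .. (chunk or "")
        if eof then
            -- safe to substitute now: no chunk boundary can split a match
            ngx.arg[1] = ngx.ctx.buf:gsub("needle", "replacement")
        else
            ngx.arg[1] = nil  -- swallow intermediate chunks
        end
    ';
}
```

Since the body length changes, Content-Length would also have to be cleared in a header filter (for example with header_filter_by_lua) for this to be fully correct.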
BTW, please post such questions to the openresty-en mailing list (https://groups.google.com/group/openresty-en ) instead so that I can see your mails sooner rather than later :) Best regards, -agentzh From julyclyde at gmail.com Fri Nov 30 02:33:06 2012 From: julyclyde at gmail.com (=?UTF-8?B?5Lu75pmT56OK?=) Date: Fri, 30 Nov 2012 10:33:06 +0800 Subject: about error_page and named location Message-ID: For rejecting some unfriendly access, I use 410 status code for them. The config is below: error_page 410 /410; if (xxx) { return 410; } location /410 { more_set_headers "Content-Type: text/html;charset=utf8;"; return 410 '$remote_addr ??????'; } When I access it for testing purpose, I got 410 Gone

The requested resource is no longer available on this server and there is no forwarding address. Please remove all references to this resource.

Powered by Tengine

If I use error_page 410 @410; and location @410 {...}, it works correctly, and serves a page with my own message.

So:
1st, What are the differences between a normal location and a named location, in this context?
2nd, Are there better ways to serve an html page to unfriendly access, and use nginx's variables in the html? I tried return 410 '$remote_addr rejected', but it gave an application/octet-stream response; the browser would download it instead of displaying the message in the browser window. So I have to use the more headers module to set Content-Type.

-- 
Ren Xiaolei
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yaoweibin at gmail.com Fri Nov 30 03:18:15 2012
From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=)
Date: Fri, 30 Nov 2012 11:18:15 +0800
Subject: An error with the docs of $host
Message-ID:

Hi,

In the docs of $host at http://nginx.org/en/docs/http/ngx_http_core_module.html#variables, it says:

$host
"Host" request header field, or the server name matching a request if this field is not present

This is not right for a Host header like www.example.com:1234. The $host variable always strips the port.

-- 
Weibin Yao
Developer @ Server Platform Team of Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ano at bestmx.ru Fri Nov 30 04:22:48 2012
From: ano at bestmx.ru (Andrey N. Oktyabrski)
Date: Fri, 30 Nov 2012 08:22:48 +0400
Subject: Cookie not created until refresh?
In-Reply-To: <6cc49ad12bb1028e1b3f88d266bdfd41.NginxMailingListEnglish@forum.nginx.org>
References: <6cc49ad12bb1028e1b3f88d266bdfd41.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <50B83498.60507@bestmx.ru>

On 11/29/2012 11:14 PM, Broham wrote:

- ~*somewebsite\.com "referrer=bl;Domain=.mysite.com;Max-Age=31536000";
+ .somewebsite\.com "referrer=bl;Domain=.mysite.com;Max-Age=31536000";

From kartik.mistry at gmail.com Fri Nov 30 05:06:39 2012
From: kartik.mistry at gmail.com (Kartik Mistry)
Date: Fri, 30 Nov 2012 10:36:39 +0530
Subject: Separate download directory for stable nginx releases?
Message-ID:

Hi,

Is it possible to put stable releases of nginx into a different directory/folder on the nginx download page?

Instead of: http://nginx.org/download/nginx-1.2.5.tar.gz
Like: http://nginx.org/download/stable/nginx-1.2.5.tar.gz

The reason this is helpful is that we at various distributions (i.e. with my Debian hat on) can keep track of the latest release using the 'watch' file we use in the Package Tracking System. Otherwise, one has to check manually (or watch the mailing list constantly).

Thanks!

-- 
Kartik Mistry | IRC: kart_
{0x1f1f, kartikm}.wordpress.com

From zhuzhaoyuan at gmail.com Fri Nov 30 05:21:54 2012
From: zhuzhaoyuan at gmail.com (Joshua Zhu)
Date: Fri, 30 Nov 2012 13:21:54 +0800
Subject: Random order of configuration file reading
In-Reply-To: <20121129232500.GY40452@mdounin.ru>
References: <3f952971662c0cb1daadcc86593658b1.NginxMailingListEnglish@forum.nginx.org> <20121128122259.GG40452@mdounin.ru> <821496dfba8a21c02b9a6627d0fa099c.squirrel@damiao.org> <20121128141550.GI40452@mdounin.ru> <20121129232500.GY40452@mdounin.ru>
Message-ID:

Hi,

On Fri, Nov 30, 2012 at 7:25 AM, Maxim Dounin wrote:
> Hello!
[...]
> I've discussed this with Igor, and this time he finally approved
> removing GLOB_NOSORT. Committed,

Cool!
Thanks :)

Regards,

-- 
Joshua Zhu
Senior Software Engineer
Server Platforms Team at Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhuzhaoyuan at gmail.com Fri Nov 30 05:32:12 2012
From: zhuzhaoyuan at gmail.com (Joshua Zhu)
Date: Fri, 30 Nov 2012 13:32:12 +0800
Subject: nginx listen to $hostname doesn't work, is it supported?
In-Reply-To: References: Message-ID:

Hi,

On Fri, Nov 30, 2012 at 5:01 AM, coral wrote:
> In my nginx conf file, I have:
>
> listen 80;
> server_name $hostname;
>
> If I do netstat I see that it is listening on 0.0.0.0:80. Is there any
> workaround in nginx.conf to make it listen on $hostname:80, where
> $hostname comes from the system?
>
> I tried multiple combinations with no success so far.

Why not just use 'localhost'? For example:

listen localhost:80;

Regards,

-- 
Joshua Zhu
Senior Software Engineer
Server Platforms Team at Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sheng.zheng at etimestech.jp Fri Nov 30 06:26:12 2012
From: sheng.zheng at etimestech.jp (Sheng.Zheng)
Date: Fri, 30 Nov 2012 15:26:12 +0900
Subject: pseudo-streaming support for H.264/MP3
In-Reply-To: <20121129121253.GL40452@mdounin.ru>
References: <20121127142648.GS40452@mdounin.ru> <072d3d41234850ce1c3f7674590e3ef6@schug.net> <201211280232.47028.vbart@nginx.com> <50B59915.1040609@etimestech.jp> <408DE964-30FE-471E-B0AE-2A6C2FFC3AAC@sysoev.ru> <50B5C274.2060703@etimestech.jp> <3CA467D2-FE9A-40AF-9578-E0EA7067E967@sysoev.ru> <50B5E61D.3050902@etimestech.jp> <20121128143014.GJ40452@mdounin.ru> <50B6C81C.5000800@etimestech.jp> <20121129121253.GL40452@mdounin.ru>
Message-ID: <50B85184.2020904@etimestech.jp>

Thanks, Maxim.

The problem is in the metadata of the mp4 file. The flash player needs an audio codec_tag_string of .mp3 (0x2E6D7033) to indicate that the track contains MP3 audio data.
But ffmpeg adds mp4a (0x6D703461) as the audio codec_tag_string, and mp4a indicates to the flash player that the track contains AAC audio data.

Sheng

On 2012/11/29 21:12, Maxim Dounin wrote:
> Hello!
>
> On Thu, Nov 29, 2012 at 11:27:40AM +0900, Sheng.Zheng wrote:
>
>> Hi, Maxim
>> Here is the sample file.
>> http://dl.dropbox.com/u/8837018/180379674-ff-fs.mp4
>
> This file plays without sound in flash player here even without
> nginx pseudostreaming enabled, so I would suppose it's not an
> nginx fault.
>
> (When playing with mplayer and/or Chrome directly I can hear sound
> on both the unmodified file and with pseudostreaming, i.e. with
> "?start=228.915" added manually.)

From ru at nginx.com Fri Nov 30 07:08:28 2012
From: ru at nginx.com (Ruslan Ermilov)
Date: Fri, 30 Nov 2012 11:08:28 +0400
Subject: An error with the docs of $host
In-Reply-To: References: Message-ID: <20121130070828.GA26095@lo0.su>

On Fri, Nov 30, 2012 at 11:18:15AM +0800, ??? wrote:
> Hi,
>
> In the docs of $host at
> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables, it says:
>
> $host
> "Host" request header field, or the server name matching a request if this
> field is not present
>
> It's not right for a Host header like: www.example.com:1234. The $host
> variable always strips the port.

How's this instead?

%%%
Index: xml/en/docs/http/ngx_http_core_module.xml
===================================================================
--- xml/en/docs/http/ngx_http_core_module.xml (revision 775)
+++ xml/en/docs/http/ngx_http_core_module.xml (working copy)
@@ -2754,8 +2754,10 @@
 $host
-"Host" request header field,
-or the server name matching a request if this field is not present
+in this order of precedence:
+host from the request line, or
+host from the "Host" request header field,
+or the server name matching a request
$hostname %%% From mdounin at mdounin.ru Fri Nov 30 07:43:24 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Nov 2012 11:43:24 +0400 Subject: 301 redirect with custom content problem In-Reply-To: <1354240383.10341.140661159923181.0F1A33B1@webmail.messagingengine.com> References: <1354240383.10341.140661159923181.0F1A33B1@webmail.messagingengine.com> Message-ID: <20121130074324.GC40452@mdounin.ru> Hello! On Fri, Nov 30, 2012 at 12:53:03PM +1100, Robert Mueller wrote: > Hi > > I'm trying to setup a permanent redirect from http -> https. As a > fallback for a small number of users where https is blocked, I'd like to > show them a message. > > I thought an error_page 301 handler would allow me to do this, but I'm > having trouble making it work: > > server { > listen *:80 default; > rewrite ^ https://example.com$uri permanent; > } > > error_page 301 /foo/bar/301.html; > > Seems to confuse the redirect. It seems to completely replace the > redirect Location header with foo/bar/301.html rather than actually > serving that content. This way you'll end up with two 301 redirects due to rewrite being executed again for /foo/bar/301.html. Try this instead: server { listen 80 default; location / { error_page 301 /foo/bar/301.html; return 301 "https://example.com$request_uri"; } location = /foo/bar/301.html { # static } } -- Maxim Dounin http://nginx.com/support.html From mdounin at mdounin.ru Fri Nov 30 07:51:52 2012 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 30 Nov 2012 11:51:52 +0400 Subject: about error_page and named location In-Reply-To: References: Message-ID: <20121130075152.GD40452@mdounin.ru> Hello! On Fri, Nov 30, 2012 at 10:33:06AM +0800, ??? wrote: > For rejecting some unfriendly access, I use 410 status code for them. 
The > config is below: > > error_page 410 /410; > if (xxx) { > return 410; > } > > location /410 { > more_set_headers "Content-Type: text/html;charset=utf8;"; > return 410 '$remote_addr ??????'; > } > > When I access it for testing purpose, I got > > > 410 Gone > >
> The requested resource is no longer available on this server and there
> is no forwarding address. Please remove all references to this
> resource.
>
Powered by Tengine > > > > If I use error_page 410 @410; and location @410{...} , it works correctly, > and serves a page with my own message. > > So, > 1st, What's the differences between norma location and named location, in > this context? Named location is looked up directly, without executing server-level rewrite module directives again (which in your above config will return another 410 error, resulting in default error being returned). > 2nd, Are there better ways to serve a html page to unfriendly access, and > use nginx's variables in the html? I tried return 410 '$remote_addr > rejected' ,but it gave a application/octet-stream response, browser would > download it instead of display the message in browser window. So I have to > use more headers module to set Content-Type. Use text extension to be matched by mime.types, or set default_type correctly for the location in question (http://nginx.org/r/default_type). -- Maxim Dounin http://nginx.com/support.html From ru at nginx.com Fri Nov 30 10:03:18 2012 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 30 Nov 2012 14:03:18 +0400 Subject: about error_page and named location In-Reply-To: References: Message-ID: <20121130100318.GB26095@lo0.su> On Fri, Nov 30, 2012 at 10:33:06AM +0800, ??? wrote: > For rejecting some unfriendly access, I use 410 status code for them. The > config is below: > > error_page 410 /410; > if (xxx) { > return 410; > } > > location /410 { > more_set_headers "Content-Type: text/html;charset=utf8;"; > return 410 '$remote_addr ??????'; > } > > When I access it for testing purpose, I got > > > 410 Gone > >
> The requested resource is no longer available on this server and there
> is no forwarding address. Please remove all references to this
> resource.
>
Powered by Tengine > > > > If I use error_page 410 @410; and location @410{...} , it works correctly, > and serves a page with my own message. > > So, > 1st, What's the differences between norma location and named location, in > this context? This is due to implementation differences in handling internal redirects and named locations. When you "return 410" on the server level, "error_page 410 /410" does an internal redirect to location "/410". This is processed as a new request, including running ngx_http_rewrite_module directives specified on the server level, thus "return 410" fires again. Because "recursive_error_pages" is off, standard error page is returned. (If you turn it on, you'll get 500 instead, due to redirection cycling.) "error_page 410 @410", on the other hand, does a redirect straight to a named location "@410", thus the processing skips "server rewrites" and "find configuration" phases of processing, and you get a custom body for 410 as expected. You can avoid these differences by moving what you now have on the server level to "location /". > 2nd, Are there better ways to serve a html page to unfriendly access, and > use nginx's variables in the html? I tried return 410 '$remote_addr > rejected' ,but it gave a application/octet-stream response, browser would > download it instead of display the message in browser window. So I have to > use more headers module to set Content-Type. 
Content-Type is controlled by the http://nginx.org/r/default_type and http://nginx.org/r/types directives, for example:

: server {
:     default_type application/octet-stream;
:     types {
:         text/html html;
:     }
:
:     error_page 403 @error;
:
:     location @error {
:         default_type text/plain;
:         types {}
:
:         return 200 'error $status\n';
:     }
: }

: $ echo test > html/test.html
: $ curl -i http://localhost:8000/test.html
: HTTP/1.1 200 OK
: Server: nginx/1.3.9
: Date: Fri, 30 Nov 2012 09:53:35 GMT
: Content-Type: text/html
: Content-Length: 5
: Last-Modified: Fri, 30 Nov 2012 09:53:31 GMT
: Connection: keep-alive
: ETag: "50b8821b-5"
: Accept-Ranges: bytes
:
: test
: $ chmod u-r html/test.html
: $ curl -i http://localhost:8000/test.html
: HTTP/1.1 403 Forbidden
: Server: nginx/1.3.9
: Date: Fri, 30 Nov 2012 09:53:46 GMT
: Content-Type: text/plain
: Content-Length: 10
: Connection: keep-alive
:
: error 403

See also http://nginx.org/en/docs/http/ngx_http_ssi_module.html

From yaoweibin at gmail.com Fri Nov 30 10:05:42 2012
From: yaoweibin at gmail.com (=?GB2312?B?0qbOsLHz?=)
Date: Fri, 30 Nov 2012 18:05:42 +0800
Subject: An error with the docs of $host
In-Reply-To: <20121130070828.GA26095@lo0.su>
References: <20121130070828.GA26095@lo0.su>
Message-ID:

Hi Ruslan,

Most of the content is fine for me. Could you add a line stating explicitly that the variable $host excludes the port? I know the host should mean the server name and not include the port. It just confuses me when I use a directive like this:

proxy_set_header Host $host;

But the port is missing. Maybe I should use $http_host instead.

Thanks.

2012/11/30 Ruslan Ermilov
> On Fri, Nov 30, 2012 at 11:18:15AM +0800, ??? wrote:
> > Hi,
> >
> > In the docs of $host at
> > http://nginx.org/en/docs/http/ngx_http_core_module.html#variables, it says:
> >
> > $host
> > "Host" request header field, or the server name matching a request if this
> > field is not present
> >
> > It's not right for a Host header like: www.example.com:1234.
> > The $host variable always strips the port.
>
> How's this instead?
>
> %%%
> Index: xml/en/docs/http/ngx_http_core_module.xml
> ===================================================================
> --- xml/en/docs/http/ngx_http_core_module.xml (revision 775)
> +++ xml/en/docs/http/ngx_http_core_module.xml (working copy)
> @@ -2754,8 +2754,10 @@
>  $host
> -"Host" request header field,
> -or the server name matching a request if this field is not present
> +in this order of precedence:
> +host from the request line, or
> +host from the "Host" request header field,
> +or the server name matching a request
>  $hostname
> %%%
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Weibin Yao
Developer @ Server Platform Team of Taobao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ru at nginx.com Fri Nov 30 10:36:07 2012
From: ru at nginx.com (Ruslan Ermilov)
Date: Fri, 30 Nov 2012 14:36:07 +0400
Subject: An error with the docs of $host
In-Reply-To: References: <20121130070828.GA26095@lo0.su>
Message-ID: <20121130103607.GC26095@lo0.su>

On Fri, Nov 30, 2012 at 06:05:42PM +0800, ??? wrote:
> Hi Ruslan,
>
> Most of the content is fine for me. Could you add a line stating explicitly
> that the variable $host excludes the port? I know the host should mean the
> server name and not include the port.

: GET http://example.com:12345/uri HTTP/1.1
: Host: example.net:54321

results in $host being set to "example.com", while

: GET /uri HTTP/1.1
: Host: example.net:54321

results in $host being set to "example.net", as required by
http://tools.ietf.org/html/rfc2616#section-5.2

If I mention port stripping in case #2, should I also mention stripping of the requestURI to extract the "host" part? I'd like to avoid detailing it too much, and in my opinion "host from the ..." fits both cases.

> It just confuses me when I use the
> directive like this:
>
> proxy_set_header Host $host;
>
> But the port is missing.

That is understood. The previous description was incorrect. Thanks for noticing.

> Maybe I should use the $http_host instead.

It depends on what you need.

> 2012/11/30 Ruslan Ermilov
[...]
> > How's this instead?
> >
> > %%%
> > Index: xml/en/docs/http/ngx_http_core_module.xml
> > ===================================================================
> > --- xml/en/docs/http/ngx_http_core_module.xml (revision 775)
> > +++ xml/en/docs/http/ngx_http_core_module.xml (working copy)
> > @@ -2754,8 +2754,10 @@
> >
> >  $host
> >
> > -"Host" request header field,
> > -or the server name matching a request if this field is not present
> > +in this order of precedence:
> > +host from the request line, or
> > +host from the "Host" request header field,
> > +or the server name matching a request
> >  $hostname
> > %%%

From cherian.in at gmail.com Fri Nov 30 15:32:00 2012
From: cherian.in at gmail.com (Cherian Thomas)
Date: Fri, 30 Nov 2012 21:02:00 +0530
Subject: Post mortem of a HackerNews Launch & references to mistakes made with Nginx config
Message-ID:

Hello family,

I wrote about some of the mistakes I made while doing a HN launch for my new startup Cucumbertown:

http://www.gigpeppers.com/post-mortem-of-a-failed-hackernews-launch/

It has references to Nginx and how making a simple mistake in config cost me a server crash that lasted for 20-30 minutes. *Hope this will benefit some of the new Nginx users who are trying to scale.*

On the positive side, if not for Nginx we wouldn't have lasted 10 minutes of peak traffic at 1000 users/sec+. Besides that, this blog gigpeppers.com is hosted on a 128MB single core prgrmr.com machine and has been serving peak HackerNews front page traffic for almost 17 hours now, thanks to Nginx.

http://news.ycombinator.com/item?id=4847665

Thank you Igor, Maxim, Antonio, Valentin and all others for this software marvel.

- Cherian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From julyclyde at gmail.com Fri Nov 30 16:14:23 2012
From: julyclyde at gmail.com (=?GB2312?B?yM7P/sDa?=)
Date: Sat, 1 Dec 2012 00:14:23 +0800
Subject: about error_page and named location
In-Reply-To: <20121130100318.GB26095@lo0.su>
References: <20121130100318.GB26095@lo0.su>
Message-ID:

Thank you all. There is still much for me to learn.

-- 
??????????????????? from iPad2 3G

? 2012/11/30???6:03?Ruslan Ermilov ???
> On Fri, Nov 30, 2012 at 10:33:06AM +0800, ??? wrote:
>> For rejecting some unfriendly access, I use 410 status code for them. The
>> config is below:
>>
>> error_page 410 /410;
>> if (xxx) {
>>     return 410;
>> }
>>
>> location /410 {
>>     more_set_headers "Content-Type: text/html;charset=utf8;";
>>     return 410 '$remote_addr ??????';
>> }
>>
>> When I access it for testing purpose, I got
>>
>> 410 Gone
>>
>> The requested resource is no longer available on this server and there
>> is no forwarding address. Please remove all references to this
>> resource.
>>
Powered by Tengine >> >> >> >> If I use error_page 410 @410; and location @410{...} , it works correctly, >> and serves a page with my own message. >> >> So, >> 1st, What's the differences between norma location and named location, in >> this context? > > This is due to implementation differences in handling internal redirects > and named locations. > > When you "return 410" on the server level, "error_page 410 /410" does > an internal redirect to location "/410". This is processed as a new > request, including running ngx_http_rewrite_module directives specified > on the server level, thus "return 410" fires again. Because > "recursive_error_pages" is off, standard error page is returned. > (If you turn it on, you'll get 500 instead, due to redirection cycling.) > > "error_page 410 @410", on the other hand, does a redirect straight to a > named location "@410", thus the processing skips "server rewrites" and > "find configuration" phases of processing, and you get a custom body > for 410 as expected. > > You can avoid these differences by moving what you now have on the > server level to "location /". > >> 2nd, Are there better ways to serve a html page to unfriendly access, and >> use nginx's variables in the html? I tried return 410 '$remote_addr >> rejected' ,but it gave a application/octet-stream response, browser would >> download it instead of display the message in browser window. So I have to >> use more headers module to set Content-Type. 
> Content-Type is controlled by the http://nginx.org/r/default_type and
> http://nginx.org/r/types directives, for example:
>
> : server {
> :     default_type application/octet-stream;
> :     types {
> :         text/html html;
> :     }
> :
> :     error_page 403 @error;
> :
> :     location @error {
> :         default_type text/plain;
> :         types {}
> :
> :         return 200 'error $status\n';
> :     }
> : }
>
> : $ echo test > html/test.html
> : $ curl -i http://localhost:8000/test.html
> : HTTP/1.1 200 OK
> : Server: nginx/1.3.9
> : Date: Fri, 30 Nov 2012 09:53:35 GMT
> : Content-Type: text/html
> : Content-Length: 5
> : Last-Modified: Fri, 30 Nov 2012 09:53:31 GMT
> : Connection: keep-alive
> : ETag: "50b8821b-5"
> : Accept-Ranges: bytes
> :
> : test
> : $ chmod u-r html/test.html
> : $ curl -i http://localhost:8000/test.html
> : HTTP/1.1 403 Forbidden
> : Server: nginx/1.3.9
> : Date: Fri, 30 Nov 2012 09:53:46 GMT
> : Content-Type: text/plain
> : Content-Length: 10
> : Connection: keep-alive
> :
> : error 403
>
> See also http://nginx.org/en/docs/http/ngx_http_ssi_module.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Fri Nov 30 16:59:17 2012
From: nginx-forum at nginx.us (gduarte)
Date: Fri, 30 Nov 2012 11:59:17 -0500
Subject: nginx with --with-http_perl_module compilation problems
Message-ID:

Hello people,

Yesterday I started trying to compile nginx with http_perl_module, but at the end of the compilation the linker gets confused about the architecture it is linking for. I'm using Mac OS/X 10.6.8.
Here is part of the output I get from the terminal:

gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE -I src/core -I src/event -I src/event/modules -I src/os/unix -I ../pcre-8.31 -I objs -I src/http -I src/http/modules -I src/http/modules/perl \
    -o objs/src/http/modules/perl/ngx_http_perl_module.o \
    src/http/modules/perl/ngx_http_perl_module.c
/usr/libexec/gcc/powerpc-apple-darwin10/4.2.1/as: assembler (/usr/bin/../libexec/gcc/darwin/ppc/as or /usr/bin/../local/libexec/gcc/darwin/ppc/as) for architecture ppc not installed
Installed assemblers are:
/usr/bin/../libexec/gcc/darwin/x86_64/as for architecture x86_64
/usr/bin/../libexec/gcc/darwin/i386/as for architecture i386
lipo: can't open input file: /var/folders/QZ/QZCif6epHb85Dwl9qHtI9E+++TM/-Tmp-//ccbScbid.out (No such file or directory)
make[1]: *** [objs/src/http/modules/perl/ngx_http_perl_module.o] Error 2
make: *** [build] Error 2

Has anyone seen this before? It seems to be a problem with the ppc linking, but I don't know why it is trying to link against the ppc arch... Any advice is welcome :)

PS: without the perl module, the compilation is clean...

Thanks,
Gabriel Duarte

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,233440,233440#msg-233440

From osa at FreeBSD.org.ru Fri Nov 30 17:13:13 2012
From: osa at FreeBSD.org.ru (Sergey A. Osokin)
Date: Fri, 30 Nov 2012 21:13:13 +0400
Subject: nginx with --with-http_perl_module compilation problems
In-Reply-To: References: Message-ID: <20121130171313.GA4302@FreeBSD.org.ru>

Hi Gabriel,

it looks like the problem is caused by your perl. Could you try a different perl (a modern version, 5.16.2), compiled without the ppc arch?
On Fri, Nov 30, 2012 at 11:59:17AM -0500, gduarte wrote: > Hello people, > yesterday I started trying to compile nginx with http_perl_module, but, at > the end of the compilation, the linkeditor gets confused about the > archtecture it's being linked. I'm using Mac OS/X 10.6.8. Here is part of > the output I get from terminal: > > gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g > -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN > -fno-strict-aliasing -I/usr/local/include > -I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE -I src/core > -I src/event -I src/event/modules -I src/os/unix -I ../pcre-8.31 -I objs -I > src/http -I src/http/modules -I src/http/modules/perl \ > -o objs/src/http/modules/perl/ngx_http_perl_module.o \ > src/http/modules/perl/ngx_http_perl_module.c > /usr/libexec/gcc/powerpc-apple-darwin10/4.2.1/as: assembler > (/usr/bin/../libexec/gcc/darwin/ppc/as or > /usr/bin/../local/libexec/gcc/darwin/ppc/as) for architecture ppc not > installed > Installed assemblers are: > /usr/bin/../libexec/gcc/darwin/x86_64/as for architecture x86_64 > /usr/bin/../libexec/gcc/darwin/i386/as for architecture i386 > lipo: can't open input file: > /var/folders/QZ/QZCif6epHb85Dwl9qHtI9E+++TM/-Tmp-//ccbScbid.out (No such > file or directory) > make[1]: *** [objs/src/http/modules/perl/ngx_http_perl_module.o] Error 2 > make: *** [build] Error 2 > > Someone have seem this previously? It seems it's a problem with the ppc > linkedition, but I don't know why it is trying to link against ppc arch... > Any advice is welcome :) > > PS: without the perl module, the compilation is clean... -- Sergey A. Osokin osa at FreeBSD.org From francis at daoine.org Fri Nov 30 22:38:53 2012 From: francis at daoine.org (Francis Daly) Date: Fri, 30 Nov 2012 22:38:53 +0000 Subject: Is it possible using multiple directive on different root location? 
(Without Symlinks)
In-Reply-To: <1354018101825-7582658.post@n2.nabble.com>
References: <1335864986389-7516384.post@n2.nabble.com> <1335896925.4775.25.camel@portable-evil> <1335925217815-7518776.post@n2.nabble.com> <1335934594.4775.45.camel@portable-evil> <1336049994106-7523526.post@n2.nabble.com> <20120503224309.GB11895@craic.sysops.org> <1336712290522-7549205.post@n2.nabble.com> <20120511080357.GH457@craic.sysops.org> <1354018101825-7582658.post@n2.nabble.com>
Message-ID: <20121130223853.GX18139@craic.sysops.org>

On Tue, Nov 27, 2012 at 04:08:21AM -0800, antituhan wrote:

Hi there,

> Change $document_root$fastcgi_script_name into $document_root$1, but I still
> got 403 Forbidden when accessing http://static.antituhan.com/test/tehbotol.php

What file do you want the fastcgi server to process when you request http://static.antituhan.com/test/tehbotol.php?

If it is /something/test/tehbotol.php, then put "root /something" inside the location{} block and use $document_root$fastcgi_script_name.

If it isn't, then you'll need to build what it is in some other way.

f
-- 
Francis Daly        francis at daoine.org
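A minimal sketch of the layout Francis describes, assuming the file really lives at /something/test/tehbotol.php; the root path and the FastCGI socket below are hypothetical stand-ins for the poster's real setup:

```nginx
# If a request for /test/tehbotol.php should be processed from
# /something/test/tehbotol.php, set root inside the location block
# and let $document_root$fastcgi_script_name build the full path.
location ~ \.php$ {
    root /something;                           # assumed filesystem prefix
    fastcgi_pass unix:/var/run/php-fpm.sock;   # hypothetical FastCGI backend
    include fastcgi_params;
    # $document_root$fastcgi_script_name => /something/test/tehbotol.php
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

The key point is that root and SCRIPT_FILENAME are derived from the same $document_root, so there is no need to rebuild the path with a regex capture like $1.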